Peter BabičZola2024-03-04T00:00:00+00:00https://peterbabic.dev/atom.xmlHow to find serial number on Casio fx-991CE X2024-03-03T00:00:00+00:002024-03-03T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/how-to-find-serial-number-on-casio-fx-991cex/<p>I have just bought myself a new scientific calculator, a Casio fx-991CE X
CLASSWIZ series, which is a twin of fx-991EX adapted for central Europe.
This new inventory item serves as a replacement for my old 991ES, which
got slight damage on the LCD display. It is still almost perfectly usable,
but sometimes a little harder to read on its right side, which was getting
more and more irritating.</p>
<p>During my studies at university, many semesters made direct use of the
991ES. Almost every class I attended had some part that could use one of
the endless array of functions it provides, be it mathematics, of which I
had a total of 13 classes, electrical engineering and electronics, physics,
or even materials science. I thus made sure I knew and understood its
functions very well; I carried the printed manual around and studied it
whenever I had some time in class.</p>
<p>When choosing a successor to the 991ES, the motivation was to find
something very similar, but newer. I was able to pick up the few new
functions the 991CE X has over the 991ES in under an hour. Most notable are
the <code>[FACT]</code> function, which splits numbers into their respective
factors (conditions apply), and a mini spreadsheet editor, which is pretty
impressive given that this is a calculator.</p>
<p>When it arrived, I wanted to note its serial number in the list of
equipment I curate. And herein lies the problem: I could not find it! It
was not on any sticker on the box, it was not on a sticker on the
calculator itself, it was nowhere to be found in the
<a href="https://www.reddit.com/r/calculators/comments/vx2mjj/comment/iftvrhj/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button">battery compartment</a>,
nor was it in its bowels when disassembled.</p>
<h2 id="qr-code-function">QR Code function</h2>
<p>Apart from being much faster and having a four-line display, instead of
the two-line one on the 991ES, the 991CE X has another new trick up its
sleeve: a QR code generator.</p>
<p>The QR code generator is primarily intended to share results, formulas,
and even spreadsheets with the outside world. The request created by the QR
code first goes to casio.com, specifically wes.casio.com, and then gets
redirected to classpad.net, an external service operated by Casio. This is
the correct procedure when creating links: always point the link first to
something you are sure you will control in the future, and only then
redirect elsewhere. Should Casio at some point sell or shut down
classpad.net, they would lose control over the links if they had pointed
them straight to it.</p>
<p>Anyway, while searching around for information on how to dig out the
serial number of the calculator, I first stumbled upon an official
<a href="https://edu.casio.com/en/authenticity_check_system2/">authenticity check</a>,
which confirms whether your device is original and genuine, not a
knock-off. Here's how it works: you press <code>[MENU]</code> and then <code>[SHIFT] + [QR]</code>. This
creates a request that again points to wes.casio.com and looks like this:</p>
<p><code>https://wes.casio.com/math/index.php?q=I-QQQQ+U-XXXXXXXXXXXX+M-YYYYYYYYYY+S-ZZZZZ</code></p>
<p>All the segments marked by the letters Q, X, Y, and Z are alphanumeric,
and all except the Y segment are hexadecimal. I thought that maybe the
serial number was one of these parameters, but I could not tell for sure. I
thus saved this link as the "serial number" in my list and went on with my
life.</p>
<h2 id="youtube-algorithm-to-the-rescue">YouTube algorithm to the rescue</h2>
<p>The next day I was looking for some other information on YouTube when,
suddenly, among my suggestions there appeared a video named
<a href="https://www.youtube.com/watch?v=a5d0L-oAHp4">Casio calculator fx-991EX hidden diagnostic test mode functions</a>.
I must admit I had never known about this diagnostic test for Casio
calculators. Since it is "hidden", it was obviously not in the manual, and
it never occurred to me to search for such a feature back then.</p>
<p>I was curious, and the video was only around 2 minutes long, so I took
a chance and clicked it. Oh boy, it was a golden nugget hidden in plain
sight. The video starts with the basic display and keyboard diagnostic test
initiated by pressing <code>7 + [SHIFT] + [ON]</code> together. Pressing <code>8</code> then brings
up a screen containing the following table:</p>
<table><thead><tr><th>Key</th><th>KI</th><th>KO</th><th>Key</th><th>KI</th><th>KO</th></tr></thead><tbody>
<tr><td>[1]</td><td>KI1</td><td>K01</td><td>[SHIFT]</td><td>KI8</td><td>K01</td></tr>
<tr><td>[5]</td><td>KI2</td><td>K02</td><td>[9]</td><td>KI3</td><td>K03</td></tr>
<tr><td>[)]</td><td>KI4</td><td>K04</td><td>[log]</td><td>KI6</td><td>K05</td></tr>
<tr><td>[logab]</td><td>KI7</td><td>K06</td><td>[0]</td><td>KI5</td><td>K07</td></tr>
</tbody></table>
<p>By pressing the corresponding keys you can match all of them, after
which the calculator displays the message <strong>Solar MODEL OK!</strong>. Pressing <code>[AC]</code>
then returns the calculator to normal operation. This is not terribly
interesting. However, the video goes on.</p>
<h2 id="diagnostic-modes">Diagnostic modes</h2>
<p>As it turns out, the 991EX, and the 991CE X as well, has at least two
diagnostic modes. Another one is initiated by pressing <code>7 + [SHIFT] + [ON]</code>,
but then continuing by pressing <code>9</code> this time.</p>
<p>This time the display prints out <code>8.8E15</code>. After pressing <code>[SHIFT]</code>
multiple times, it then lands on the following:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>CY-QQQF VerF
</span><span>
</span><span>
</span><span>Press AC
</span></code></pre>
<p>Interestingly, the QQQ segment matches the number from the link to the
authenticity check. Now, as seen in the
<a href="https://youtu.be/a5d0L-oAHp4?si=JfpM8fyv_mBuhoGJ&t=97">video at the <code>1:37</code> mark</a>,
by pressing <code>[MENU]</code> the following screen can be seen:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>CY-QQQF VerF
</span><span>SUM AAAA OK
</span><span>P00 Read OK
</span><span>Press AC
</span></code></pre>
<p>VerF should mark the latest version at the time of writing. Now the
only thing we need is to press <code>[AC]</code> one more time:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>[Serial number]
</span><span>XXXXXXXXXXXX
</span><span>EXIT:[AC]
</span></code></pre>
<p>Bingo! Curiously, this information is presented in a three-line display
mode that is unusual for a 991CE X. The number matches the X segment from
the link, so the serial number is sent there as-is! Unfortunately, I have
not found out what the data in the Y and Z segments are for. The Y segment
is almost all zeroes, while the Z segment probably contains some checksum;
otherwise, it appears to me, they would be validating authenticity entirely
by the serial number and version alone, which seems unlikely, especially
given that all four segments are sent. The Z segment does not match the A
segment from the previous diagnostic screen, though, and that one probably
is some checksum, given the word SUM that precedes it.</p>
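<p>Since the X segment (the one prefixed with <code>U-</code>) turned out to be
the serial number sent verbatim, it can be pulled straight out of a scanned
QR link. A small sketch; the link and segment values below are invented
placeholders, not a real device's data:</p>

```shell
# Hypothetical QR link as scanned from the calculator -- all segment
# values here are made up for illustration
link='https://wes.casio.com/math/index.php?q=I-4050+U-0123456789AB+M-0000000000+S-1A2B3'

# The U- segment carries the serial number; cut it out of the query string
serial=$(printf '%s' "$link" | grep -oE 'U-[0-9A-Z]+' | cut -c3-)
echo "$serial"    # prints 0123456789AB
```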
<h2 id="keyboard-check">Keyboard check</h2>
<p>By pressing <code>[AC]</code> one more time, the display shows 00 and the
calculator is now in keyboard check mode. This is not a terribly useful
mode, as it does not present any additional information to us. I have found
that this mode is very similar, if not identical, to the
<a href="https://www.rskey.org/~mwsebastian/selftest/casio_test.htm#fx300es">diagnostic mode for 991ES</a>.
However, the check procedure itself is weirdly unintuitive, so for those
playing along at home, it is worth checking out.</p>
<p>Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://www.quora.com/What-are-VerA-VerB-VerE-and-VerF-on-a-Casio-scientific-calculator">https://www.quora.com/What-are-VerA-VerB-VerE-and-VerF-on-a-Casio-scientific-calculator</a></li>
<li><a href="https://www.reddit.com/r/calculators/comments/17x917h/casio_991_ex/">https://www.reddit.com/r/calculators/comments/17x917h/casio_991_ex/</a></li>
<li><a href="https://wes.casio.com/manual/cs/fx991cex/gettingstarted.html">https://wes.casio.com/manual/cs/fx991cex/gettingstarted.html</a></li>
</ul>
Faktury-online.com backup as a Github Action2024-02-06T00:00:00+00:002024-02-06T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/faktury-online-backup-as-github-action/<p>The path towards good-enough invoicing software was quite thorny, I
would say. I had been using a self-hosted
<a href="https://github.com/crater-invoice/crater">Craterapp</a> instance for a year
and a half. Crater worked almost fine; it had no major issue. But it had a
lot of small-ish issues that bugged me a lot. I mean, it was livable, but,
hey, this is administrative work. It is not very fun, usually boring and
tedious. Using software that makes this even more painful, even though just
a little, meant death by a thousand cuts for me.</p>
<p>I had been looking for a solution to this problem for some time. I
tried the Revolut Business invoicing platform, which was very limited in
features. The only upside it had was automatic payment pairing, which is
nice, but for a low volume of invoices not a deciding factor. Then I found
that PayPal has quite a mature invoicing platform built in as well and
fiddled with it for a bit. I would have almost settled there, if not for a
recommendation from a friend.</p>
<h2 id="enter-faktury-online-com">Enter faktury-online.com</h2>
<p>One year ago I started using <a href="https://faktury-online.com">https://faktury-online.com</a> to handle my
invoices and price offers. At first, I was skeptical; I do not even
remember why. But since it is made by a Slovak team, I instantly fell in
love, because it has all the bureaucratic hurdles of our country solved,
unlike all the aforementioned options, which were either very general or
optimized mostly for the US.</p>
<p>As the year passed, I found no major obstacle in that software. There
was just one catch: I had no backup of the data, apart from the PDFs that I
keep in electronic form and the accountant keeps in their archive.
Disclaimer: I am not associated in any way with faktury-online.com and I
have never received any sponsorship from them. If you like them, just go
use them and pay them; the first year is free.</p>
<p>Back to the backup problem. They offer a backup service, which costs
around twice the base usage fee, if I am not mistaken. That is still in the
low tens for a whole year, so not expensive at all, but they also offer an
API. After considering its abilities, I decided I would do the backup
myself via the API.</p>
<h2 id="backup-via-api">Backup via API</h2>
<p>This recipe needed these ingredients:</p>
<ol>
<li>a script, either in PHP or JS, that will call the API and grab all the
data</li>
<li>a way to store the data, preferably as a JSON file</li>
<li>a way to call the script whenever I make a change, or at least
periodically</li>
</ol>
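<p>The three ingredients map fairly directly onto a workflow file. A rough
sketch follows; the script name, secret name, and file paths here are
assumptions for illustration, not taken from the template repository:</p>

```yml
# .github/workflows/backup.yml -- rough sketch, names are illustrative
name: faktury-online backup
on:
  schedule:
    - cron: "0 3 * * *" # run daily at 03:00 UTC
  workflow_dispatch: # also allow manual runs
jobs:
  backup:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # the script calling the API; name and secret are assumptions
      - run: node backup.js
        env:
          FO_API_KEY: ${{ secrets.FO_API_KEY }}
      - name: Commit the JSON dump if it changed
        run: |
          git config user.name "github-actions"
          git config user.email "actions@github.com"
          git add backup.json
          git diff --cached --quiet || git commit -m "automated backup"
          git push
```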
<p>This was a perfect opportunity to seize GitHub Actions. I have a lot of
free GitHub Actions minutes just sitting there, and they could be doing
some useful automation. But I had not learned how to use them so far, so I
pushed myself over the weekend to do it.</p>
<p>The result works well for my needs and can be seen in this template
<a href="https://github.com/peterbabic/faktury-online-backup-template">repository</a>.
I am not going to explain here what it does or how it does it, as it is
really a crude solution, but someone else could benefit from it and save
time. If you are lucky, the README there already contains more information
on how to use it. If not, please ping me somewhere and I will try to update
it. Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://docs.github.com/en/actions/creating-actions/creating-a-javascript-action">https://docs.github.com/en/actions/creating-actions/creating-a-javascript-action</a></li>
<li><a href="https://docs.github.com/en/actions/quickstart">https://docs.github.com/en/actions/quickstart</a></li>
<li><a href="https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions#accessing-your-secrets">https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions#accessing-your-secrets</a></li>
<li><a href="https://docs.github.com/en/actions/using-workflows/manually-running-a-workflow">https://docs.github.com/en/actions/using-workflows/manually-running-a-workflow</a></li>
<li><a href="https://jasonet.co/posts/scheduled-actions/">https://jasonet.co/posts/scheduled-actions/</a></li>
<li><a href="https://joht.github.io/johtizen/build/2022/01/20/github-actions-push-into-repository.html">https://joht.github.io/johtizen/build/2022/01/20/github-actions-push-into-repository.html</a></li>
<li><a href="https://www.faktury-online.com/faktury-online-api/manual">https://www.faktury-online.com/faktury-online-api/manual</a></li>
<li><a href="https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows">https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows</a></li>
</ul>
I replaced my Opel Astra K Navi900 display2024-01-21T00:00:00+00:002024-01-21T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/replaced-opel-astrak-k-navi-900-display/<p>I currently drive a 2016 Opel Astra K and encountered the following issue.
The right side of the display went slightly darker, and the display started
flashing vigorously after the car started. The flashing stopped after a few
minutes of driving, but the colors were significantly affected anyway.
Sometimes the display did not start at all and stayed all black, but the
digitizer (the touch layer at the front) was still active. I have read
somewhere that seemingly random touches could then mistakenly switch your
main steering wheel display to a non-Latin language, causing further
trouble.</p>
<p>I found a few guides and videos claiming it was not that hard to
replace. The panel model is LQ080Y5DZ10 or LQ080Y5DZ06; I have not found
out what the difference between the two is, and for all I know they seem
identical. Stockpiles of these are available at internet marketplaces for
around 60 EUR including shipping. They were made by a company named Sharp,
and the described issue is common among them after a few years of
service.</p>
<p>The consensus is to either ask a company in the UK for a replacement
and receive a brand-new panel, now made by LG, which lasts much longer but
also costs at least 300 EUR, or to replace the display every few years with
the Sharp model. I went with the latter.</p>
<p>Indeed, taking it out of the car was not hard; it required just a
quality pry tool and undoing two hexagonal screws. I ordered the display
without the digitizer attached, because my digitizer was not broken.
Separating the old flashing display from the perfectly working digitizer
proved far harder for my skill level than I thought. I mean, the separation
was easy, but removing the residual glue was extremely time-consuming. I
was using isopropyl alcohol, but it does not actually remove the glue, just
softens it, leaving tiny glue smears everywhere. After cleaning it all off,
I connected the digitizer and the new display together with strips of
double-sided tape. Links to what I found useful while doing this are below.
Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://www.astrakforums.co.uk/threads/astra-k-2017-navi900-intellilink-screen-flickering.7851/">https://www.astrakforums.co.uk/threads/astra-k-2017-navi900-intellilink-screen-flickering.7851/</a></li>
<li><a href="https://www.astrakforums.co.uk/threads/disassembly-and-re-assembly-of-screen-of-navi-900-intellink.6822/">https://www.astrakforums.co.uk/threads/disassembly-and-re-assembly-of-screen-of-navi-900-intellink.6822/</a></li>
<li><a href="https://www.astrakforums.co.uk/threads/display-of-astra-k-sports-tourer-suddenly-behaves-very-strange.8411/">https://www.astrakforums.co.uk/threads/display-of-astra-k-sports-tourer-suddenly-behaves-very-strange.8411/</a></li>
<li><a href="https://www.youtube.com/watch?v=-fQ3bNwuw5Q">https://www.youtube.com/watch?v=-fQ3bNwuw5Q</a></li>
<li><a href="https://www.youtube.com/watch?v=AdT9X9ND9VA">https://www.youtube.com/watch?v=AdT9X9ND9VA</a></li>
<li><a href="https://www.youtube.com/watch?v=MF0nMKj3xSE">https://www.youtube.com/watch?v=MF0nMKj3xSE</a></li>
<li><a href="https://www.club-opel.com/forum-tema/podsviceni-obrazovky-240057?select=240058">https://www.club-opel.com/forum-tema/podsviceni-obrazovky-240057?select=240058</a></li>
</ul>
Post checkout composer install hook2023-09-26T00:00:00+00:002023-09-26T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/post-checkout-composer-install-hook/<p>Sometimes when working on a PHP project where many branches potentially
sit on incompatible packages, it can be a pain to always remember to
manually run <code>composer i</code>, or whatever docker alternative or alias
one might be using. After forgetting, error messages appear, cluttering the
logs in front of everyone watching while simultaneously wasting time.</p>
<p>I tried to modify this Stack Overflow
<a href="https://stackoverflow.com/a/20892987/1972509">answer</a> to see if this could
be comfortably automated via git hooks. Here are the original steps from
the answer, for the record. Start by creating the <code>post-checkout</code> git hook
with the executable flag:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">touch</span><span> .git/hooks/post-checkout
</span><span style="color:#bf616a;">chmod</span><span> u+x .git/hooks/post-checkout
</span></code></pre>
<p>Paste this contents there:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#65737e;">#!/bin/bash
</span><span>
</span><span style="color:#96b5b4;">set </span><span style="color:#bf616a;">-e
</span><span>
</span><span style="color:#96b5b4;">printf </span><span>'</span><span style="color:#a3be8c;">\npost-checkout hook\n\n</span><span>'
</span><span>
</span><span style="color:#bf616a;">prevHEAD</span><span>=$</span><span style="color:#bf616a;">1
</span><span style="color:#bf616a;">newHEAD</span><span>=$</span><span style="color:#bf616a;">2
</span><span style="color:#bf616a;">checkoutType</span><span>=$</span><span style="color:#bf616a;">3
</span><span>
</span><span style="color:#96b5b4;">[[ </span><span>$</span><span style="color:#bf616a;">checkoutType </span><span>== 1 </span><span style="color:#96b5b4;">]] </span><span>&& </span><span style="color:#bf616a;">checkoutType</span><span>='</span><span style="color:#a3be8c;">branch</span><span>' ||
</span><span> </span><span style="color:#bf616a;">checkoutType</span><span>='</span><span style="color:#a3be8c;">file</span><span>' ;
</span><span>
</span><span style="color:#96b5b4;">echo </span><span>'</span><span style="color:#a3be8c;">Checkout type: </span><span>'$</span><span style="color:#bf616a;">checkoutType
</span><span style="color:#96b5b4;">echo </span><span>'</span><span style="color:#a3be8c;"> prev HEAD: </span><span>'`</span><span style="color:#bf616a;">git</span><span> name-rev</span><span style="color:#bf616a;"> --name-only </span><span>$</span><span style="color:#bf616a;">prevHEAD</span><span>`
</span><span style="color:#96b5b4;">echo </span><span>'</span><span style="color:#a3be8c;"> new HEAD: </span><span>'`</span><span style="color:#bf616a;">git</span><span> name-rev</span><span style="color:#bf616a;"> --name-only </span><span>$</span><span style="color:#bf616a;">newHEAD</span><span>`
</span></code></pre>
<p>This at least got me on the right track. Now let's transform it to
actually run <code>composer install</code> when a different branch is checked
out. Note that we do not really need to distinguish whether a file or a
branch is being checked out, because checking out a file is not that
frequent, and running composer repeatedly is designed not to cause
problems:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#65737e;">#!/bin/bash
</span><span>
</span><span style="color:#96b5b4;">set </span><span style="color:#bf616a;">-e
</span><span>
</span><span style="color:#bf616a;">prevHEAD</span><span>=$</span><span style="color:#bf616a;">1
</span><span style="color:#bf616a;">newHEAD</span><span>=$</span><span style="color:#bf616a;">2
</span><span>
</span><span style="color:#b48ead;">if </span><span style="color:#96b5b4;">[ </span><span>"$</span><span style="color:#bf616a;">newHEAD</span><span>" != "$</span><span style="color:#bf616a;">prevHEAD</span><span>" </span><span style="color:#96b5b4;">]</span><span>; </span><span style="color:#b48ead;">then
</span><span> </span><span style="color:#bf616a;">composer</span><span> i
</span><span style="color:#b48ead;">fi
</span></code></pre>
<p>Now test with <code>git checkout somebranch</code>. Works? Yes. Comfortable? Hell no!
The main problem here is that it blocks your terminal while composer is
installing. Let's incorporate some advice from
<a href="https://stackoverflow.com/a/17733087/1972509">another answer</a> explaining
how to run a long-running command in the background:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">long-running-command </span><span>>&- </span><span style="color:#d08770;">2</span><span>>&- &
</span></code></pre>
<p>Alternatively, the same can be done with the following; choose
whichever syntax suits you best:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">long-running-command </span><span>>/dev/null </span><span style="color:#d08770;">2</span><span>>&</span><span style="color:#d08770;">1 </span><span>&
</span></code></pre>
<p>The above works by redirecting both <code>stderr</code> and <code>stdout</code> into the grinder
and adding <code>&</code> at the end, which puts the command into the
background. Here's the final script:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#65737e;">#!/bin/bash
</span><span>
</span><span style="color:#96b5b4;">set </span><span style="color:#bf616a;">-e
</span><span>
</span><span style="color:#bf616a;">prevHEAD</span><span>=$</span><span style="color:#bf616a;">1
</span><span style="color:#bf616a;">newHEAD</span><span>=$</span><span style="color:#bf616a;">2
</span><span>
</span><span style="color:#b48ead;">if </span><span style="color:#96b5b4;">[ </span><span>"$</span><span style="color:#bf616a;">newHEAD</span><span>" != "$</span><span style="color:#bf616a;">prevHEAD</span><span>" </span><span style="color:#96b5b4;">]</span><span>; </span><span style="color:#b48ead;">then
</span><span> </span><span style="color:#96b5b4;">printf </span><span>"</span><span style="color:#a3be8c;">Post-checkout 'composer install' hook active.\n</span><span>"
</span><span> </span><span style="color:#bf616a;">composer</span><span> i >/dev/null </span><span style="color:#d08770;">2</span><span>>&</span><span style="color:#d08770;">1 </span><span>&
</span><span style="color:#b48ead;">fi
</span></code></pre>
<p>Now use <code>git checkout -</code> to easily switch back and forth between
branches and observe the processes. You should see at the very least a
brief peak in CPU usage after every checkout:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> git checkout -
</span><span style="color:#bf616a;">Switched</span><span> to branch '</span><span style="color:#a3be8c;">somebranch</span><span>'
</span><span style="color:#bf616a;">Post-checkout </span><span>'</span><span style="color:#a3be8c;">composer install</span><span>' hook active.
</span><span style="color:#bf616a;">$</span><span> git checkout -
</span><span style="color:#bf616a;">Switched</span><span> to branch '</span><span style="color:#a3be8c;">master</span><span>'
</span><span style="color:#bf616a;">Your</span><span> branch is up to date with '</span><span style="color:#a3be8c;">origin/master</span><span>'.
</span><span style="color:#bf616a;">Post-checkout </span><span>'</span><span style="color:#a3be8c;">composer install</span><span>' hook active.
</span></code></pre>
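<p>One caveat worth noting: hooks in <code>.git/hooks</code> are local to a
single clone and are not pushed with the repository. Since git 2.9, a
tracked directory can hold the hooks instead, so teammates get the
post-checkout hook after a single config command. A sketch (the
<code>.githooks</code> directory name is just a convention I picked):</p>

```shell
# Hooks in .git/hooks stay local; a tracked directory can be used instead.
mkdir -p .githooks                      # tracked hooks directory
git init -q .                           # illustration only: any repo works
git config core.hooksPath .githooks     # tell git to look there for hooks
git config core.hooksPath               # prints: .githooks
```

The hook file then lives at <code>.githooks/post-checkout</code> and travels
with the repository like any other file.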
<p>Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://stackoverflow.com/a/20892987/1972509">https://stackoverflow.com/a/20892987/1972509</a></li>
<li><a href="https://stackoverflow.com/a/17733087/1972509">https://stackoverflow.com/a/17733087/1972509</a></li>
<li><a href="https://git-scm.com/docs/githooks#_post_checkout">https://git-scm.com/docs/githooks#_post_checkout</a></li>
<li><a href="https://getcomposer.org/doc/01-basic-usage.md#installing-from-composer-lock">https://getcomposer.org/doc/01-basic-usage.md#installing-from-composer-lock</a></li>
</ul>
How to install Caddy using ansible2023-09-25T00:00:00+00:002023-09-25T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/how-to-install-caddy-using-ansible/<p>I have spent some time trying to convert the <code>curl | bash</code> magic from
the official Caddy docs into an ansible playbook. Mind you, I have probably
not reduced any of the risks associated with the curl-pipe-bash procedure,
such as lack of transparency, man-in-the-middle attacks, malicious
payloads, or missing verification of authenticity, just by rewriting it in
ansible.</p>
<p>We are still using a 3rd-party repository, albeit a trusted one. Let's
say we do this "the ansible way" for the sake of the exercise. At the time
of writing, the official installation for Caddy on Ubuntu 22.04, the
current LTS, looks like this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> apt install</span><span style="color:#bf616a;"> -y</span><span> debian-keyring debian-archive-keyring apt-transport-https
</span><span style="color:#bf616a;">curl -1sLf </span><span>'</span><span style="color:#a3be8c;">https://dl.cloudsmith.io/public/caddy/stable/gpg.key</span><span>' | </span><span style="color:#bf616a;">sudo</span><span> gpg</span><span style="color:#bf616a;"> --dearmor -o</span><span> /usr/share/keyrings/caddy-stable-archive-keyring.gpg
</span><span style="color:#bf616a;">curl -1sLf </span><span>'</span><span style="color:#a3be8c;">https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt</span><span>' | </span><span style="color:#bf616a;">sudo</span><span> tee /etc/apt/sources.list.d/caddy-stable.list
</span><span style="color:#bf616a;">sudo</span><span> apt update
</span><span style="color:#bf616a;">sudo</span><span> apt install caddy
</span></code></pre>
<p>As I am not that proficient in ansible at this point, having previously
focused on automating Arch instead of Ubuntu and having delved deep into
the rabbit hole named rootless docker, even such a seemingly trivial task
took me some time to figure out.</p>
<h2 id="add-repository-key">Add repository key</h2>
<p>The second line, which serves to add the repository key, translates to
the following ansible task:</p>
<pre data-lang="yml" style="background-color:#2b303b;color:#c0c5ce;" class="language-yml "><code class="language-yml" data-lang="yml"><span>- </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">Add Cloudsmith repository
</span><span> </span><span style="color:#bf616a;">apt_key</span><span>:
</span><span> </span><span style="color:#bf616a;">url</span><span>: "</span><span style="color:#a3be8c;">https://dl.cloudsmith.io/public/caddy/stable/gpg.key</span><span>"
</span><span> </span><span style="color:#bf616a;">state</span><span>: </span><span style="color:#a3be8c;">present
</span></code></pre>
<p>After running this task, we can check the result on the target machine
via the deprecated <code>apt-key list</code> utility:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>/etc/apt/trusted.gpg
</span><span>--------------------
</span><span>pub rsa4096 2016-04-01 [SC]
</span><span> 6576 0C51 EDEA 2017 CEA2 CA15 155B 6D79 CA56 EA34
</span><span>uid [ unknown] Caddy Web Server <contact@caddyserver.com>
</span><span>sub rsa4096 2020-12-29 [S] [expires: 2025-12-28]
</span></code></pre>
<p>There is a difference from the original script in that we simply do not
specify the file location to be
<code>/usr/share/keyrings/caddy-stable-archive-keyring.gpg</code>. That exact
filename is then referenced in the sources:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">curl</span><span> https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt
</span></code></pre>
<p>Which outputs the following:</p>
<pre data-lang="conf" style="background-color:#2b303b;color:#c0c5ce;" class="language-conf "><code class="language-conf" data-lang="conf"><span style="color:#65737e;"># Source: Caddy
</span><span style="color:#65737e;"># Site: https://github.com/caddyserver/caddy
</span><span style="color:#65737e;"># Repository: Caddy / stable
</span><span style="color:#65737e;"># Description: Fast, multi-platform web server with automatic HTTPS
</span><span>
</span><span style="color:#bf616a;">deb </span><span>[signed-by=/usr/share/keyrings/caddy-stable-archive-keyring.gpg] </span><span style="color:#d08770;">https://dl.cloudsmith.io/public/caddy/stable/deb/debian</span><span> any-version main
</span><span>
</span><span style="color:#bf616a;">deb-src </span><span>[signed-by=/usr/share/keyrings/caddy-stable-archive-keyring.gpg] </span><span style="color:#d08770;">https://dl.cloudsmith.io/public/caddy/stable/deb/debian</span><span> any-version main
</span></code></pre>
<p>Note the <code>signed-by</code> attribute, which references that key file from above.
I need to google a little bit more to understand the implications of
omitting the file location altogether, though. Let's take a look once again
at the third line of the script:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">curl -1sLf </span><span>'</span><span style="color:#a3be8c;">https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt</span><span>' | </span><span style="color:#bf616a;">sudo</span><span> tee /etc/apt/sources.list.d/caddy-stable.list
</span></code></pre>
<p>It specifically instructs that the file should be named
<code>caddy-stable.list</code>. Fortunately, we can specify the
<a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/apt_repository_module.html#parameter-filename">filename</a>
via the aptly named <code>filename</code> parameter. Note that the <code>.list</code> extension
gets appended automatically.</p>
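<p>For completeness, the key could also be placed at the exact path the
sources file expects, sidestepping the deprecated <code>apt_key</code> module.
A sketch, untested here: the upstream key is ASCII-armored, so on older apt
versions it may still need to pass through <code>gpg --dearmor</code> first,
as the original script does:</p>

```yml
# Sketch: fetch the key straight to the path referenced by signed-by.
# Recent apt accepts armored keys in signed-by fragments; older versions
# may require dearmoring the downloaded file first.
- name: Download Caddy repository key to the keyring path
  get_url:
    url: "https://dl.cloudsmith.io/public/caddy/stable/gpg.key"
    dest: /usr/share/keyrings/caddy-stable-archive-keyring.gpg
    mode: "0644"
```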
<h2 id="source-lists">Source lists</h2>
<p>With the freshly gathered knowledge, we can construct the following two
tasks:</p>
<pre data-lang="yml" style="background-color:#2b303b;color:#c0c5ce;" class="language-yml "><code class="language-yml" data-lang="yml"><span>- </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">Add Caddy repository to sources list
</span><span> </span><span style="color:#bf616a;">apt_repository</span><span>:
</span><span> </span><span style="color:#bf616a;">repo</span><span>:
</span><span> "</span><span style="color:#a3be8c;">deb https://dl.cloudsmith.io/public/caddy/stable/deb/debian
</span><span style="color:#a3be8c;"> any-version main</span><span>"
</span><span> </span><span style="color:#bf616a;">state</span><span>: </span><span style="color:#a3be8c;">present
</span><span> </span><span style="color:#bf616a;">filename</span><span>: </span><span style="color:#a3be8c;">caddy-stable
</span><span>
</span><span>- </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">Add Caddy src repository to sources list
</span><span> </span><span style="color:#bf616a;">apt_repository</span><span>:
</span><span> </span><span style="color:#bf616a;">repo</span><span>:
</span><span> "</span><span style="color:#a3be8c;">deb-src https://dl.cloudsmith.io/public/caddy/stable/deb/debian
</span><span style="color:#a3be8c;"> any-version main</span><span>"
</span><span> </span><span style="color:#bf616a;">state</span><span>: </span><span style="color:#a3be8c;">present
</span><span> </span><span style="color:#bf616a;">filename</span><span>: </span><span style="color:#a3be8c;">caddy-stable
</span></code></pre>
<p>I tried to combine these two tasks into one, hoping that <code>repo</code> would
accept an array, but it appears to accept
<a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/apt_repository_module.html#parameter-repo">just a string</a>.
Anyway, we can double-check back on the host machine:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">cat</span><span> /etc/apt/sources.list.d/caddy-stable.list
</span></code></pre>
<p>Which now outputs the following:</p>
<pre data-lang="conf" style="background-color:#2b303b;color:#c0c5ce;" class="language-conf "><code class="language-conf" data-lang="conf"><span style="color:#bf616a;">deb </span><span style="color:#d08770;">https://dl.cloudsmith.io/public/caddy/stable/deb/debian</span><span> any-version main
</span><span style="color:#bf616a;">deb-src </span><span style="color:#d08770;">https://dl.cloudsmith.io/public/caddy/stable/deb/debian</span><span> any-version main
</span></code></pre>
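<p>As a side note, the duplication between the two tasks could likely be reduced with a <code>loop</code> over the source types, since only the <code>deb</code>/<code>deb-src</code> prefix differs. A sketch I have not run myself:</p>

```yml
- name: Add Caddy repositories to sources list
  apt_repository:
    repo: "{{ item }} https://dl.cloudsmith.io/public/caddy/stable/deb/debian any-version main"
    state: present
    filename: caddy-stable
  loop:
    - deb
    - deb-src
```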
<p>The final playbook thus looks like this:</p>
<pre data-lang="yml" style="background-color:#2b303b;color:#c0c5ce;" class="language-yml "><code class="language-yml" data-lang="yml"><span>---
</span><span>- </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">Install Caddy web server
</span><span> </span><span style="color:#bf616a;">hosts</span><span>: </span><span style="color:#a3be8c;">my_hosts
</span><span> </span><span style="color:#bf616a;">become</span><span>: </span><span style="color:#d08770;">true
</span><span> </span><span style="color:#bf616a;">become_user</span><span>: </span><span style="color:#a3be8c;">root
</span><span>
</span><span> </span><span style="color:#bf616a;">tasks</span><span>:
</span><span> - </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">Install required packages
</span><span> </span><span style="color:#bf616a;">apt</span><span>:
</span><span> </span><span style="color:#bf616a;">update_cache</span><span>: </span><span style="color:#d08770;">yes
</span><span> </span><span style="color:#bf616a;">name</span><span>:
</span><span> - </span><span style="color:#a3be8c;">debian-keyring
</span><span> - </span><span style="color:#a3be8c;">debian-archive-keyring
</span><span> - </span><span style="color:#a3be8c;">apt-transport-https
</span><span> </span><span style="color:#bf616a;">state</span><span>: </span><span style="color:#a3be8c;">present
</span><span>
</span><span> - </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">Add Cloudsmith repository
</span><span> </span><span style="color:#bf616a;">apt_key</span><span>:
</span><span> </span><span style="color:#bf616a;">url</span><span>: "</span><span style="color:#a3be8c;">https://dl.cloudsmith.io/public/caddy/stable/gpg.key</span><span>"
</span><span> </span><span style="color:#bf616a;">state</span><span>: </span><span style="color:#a3be8c;">present
</span><span>
</span><span> - </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">Add Caddy repository to sources list
</span><span> </span><span style="color:#bf616a;">apt_repository</span><span>:
</span><span> </span><span style="color:#bf616a;">repo</span><span>:
</span><span> "</span><span style="color:#a3be8c;">deb https://dl.cloudsmith.io/public/caddy/stable/deb/debian
</span><span style="color:#a3be8c;"> any-version main</span><span>"
</span><span> </span><span style="color:#bf616a;">state</span><span>: </span><span style="color:#a3be8c;">present
</span><span> </span><span style="color:#bf616a;">filename</span><span>: </span><span style="color:#a3be8c;">caddy-stable
</span><span>
</span><span> - </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">Add Caddy src repository to sources list
</span><span> </span><span style="color:#bf616a;">apt_repository</span><span>:
</span><span> </span><span style="color:#bf616a;">repo</span><span>:
</span><span> "</span><span style="color:#a3be8c;">deb-src https://dl.cloudsmith.io/public/caddy/stable/deb/debian
</span><span style="color:#a3be8c;"> any-version main</span><span>"
</span><span> </span><span style="color:#bf616a;">state</span><span>: </span><span style="color:#a3be8c;">present
</span><span> </span><span style="color:#bf616a;">filename</span><span>: </span><span style="color:#a3be8c;">caddy-stable
</span><span>
</span><span> - </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">Install Caddy
</span><span> </span><span style="color:#bf616a;">apt</span><span>:
</span><span> </span><span style="color:#bf616a;">update_cache</span><span>: </span><span style="color:#d08770;">yes
</span><span> </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">caddy
</span><span> </span><span style="color:#bf616a;">state</span><span>: </span><span style="color:#a3be8c;">present
</span><span>
</span><span> - </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">Enable and start Caddy service
</span><span> </span><span style="color:#bf616a;">service</span><span>:
</span><span> </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">caddy
</span><span> </span><span style="color:#bf616a;">enabled</span><span>: </span><span style="color:#d08770;">yes
</span><span> </span><span style="color:#bf616a;">state</span><span>: </span><span style="color:#a3be8c;">started
</span></code></pre>
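<p>To actually run it, assuming the playbook is saved as <code>caddy.yml</code> and an inventory file defines <code>my_hosts</code> (both file names are placeholders here):</p>

```bash
ansible-playbook -i inventory.ini caddy.yml
```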
<p>Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://cloudsmith.com/blog/deploy-packages-from-cloudsmith-repository-with-ansible/">https://cloudsmith.com/blog/deploy-packages-from-cloudsmith-repository-with-ansible/</a></li>
<li><a href="https://caddyserver.com/docs/install#debian-ubuntu-raspbian">https://caddyserver.com/docs/install#debian-ubuntu-raspbian</a></li>
<li><a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/apt_key_module.html">https://docs.ansible.com/ansible/latest/collections/ansible/builtin/apt_key_module.html</a></li>
<li><a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/apt_repository_module.html">https://docs.ansible.com/ansible/latest/collections/ansible/builtin/apt_repository_module.html</a></li>
</ul>
Better autocompletion for Laravel model factories2023-09-24T00:00:00+00:002023-09-24T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/better-autocopletion-for-laravel-model-factories/<p>I use
<a href="https://laravel.com/docs/10.x/eloquent-factories">Laravel model factories</a>
quite extensively. Here's an
<a href="/blog/convenient-relationship-factories-in-laravel-8/">older related post</a>,
in case you are interested. I tend to create a lot of methods inside them
to simplify tests, utilizing
<a href="https://laravel.com/docs/10.x/eloquent-factories#factory-states">Factory States</a>.
The current docs show the following example for using Factory States:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span>
</span><span style="color:#b48ead;">namespace </span><span>Database\Factories;
</span><span>
</span><span style="color:#b48ead;">use </span><span>Illuminate\Database\Eloquent\Factories\</span><span style="color:#ebcb8b;">Factory</span><span>;
</span><span style="color:#b48ead;">use </span><span>App\Models\</span><span style="color:#ebcb8b;">User</span><span>;
</span><span>
</span><span style="color:#b48ead;">class </span><span style="color:#ebcb8b;">UserFactory </span><span style="color:#b48ead;">extends </span><span style="color:#a3be8c;">Factory </span><span style="color:#eff1f5;">{
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">protected </span><span>$</span><span style="color:#bf616a;">model </span><span>= </span><span style="color:#ebcb8b;">User</span><span style="color:#eff1f5;">::</span><span style="color:#d08770;">class</span><span style="color:#eff1f5;">;
</span><span style="color:#eff1f5;">
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">public function </span><span style="color:#8fa1b3;">suspended</span><span style="color:#eff1f5;">(): </span><span style="color:#ebcb8b;">static
</span><span style="color:#eff1f5;"> {
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">return </span><span>$</span><span style="color:#bf616a;">this</span><span style="color:#eff1f5;">-></span><span style="color:#bf616a;">state</span><span style="color:#eff1f5;">(</span><span style="color:#b48ead;">function </span><span style="color:#eff1f5;">(</span><span style="color:#b48ead;">array </span><span>$</span><span style="color:#bf616a;">attributes</span><span style="color:#eff1f5;">) {
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">return </span><span style="color:#eff1f5;">[
</span><span style="color:#eff1f5;"> </span><span>'</span><span style="color:#a3be8c;">account_status</span><span>' => '</span><span style="color:#a3be8c;">suspended</span><span>'</span><span style="color:#eff1f5;">,
</span><span style="color:#eff1f5;"> ];
</span><span style="color:#eff1f5;"> });
</span><span style="color:#eff1f5;"> }
</span><span style="color:#eff1f5;">}
</span></code></pre>
<p>Now enable factories using the <code>HasFactory</code> trait on the <code>User</code> model:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span>
</span><span style="color:#b48ead;">namespace </span><span>App\Models;
</span><span>
</span><span style="color:#b48ead;">use </span><span>Illuminate\Database\Eloquent\Factories\</span><span style="color:#ebcb8b;">HasFactory</span><span>;
</span><span>
</span><span style="color:#b48ead;">class </span><span style="color:#ebcb8b;">User </span><span style="color:#b48ead;">extends </span><span style="color:#a3be8c;">Model </span><span style="color:#eff1f5;">{
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">use </span><span style="color:#a3be8c;">HasFactory</span><span style="color:#eff1f5;">;
</span><span style="color:#eff1f5;">
</span><span style="color:#eff1f5;"> </span><span style="color:#65737e;">// ...
</span><span style="color:#eff1f5;">}
</span></code></pre>
<p>The combination of the above makes the following possible:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span>
</span><span>$</span><span style="color:#bf616a;">user </span><span>= \App\Models\</span><span style="color:#ebcb8b;">User</span><span>::</span><span style="color:#bf616a;">factory</span><span>()-></span><span style="color:#bf616a;">suspended</span><span>()-></span><span style="color:#bf616a;">create</span><span>();
</span></code></pre>
<p>Pretty useful and pretty self-explanatory. For completeness, the above
creates a suspended user account. Now take a look at the <code>factory()</code>
method:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span>
</span><span style="color:#b48ead;">namespace </span><span>Illuminate\Database\Eloquent\Factories;
</span><span>
</span><span style="color:#b48ead;">trait </span><span>HasFactory
</span><span>{
</span><span> </span><span style="color:#65737e;">/**
</span><span style="color:#65737e;"> * Get a new factory instance for the model.
</span><span style="color:#65737e;"> *
</span><span style="color:#65737e;"> * </span><span style="color:#b48ead;">@param</span><span style="color:#65737e;"> callable|array|int|null $count
</span><span style="color:#65737e;"> * </span><span style="color:#b48ead;">@param</span><span style="color:#65737e;"> callable|array $state
</span><span style="color:#65737e;"> * </span><span style="color:#b48ead;">@return</span><span style="color:#65737e;"> \Illuminate\Database\Eloquent\Factories\Factory<static>
</span><span style="color:#65737e;"> */
</span><span> </span><span style="color:#b48ead;">public static function </span><span style="color:#8fa1b3;">factory</span><span>($</span><span style="color:#bf616a;">count </span><span>= </span><span style="color:#d08770;">null</span><span>, $</span><span style="color:#bf616a;">state </span><span>= [])
</span><span> {
</span><span> $</span><span style="color:#bf616a;">factory </span><span>= </span><span style="color:#bf616a;">static</span><span>::</span><span style="color:#bf616a;">newFactory</span><span>() ?: </span><span style="color:#ebcb8b;">Factory</span><span>::</span><span style="color:#bf616a;">factoryForModel</span><span>(</span><span style="color:#96b5b4;">get_called_class</span><span>());
</span><span>
</span><span> </span><span style="color:#b48ead;">return </span><span>$</span><span style="color:#bf616a;">factory
</span><span> -></span><span style="color:#bf616a;">count</span><span>(</span><span style="color:#96b5b4;">is_numeric</span><span>($</span><span style="color:#bf616a;">count</span><span>) ? $</span><span style="color:#bf616a;">count</span><span> : </span><span style="color:#d08770;">null</span><span>)
</span><span> -></span><span style="color:#bf616a;">state</span><span>(</span><span style="color:#96b5b4;">is_callable</span><span>($</span><span style="color:#bf616a;">count</span><span>) || </span><span style="color:#96b5b4;">is_array</span><span>($</span><span style="color:#bf616a;">count</span><span>) ? $</span><span style="color:#bf616a;">count</span><span> : $</span><span style="color:#bf616a;">state</span><span>);
</span><span> }
</span><span>
</span><span> </span><span style="color:#65737e;">// ...
</span><span>}
</span></code></pre>
<p>The problematic bit is the <code>@return</code> annotation, which specifically says
that the parent <code>Factory</code> class is returned instead of our <code>UserFactory</code>,
which contains the <code>suspended()</code> method. Getting IDE hinting for any
such custom methods (states) simply does not work this way, because we need
to somehow tell the language server that calling <code>User::factory()</code> really
returns <code>UserFactory</code> instead of just <code>Factory</code>. One way to do just
that is to do so explicitly:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span style="color:#65737e;">/** </span><span style="color:#b48ead;">@var</span><span style="color:#65737e;"> \Database\Factories\UserFactory $userFactory */
</span><span>$</span><span style="color:#bf616a;">userFactory </span><span>= </span><span style="color:#ebcb8b;">User</span><span>::</span><span style="color:#bf616a;">factory</span><span>();
</span><span>
</span><span>$</span><span style="color:#bf616a;">userFactory</span><span>-></span><span style="color:#bf616a;">suspended</span><span>()-></span><span style="color:#bf616a;">create</span><span>(); </span><span style="color:#65737e;">// now the autocompletion works
</span></code></pre>
<p>It works, but it is quite ugly, eh. One downside is that this cannot be chained
easily as one might be used to, because the variable has to be on its own
line:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span style="color:#65737e;">/** </span><span style="color:#b48ead;">@var</span><span style="color:#65737e;"> \Database\Factories\UserFactory $userFactory */
</span><span>$</span><span style="color:#bf616a;">userFactory </span><span>= </span><span style="color:#ebcb8b;">User</span><span>::</span><span style="color:#bf616a;">factory</span><span>()-></span><span style="color:#bf616a;">suspended</span><span>()-></span><span style="color:#bf616a;">create</span><span>(); </span><span style="color:#65737e;">// autocompletion wont work
</span></code></pre>
<p>Another, much worse downside is that we have to do this everywhere we want
the autocompletion, and it is simply not worth it. If only there was an
easy way to fix this... Wait, there is one:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span>
</span><span style="color:#b48ead;">namespace </span><span>App\Models;
</span><span>
</span><span style="color:#b48ead;">use </span><span>Database\Factories\</span><span style="color:#ebcb8b;">UserFactory</span><span>;
</span><span style="color:#b48ead;">use </span><span>Illuminate\Database\Eloquent\Factories\</span><span style="color:#ebcb8b;">HasFactory</span><span>;
</span><span>
</span><span style="color:#b48ead;">class </span><span style="color:#ebcb8b;">User </span><span style="color:#b48ead;">extends </span><span style="color:#a3be8c;">Model </span><span style="color:#eff1f5;">{
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">use </span><span style="color:#a3be8c;">HasFactory </span><span style="color:#eff1f5;">{
</span><span style="color:#eff1f5;"> factory </span><span style="color:#b48ead;">as </span><span style="color:#eff1f5;">traitFactory;
</span><span style="color:#eff1f5;"> }
</span><span style="color:#eff1f5;">
</span><span style="color:#eff1f5;"> </span><span style="color:#65737e;">/**
</span><span style="color:#65737e;"> * </span><span style="color:#b48ead;">@param</span><span style="color:#65737e;"> callable|array|int|null $count
</span><span style="color:#65737e;"> * </span><span style="color:#b48ead;">@param</span><span style="color:#65737e;"> callable|array $state
</span><span style="color:#65737e;"> * </span><span style="color:#b48ead;">@return</span><span style="color:#65737e;"> UserFactory
</span><span style="color:#65737e;"> */
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">public static function </span><span style="color:#8fa1b3;">factory</span><span style="color:#eff1f5;">(</span><span>$</span><span style="color:#bf616a;">count </span><span>= </span><span style="color:#d08770;">null</span><span style="color:#eff1f5;">, </span><span>$</span><span style="color:#bf616a;">state </span><span>= </span><span style="color:#eff1f5;">[]) {
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">return </span><span style="color:#bf616a;">static</span><span style="color:#eff1f5;">::</span><span style="color:#bf616a;">traitFactory</span><span style="color:#eff1f5;">(</span><span>$</span><span style="color:#bf616a;">count</span><span style="color:#eff1f5;">, </span><span>$</span><span style="color:#bf616a;">state</span><span style="color:#eff1f5;">);
</span><span style="color:#eff1f5;"> }
</span><span style="color:#eff1f5;">
</span><span style="color:#eff1f5;"> </span><span style="color:#65737e;">// ...
</span><span style="color:#eff1f5;">}
</span></code></pre>
<p>Nice and easy! But how does it work? There are two factors in play now. First,
we override the <code>factory()</code> method the <code>User</code> model receives from the
<code>HasFactory</code> trait and typehint the new return type as
<code>@return UserFactory</code>. But since we want to call the original trait
<code>factory()</code> method inside it, we need to use PHP trait
<a href="https://www.php.net/manual/en/language.oop5.traits.php#language.oop5.traits.conflict">conflict resolution</a>
and the <code>as</code> operator to rename the method locally to <code>traitFactory()</code> like
this:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span>
</span><span style="color:#65737e;">// ...
</span><span>
</span><span style="color:#b48ead;">use </span><span style="color:#ebcb8b;">HasFactory </span><span>{
</span><span> </span><span style="color:#ebcb8b;">factory </span><span style="color:#b48ead;">as </span><span style="color:#ebcb8b;">traitFactory</span><span>;
</span><span>}
</span></code></pre>
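<p>For completeness, there is a third option I have seen but not verified against every language server: a <code>@method</code> annotation on the model itself (roughly what <code>laravel-ide-helper</code> generates), which avoids overriding <code>factory()</code> entirely:</p>

```php
<?php

namespace App\Models;

use Database\Factories\UserFactory;
use Illuminate\Database\Eloquent\Factories\HasFactory;

/**
 * @method static UserFactory factory($count = null, $state = [])
 */
class User extends Model {
    use HasFactory;

    // ...
}
```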
<p>Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://andy-carter.com/blog/overriding-extending-a-php-trait-method">https://andy-carter.com/blog/overriding-extending-a-php-trait-method</a></li>
<li><a href="https://www.php.net/manual/en/language.oop5.traits.php#language.oop5.traits.conflict">https://www.php.net/manual/en/language.oop5.traits.php#language.oop5.traits.conflict</a></li>
<li><a href="https://laravel.com/docs/10.x/eloquent-factories">https://laravel.com/docs/10.x/eloquent-factories</a></li>
</ul>
3D printed window holder design2023-09-17T00:00:00+00:002023-09-19T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/3d-printed-window-holder-design/<p>In my office, the window holder broke. It was made out of a material that
visibly degrades under UV light. I did not want to go buy a new one, since
this is a rented office. Instead, I did what I usually do and fixed it in
DIY fashion - by 3D printing a replacement. Having a window holder not only enables
controlled ventilation but also enhances safety and comfort within the
living or working space.</p>
<p><img src="https://peterbabic.dev/blog/3d-printed-window-holder-design/./window_holder.jpg" alt="A new window holder on a left and an old, broken one on the right" /></p>
<p>TinkerCad link: <a href="https://www.tinkercad.com/things/kqsmW4wbPM2">https://www.tinkercad.com/things/kqsmW4wbPM2</a></p>
<p>STL file: <a href="https://peterbabic.dev/blog/3d-printed-window-holder-design/./window_holder.stl">download</a></p>
<p>Enjoy!</p>
PHP xDebug in Docker2023-08-04T00:00:00+00:002023-08-04T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/php-xdebug-in-docker/<p>This is a trashpost, mostly used to store all the links I had in my tabs.
Also, I might need it again or it could help someone. There might be
incomplete, missing or conflicting information below, so take this with a
grain of salt.</p>
<h2 id="dockerfile">Dockerfile</h2>
<p>Whatever your <code>Dockerfile</code> contents are, add these lines somewhere sensible,
usually before <code>EXPOSE</code> or <code>COPY</code>:</p>
<pre data-lang="Dockerfile" style="background-color:#2b303b;color:#c0c5ce;" class="language-Dockerfile "><code class="language-Dockerfile" data-lang="Dockerfile"><span>RUN apk add --no-cache $PHPIZE_DEPS linux-headers && \
</span><span> pecl install xdebug && docker-php-ext-enable xdebug
</span></code></pre>
<h2 id="docker-compose-yml">docker-compose.yml</h2>
<p>Base for the <code>docker-compose.yml</code> is below. Note the <code>.ini</code> files in
volumes:</p>
<pre data-lang="yml" style="background-color:#2b303b;color:#c0c5ce;" class="language-yml "><code class="language-yml" data-lang="yml"><span style="color:#bf616a;">version</span><span>: "</span><span style="color:#a3be8c;">3.9</span><span>"
</span><span>
</span><span style="color:#bf616a;">services</span><span>:
</span><span> </span><span style="color:#bf616a;">app</span><span>:
</span><span> </span><span style="color:#bf616a;">build</span><span>:
</span><span> </span><span style="color:#bf616a;">context</span><span>: </span><span style="color:#a3be8c;">./
</span><span> </span><span style="color:#bf616a;">dockerfile</span><span>: </span><span style="color:#a3be8c;">Dockerfile
</span><span> </span><span style="color:#bf616a;">image</span><span>: </span><span style="color:#a3be8c;">php-fpm-81
</span><span> </span><span style="color:#bf616a;">container_name</span><span>: </span><span style="color:#a3be8c;">my-app
</span><span> </span><span style="color:#bf616a;">restart</span><span>: </span><span style="color:#a3be8c;">unless-stopped
</span><span> </span><span style="color:#bf616a;">tty</span><span>: </span><span style="color:#d08770;">true
</span><span> </span><span style="color:#bf616a;">working_dir</span><span>: </span><span style="color:#a3be8c;">/var/www
</span><span> </span><span style="color:#bf616a;">volumes</span><span>:
</span><span> - </span><span style="color:#a3be8c;">./:/var/www
</span><span> - </span><span style="color:#a3be8c;">./docker/php/conf.d/xdebug.ini:/usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini
</span><span> - </span><span style="color:#a3be8c;">./docker/php/conf.d/error_reporting.ini:/usr/local/etc/php/conf.d/error_reporting.ini
</span></code></pre>
<h2 id="error-reporting-ini">error_reporting.ini</h2>
<p>The contents of the <code>error_reporting.ini</code> are simple, and in fact could be
omitted, but we wanna debug:</p>
<pre data-lang="ini" style="background-color:#2b303b;color:#c0c5ce;" class="language-ini "><code class="language-ini" data-lang="ini"><span style="color:#bf616a;">error_reporting</span><span>=E_ALL
</span></code></pre>
<h2 id="xdebug-ini">xdebug.ini</h2>
<p>The contents of <code>docker-php-ext-xdebug.ini</code> are the most important:</p>
<pre data-lang="ini" style="background-color:#2b303b;color:#c0c5ce;" class="language-ini "><code class="language-ini" data-lang="ini"><span style="color:#bf616a;">zend_extension</span><span>=xdebug
</span><span>
</span><span style="color:#b48ead;">[xdebug]
</span><span style="color:#bf616a;">xdebug</span><span>.client_host=host.docker.internal
</span><span style="color:#bf616a;">xdebug</span><span>.client_port=</span><span style="color:#d08770;">9003
</span><span style="color:#bf616a;">xdebug</span><span>.discover_client_host=</span><span style="color:#d08770;">true
</span><span style="color:#bf616a;">xdebug</span><span>.idekey=VSCODE
</span><span style="color:#bf616a;">xdebug</span><span>.mode=develop,debug
</span><span style="color:#bf616a;">xdebug</span><span>.start_with_request=</span><span style="color:#d08770;">yes
</span></code></pre>
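<p>One troubleshooting tip: when the debugger silently refuses to connect, Xdebug 3 can log its connection attempts. Adding these two lines to the same <code>.ini</code> file should help (the log path is just an example):</p>

```ini
xdebug.log=/tmp/xdebug.log
xdebug.log_level=7
```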
<h2 id="launch-json">launch.json</h2>
<p>Even though I use <code>neovim</code> for everything, I did not have time to
set it up for debugging yet. Using off-the-shelf vscode for an occasional
debug is currently enough:</p>
<pre data-lang="json" style="background-color:#2b303b;color:#c0c5ce;" class="language-json "><code class="language-json" data-lang="json"><span>{
</span><span> "</span><span style="color:#a3be8c;">version</span><span>": "</span><span style="color:#a3be8c;">0.2.0</span><span>",
</span><span> "</span><span style="color:#a3be8c;">configurations</span><span>": [
</span><span> {
</span><span> "</span><span style="color:#a3be8c;">name</span><span>": "</span><span style="color:#a3be8c;">Listen for Xdebug</span><span>",
</span><span> "</span><span style="color:#a3be8c;">type</span><span>": "</span><span style="color:#a3be8c;">php</span><span>",
</span><span> "</span><span style="color:#a3be8c;">request</span><span>": "</span><span style="color:#a3be8c;">launch</span><span>",
</span><span> "</span><span style="color:#a3be8c;">port</span><span>": </span><span style="color:#d08770;">9003</span><span>,
</span><span> "</span><span style="color:#a3be8c;">pathMappings</span><span>": {
</span><span> "</span><span style="color:#a3be8c;">/var/www/</span><span>": "</span><span style="color:#a3be8c;">${workspaceFolder}</span><span>"
</span><span> }
</span><span> }
</span><span> ]
</span><span>}
</span></code></pre>
<p>This should be enough for debugging via <code>docker-compose</code> and vscode. Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://code.visualstudio.com/api/extension-guides/debugger-extension">https://code.visualstudio.com/api/extension-guides/debugger-extension</a></li>
<li><a href="https://dev.to/jackmiras/xdebug-in-vscode-with-docker-379l">https://dev.to/jackmiras/xdebug-in-vscode-with-docker-379l</a></li>
<li><a href="https://dev.to/oranges13/phpstorm-xdebug-alpine-on-docker-13ff">https://dev.to/oranges13/phpstorm-xdebug-alpine-on-docker-13ff</a></li>
<li><a href="https://matthewsetter.com/setup-step-debugging-php-xdebug3-docker/">https://matthewsetter.com/setup-step-debugging-php-xdebug3-docker/</a></li>
<li><a href="https://php.tutorials24x7.com/blog/how-to-debug-php-using-xdebug-visual-studio-code-and-docker-on-ubuntu">https://php.tutorials24x7.com/blog/how-to-debug-php-using-xdebug-visual-studio-code-and-docker-on-ubuntu</a></li>
<li><a href="https://stackoverflow.com/questions/46825502/how-do-i-install-xdebug-on-dockers-official-php-fpm-alpine-image">https://stackoverflow.com/questions/46825502/how-do-i-install-xdebug-on-dockers-official-php-fpm-alpine-image</a></li>
<li><a href="https://torbjornzetterlund.com/xdebug-a-php-docker-container-in-vs-code/#gsc.tab=0">https://torbjornzetterlund.com/xdebug-a-php-docker-container-in-vs-code/#gsc.tab=0</a></li>
<li><a href="https://www.appsloveworld.com/docker/100/163/how-to-add-xdebug-to-php8-1-fpm-alpine-docker-container">https://www.appsloveworld.com/docker/100/163/how-to-add-xdebug-to-php8-1-fpm-alpine-docker-container</a></li>
</ul>
Updating UEFI BIOS via fwupd on ThinkPad T14 Gen32023-08-01T00:00:00+00:002023-08-01T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/updating-uefi-bios-via-fwupd-on-thinkpad-t14-gen3/<p>I had a lot of trouble understanding how to do firmware updates on my new
ThinkPad T14 Gen3 AMD, which now serves as a replacement for my trusty T470.
Using <code>fwupdmgr</code> appears to be
<a href="https://wiki.archlinux.org/title/Lenovo_ThinkPad_T14_(AMD)_Gen_3#fwupd">confirmed</a> to work,
even for the UEFI BIOS. But getting it to work was another thing. I encountered
three pain points. The <code>fwupdmgr --version</code> output on my system:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>compile org.freedesktop.fwupd 1.9.3
</span><span>compile com.hughsie.libxmlb 0.3.11
</span><span>compile com.hughsie.libjcat 0.1.14
</span><span>runtime org.freedesktop.fwupd-efi 1.4
</span><span>compile org.freedesktop.gusb 0.4.6
</span><span>runtime com.hughsie.libjcat 0.1.14
</span><span>runtime com.dell.libsmbios 2.4
</span><span>runtime org.freedesktop.gusb 0.4.6
</span><span>runtime org.freedesktop.fwupd 1.9.3
</span><span>runtime org.kernel 6.4.4-arch1-1
</span></code></pre>
<h2 id="prerequisite">Prerequisite</h2>
<p>If for any obscure reason you run your ThinkPad T14 Gen3 in Legacy BIOS
mode (if that is even possible), you will encounter the
<code>WARNING: Firmware can not be updated in legacy BIOS mode</code> error, and
updating the UEFI BIOS via <code>fwupdmgr</code> is
<a href="https://github.com/fwupd/fwupd/wiki/PluginFlag:legacy-bios">not supported</a>.</p>
<p>Also, some users
<a href="https://github.com/fwupd/fwupd/issues/5748#issuecomment-1593106943">report</a>
the GPT layout is
<a href="https://github.com/fwupd/fwupd/issues/5612#issuecomment-1472197509">required</a>,
but I did not find hard evidence and did not test it myself. In any case, if you
<a href="https://github.com/fwupd/fwupd/issues/6035#issuecomment-1660354900">run an MBR layout</a>
and updating works, let me know.</p>
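<p>If you want to see which partition table your disk uses, <code>lsblk</code> can print it directly; the device name below is an assumption, so substitute your own:</p>

```shell
# Print the partition table type of the whole disk: "gpt" or "dos" (MBR).
# /dev/nvme0n1 is a placeholder device name.
lsblk --nodeps --noheadings --output PTTYPE /dev/nvme0n1
```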
<h2 id="bios-settings">BIOS settings</h2>
<p>To make sure <code>fwupdmgr</code> sees the UEFI BIOS option in the first place, a
few BIOS settings that interfere with the process must be
<a href="https://github.com/fwupd/firmware-lenovo/issues/252#issuecomment-1205433426">set up properly</a>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> fwupdmgr get-bios-setting BIOSUpdateByEndUsers WindowsUEFIFirmwareUpdate BootOrderLock
</span></code></pre>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>WindowsUEFIFirmwareUpdate:
</span><span> Setting type: Enumeration
</span><span> Current Value: Enable
</span><span> Description: BIOS updates delivered via LVFS or Windows Update
</span><span> Read Only: False
</span><span> Possible Values:
</span><span> 0: Disable
</span><span> 1: Enable
</span><span>
</span><span>BootOrderLock:
</span><span> Setting type: Enumeration
</span><span> Current Value: Disable
</span><span> Description: BootOrderLock
</span><span> Read Only: False
</span><span> Possible Values:
</span><span> 0: Disable
</span><span> 1: Enable
</span><span>
</span><span>BIOSUpdateByEndUsers:
</span><span> Setting type: Enumeration
</span><span> Current Value: Enable
</span><span> Description: BIOSUpdateByEndUsers
</span><span> Read Only: False
</span><span> Possible Values:
</span><span> 0: Disable
</span><span> 1: Enable
</span></code></pre>
<p>Make sure these three are set to the correct values and
<a href="https://github.com/fwupd/fwupd/blob/main/docs/bios-settings.md#setting-bios-settings">update them manually</a>
or via <code>fwupdmgr set-bios-setting</code>. Otherwise, the problem manifests
differently based on the combination of the three settings. One of the
outcomes could be an error such as <code>No supported devices found</code> or
<code>No updatable devices</code>, on which some light is shed in
<a href="https://github.com/fwupd/firmware-lenovo/issues/20#issuecomment-538004411">this comment</a>.</p>
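<p>The mentioned <code>fwupdmgr set-bios-setting</code> takes the setting name and the desired value; as a sketch, aligning all three settings with the output above could look like this:</p>

```shell
# Align the three BIOS settings with the values fwupdmgr expects.
sudo fwupdmgr set-bios-setting WindowsUEFIFirmwareUpdate Enable
sudo fwupdmgr set-bios-setting BIOSUpdateByEndUsers Enable
sudo fwupdmgr set-bios-setting BootOrderLock Disable
```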
<h2 id="capsules-not-found">Capsules not found</h2>
<p>Okay, next step. The UEFI BIOS is finally shown under the
<code>System Firmware</code> branch:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> fwupdmgr get-devices</span><span style="color:#bf616a;"> --show-all-devices
</span></code></pre>
<p>Trying to update it appears to be working in the console:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Perform operation? [Y|n]:
</span><span>Updating System Firmware…[ - ]
</span><span>Waiting… [***************************************]
</span><span>Successfully installed firmware
</span><span>Do not turn off your computer or remove the AC adapter while the update is in progress.
</span><span>Do not turn off your computer or remove the AC adapter while the update is in progress.
</span><span>An update requires a reboot to complete. Restart now? [y|N]: y
</span></code></pre>
<p>However, I got this error after rebooting (when the actual firmware
flashing <em>should</em> be happening):</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>fwupd-efi version 1.4
</span><span>WARNING: QueryCapsuleCapabilities failed, assuming EfiResetWarm: Unsupported
</span><span>WARNING: Could not apply capsule update: Not Found
</span><span>WARNING: Could not apply capsules: Not Found
</span><span>Reset System
</span></code></pre>
<p>I tried updating a few times, always getting the same result. No update,
reboot. Fortunately, this problem is especially
<a href="https://github.com/fwupd/fwupd/issues/5748">well documented</a>.</p>
<h2 id="esp-partition-flag">ESP partition flag</h2>
<p>After paying closer attention to the output of the <code>fwupdmgr</code> commands,
I noticed little obscure messages like
<code>WARNING: UEFI ESP partition not detected or configured</code> or
<code>WARNING: UEFI ESP partition may not be set up correctly</code> followed by
<code>See https://github.com/fwupd/fwupd/wiki/PluginFlag:esp-not-valid for more information.</code>
The link, however, does not show anything apart from a header. Sigh.</p>
<p>Solutions related to this problem can be tracked down in the
<a href="https://github.com/fwupd/fwupd/wiki/LVFS-Triaged-Issue:-Invalid-ESP-Partition">wiki</a>.
I simply used GParted to set the <code>esp</code> flag on the boot partition, but I
will also
<a href="https://github.com/fwupd/fwupd/issues/5748#issuecomment-1593106943">reiterate</a>
the command from the link above, for the record:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">parted</span><span> /dev/nvme0nXXX set 1 esp on
</span></code></pre>
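<p>To verify that the flag stuck, <code>parted</code>'s <code>print</code> command lists each partition together with its flags (same placeholder device name as above):</p>

```shell
# List partitions with their flags; the boot partition should now show "esp".
sudo parted /dev/nvme0nXXX print
```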
<p>That did the trick. Now the UEFI BIOS update via <code>fwupdmgr</code> really
works. No need to fiddle with Windows to keep any drivers up-to-date. Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://github.com/fwupd/firmware-lenovo/issues/20#issuecomment-538004411">https://github.com/fwupd/firmware-lenovo/issues/20#issuecomment-538004411</a></li>
<li><a href="https://github.com/fwupd/fwupd/discussions/2637">https://github.com/fwupd/fwupd/discussions/2637</a></li>
<li><a href="https://github.com/fwupd/fwupd/issues/1220">https://github.com/fwupd/fwupd/issues/1220</a></li>
<li><a href="https://github.com/fwupd/fwupd/issues/2198">https://github.com/fwupd/fwupd/issues/2198</a></li>
<li><a href="https://github.com/fwupd/fwupd/issues/3238">https://github.com/fwupd/fwupd/issues/3238</a></li>
<li><a href="https://github.com/fwupd/fwupd/issues/4631">https://github.com/fwupd/fwupd/issues/4631</a></li>
<li><a href="https://github.com/fwupd/fwupd/issues/6012">https://github.com/fwupd/fwupd/issues/6012</a></li>
<li><a href="https://github.com/fwupd/fwupd/wiki/PluginFlag:capsules-unsupported">https://github.com/fwupd/fwupd/wiki/PluginFlag:capsules-unsupported</a></li>
<li><a href="https://github.com/fwupd/fwupd/wiki/PluginFlag:esp-not-valid">https://github.com/fwupd/fwupd/wiki/PluginFlag:esp-not-valid</a></li>
</ul>
Postman urlencode multiple env variables2023-03-27T00:00:00+00:002023-03-27T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/postman-urlencode-multiple-env-variables/<p>As an API development tool, Postman is widely used by developers for
testing, documenting, and sharing APIs. One of the most powerful features
of Postman is the ability to execute code before a request is sent, which
is known as a pre-request script. This functionality allows developers to
manipulate request parameters, add dynamic variables, and perform various
computations or validations before the actual API request is made.</p>
<h2 id="environmental-variables">Environmental variables</h2>
<p>In addition to pre-request scripts, Postman also offers a powerful feature
called environmental variables. Environmental variables allow developers to
store and reuse values across multiple requests and collections. This
functionality can be especially useful when dealing with complex APIs that
require authentication tokens, API keys, or other dynamic variables that
change over time.</p>
<p>To use environmental variables in Postman, developers can define a set of
variables in an environment file. An environment file is simply a
collection of key-value pairs that can be accessed by all requests within a
collection. For example, an environment file for a testing environment
might include variables like <code>base_url</code> and <code>api_key</code>. These variables can
then be accessed by individual requests by using double curly braces and
the variable name like <code>{{base_url}}/users</code>.</p>
<h2 id="url-parameters-using-addqueryparams">URL parameters using addQueryParams</h2>
<p>In Postman, the <code>addQueryParams</code> function is a JavaScript function that
allows developers to easily add query parameters to a URL. Query parameters
are often used in APIs to filter or paginate results, and can be added to
the URL as key-value pairs. Example:</p>
<pre data-lang="javascript" style="background-color:#2b303b;color:#c0c5ce;" class="language-javascript "><code class="language-javascript" data-lang="javascript"><span style="color:#bf616a;">pm</span><span>.</span><span style="color:#bf616a;">request</span><span>.</span><span style="color:#bf616a;">url</span><span>.</span><span style="color:#8fa1b3;">addQueryParams</span><span>({ page: </span><span style="color:#d08770;">1 </span><span>})
</span></code></pre>
<p>To use the <code>addQueryParams()</code> function in Postman, developers can create
a pre-request script and use the <code>pm.request.url</code> object to manipulate the
URL before sending the request.</p>
<h2 id="encoding-uri-components">Encoding URI components</h2>
<p>In JavaScript, the <code>encodeURIComponent()</code> function is used to encode
special characters in a URL. When a URL contains special characters like
spaces, ampersands, or slashes, they can cause issues when passed as a
parameter in a request. The <code>encodeURIComponent()</code> function ensures that
these special characters are properly encoded, so they can be safely used
in a URL.</p>
<p>To use the <code>encodeURIComponent()</code> function in JavaScript, simply pass the
string that needs to be encoded as the function's argument. For example,
the following code encodes the string <code>hello world</code>:</p>
<pre data-lang="javascript" style="background-color:#2b303b;color:#c0c5ce;" class="language-javascript "><code class="language-javascript" data-lang="javascript"><span style="color:#b48ead;">let </span><span style="color:#bf616a;">encodedString </span><span>= </span><span style="color:#96b5b4;">encodeURIComponent</span><span>("</span><span style="color:#a3be8c;">hello world</span><span>")
</span></code></pre>
<p>The resulting encoded string would be "hello%20world". Notice how the space
character is replaced with "%20", which is the encoded representation of a
space in a URL.</p>
<p>The <code>encodeURIComponent()</code> function can also be used in conjunction with
the <code>addQueryParams()</code> function in Postman's pre-request scripts, to
properly encode query parameter values. For example, if a query parameter
value contains a special character, like a space or an ampersand, it needs
to be properly encoded before it can be added to the URL. The following
script demonstrates how to properly encode a query parameter value before
adding it to the URL:</p>
<pre data-lang="javascript" style="background-color:#2b303b;color:#c0c5ce;" class="language-javascript "><code class="language-javascript" data-lang="javascript"><span style="color:#b48ead;">let </span><span style="color:#bf616a;">queryParamValue </span><span>= "</span><span style="color:#a3be8c;">hello world & goodbye</span><span>"
</span><span style="color:#b48ead;">let </span><span style="color:#bf616a;">encodedQueryParamValue </span><span>= </span><span style="color:#96b5b4;">encodeURIComponent</span><span>(</span><span style="color:#bf616a;">queryParamValue</span><span>)
</span><span style="color:#bf616a;">pm</span><span>.</span><span style="color:#bf616a;">request</span><span>.</span><span style="color:#bf616a;">url</span><span>.</span><span style="color:#8fa1b3;">addQueryParams</span><span>({ param: </span><span style="color:#bf616a;">encodedQueryParamValue </span><span>})
</span></code></pre>
<p>In this example, the <code>encodeURIComponent()</code> function is used to encode the
value of the <code>queryParamValue</code> variable before it is added as a query
parameter to the URL.</p>
<p>Overall, the <code>encodeURIComponent()</code> function is a simple yet powerful
function in JavaScript that can be used to properly encode special
characters in a URL. When used in conjunction with Postman's pre-request
scripts, it can help ensure that APIs are properly tested and function as
expected, even when dealing with special characters in query parameters or
URL segments.</p>
<h2 id="dealing-with-json-objects">Dealing with JSON objects</h2>
<p>In JavaScript, the <code>JSON.stringify()</code> function is used to convert a
JavaScript object into a JSON string. JSON (short for JavaScript Object
Notation) is a lightweight data-interchange format that is commonly used in
web applications for transmitting data between the client and the server.</p>
<p>To use the <code>JSON.stringify()</code> function, simply pass the object that needs
to be converted as the function's argument. For example, the following code
converts a JavaScript object into a JSON string:</p>
<pre data-lang="javascript" style="background-color:#2b303b;color:#c0c5ce;" class="language-javascript "><code class="language-javascript" data-lang="javascript"><span style="color:#b48ead;">let </span><span style="color:#bf616a;">obj </span><span>= { name: "</span><span style="color:#a3be8c;">John</span><span>", age: </span><span style="color:#d08770;">30 </span><span>}
</span><span style="color:#b48ead;">let </span><span style="color:#bf616a;">jsonString </span><span>= JSON.</span><span style="color:#96b5b4;">stringify</span><span>(</span><span style="color:#bf616a;">obj</span><span>)
</span><span style="color:#ebcb8b;">console</span><span>.</span><span style="color:#96b5b4;">log</span><span>(</span><span style="color:#bf616a;">jsonString</span><span>)
</span></code></pre>
<p>The resulting JSON string would be <code>{"name":"John","age":30}</code>. Notice
how the object properties are converted into a string with the key-value
pairs separated by colons and the pairs separated by commas. The resulting
JSON string can be easily transmitted over a network and then parsed back
into a JavaScript object on the receiving end.</p>
<p>The <code>JSON.stringify()</code> function can also be useful in Postman's pre-request
scripts, where it can be used to convert an object into a string before it
is sent in a request body. For example, the following script demonstrates
how to use the <code>JSON.stringify()</code> function to convert an object into a
string before sending it in a request:</p>
<pre data-lang="javascript" style="background-color:#2b303b;color:#c0c5ce;" class="language-javascript "><code class="language-javascript" data-lang="javascript"><span style="color:#b48ead;">const </span><span style="color:#bf616a;">key </span><span>= </span><span style="color:#bf616a;">pm</span><span>.</span><span style="color:#bf616a;">environment</span><span>.</span><span style="color:#96b5b4;">get</span><span>("</span><span style="color:#a3be8c;">API_KEY</span><span>")
</span><span style="color:#b48ead;">const </span><span style="color:#bf616a;">email </span><span>= </span><span style="color:#bf616a;">pm</span><span>.</span><span style="color:#bf616a;">environment</span><span>.</span><span style="color:#96b5b4;">get</span><span>("</span><span style="color:#a3be8c;">EMAIL</span><span>")
</span><span style="color:#b48ead;">const </span><span style="color:#bf616a;">init </span><span>= JSON.</span><span style="color:#96b5b4;">stringify</span><span>({ </span><span style="color:#bf616a;">key</span><span>, </span><span style="color:#bf616a;">email </span><span>})
</span><span style="color:#bf616a;">pm</span><span>.</span><span style="color:#bf616a;">request</span><span>.</span><span style="color:#bf616a;">url</span><span>.</span><span style="color:#8fa1b3;">addQueryParams</span><span>("</span><span style="color:#a3be8c;">data=</span><span>" + </span><span style="color:#96b5b4;">encodeURIComponent</span><span>(</span><span style="color:#bf616a;">init</span><span>))
</span></code></pre>
<p>That's it. Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://learning.postman.com/docs/sending-requests/requests/#url-encoded">https://learning.postman.com/docs/sending-requests/requests/#url-encoded</a></li>
<li><a href="https://community.postman.com/t/modify-query-param-in-pre-request-script/8880/5">https://community.postman.com/t/modify-query-param-in-pre-request-script/8880/5</a></li>
<li><a href="https://stackoverflow.com/a/43611293/1972509">https://stackoverflow.com/a/43611293/1972509</a></li>
<li><a href="https://stackoverflow.com/a/33614377/1972509">https://stackoverflow.com/a/33614377/1972509</a></li>
</ul>
Exclude middleware for Laravel routes2023-03-26T00:00:00+00:002023-03-26T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/exclude-middleware-for-laravel-routes/<p>In the world of web development, Laravel is a popular PHP framework that
provides a wide range of features and tools to build web applications
quickly and efficiently. One of its most useful features is middleware,
which allows developers to filter HTTP requests entering their application.</p>
<p>However, while Laravel middleware is a powerful tool, there are times when
it is important to exclude it from certain routes or controllers. In this
blog post, we'll explore why excluding Laravel middleware can be essential
for maintaining the security and performance of your web application.</p>
<h2 id="motivation">Motivation</h2>
<p>There are three main reasons why excluding middleware on routes can be necessary:</p>
<ol>
<li>Performance Optimization: Some middleware functions can be
resource-intensive, especially when dealing with large amounts of data
or complex logic. By excluding middleware on routes where it's not
needed, you can significantly improve the performance of your
application.</li>
<li>Security Requirements: Depending on your application's security
requirements, you may need to exclude middleware on certain routes to
prevent unauthorized access or protect sensitive data. For example, you
may want to exclude middleware that logs user activity on routes that
handle sensitive user information.</li>
<li>Customization Needs: Sometimes, you may need to customize the behavior
of certain routes or controllers in a way that conflicts with the
functionality of certain middleware. In these cases, excluding
middleware on those specific routes or controllers can be necessary to
achieve the desired behavior.</li>
</ol>
<h2 id="approaches">Approaches</h2>
<p>There is a documented way to remove attached route middleware (not global
middleware) from a particular route, taken from the
<a href="https://laravel.com/docs/10.x/middleware#excluding-middleware">docs</a>:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span>Route::get('/profile', function () {
</span><span> // ...
</span><span>})->withoutMiddleware([MyCustomMiddleware::class]);
</span></code></pre>
<p>There is also this way, which is not documented but appears to work:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span>Route::group(['prefix' => 'prefix', 'excluded_middleware' => ['api']], function () {
</span><span> Route::get('/profile', function () {
</span><span> // ...
</span><span> });
</span><span>});
</span></code></pre>
<p>On previous versions of Laravel, route middleware was displayed
automatically using:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">php</span><span> artisan route:list
</span></code></pre>
<p>Currently, at least on <code>laravel/framework</code> version <code>v9.52.4</code>, the middleware
is hidden by default. It is possible to display it using the <code>--verbose</code>
or, for short, <code>-v</code> option:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">php</span><span> artisan route:list</span><span style="color:#bf616a;"> -v
</span></code></pre>
<p>In case you do not know it, the <code>--path</code> option is quite handy when
combined with the above:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">php</span><span> artisan route:list</span><span style="color:#bf616a;"> --path</span><span> user</span><span style="color:#bf616a;"> -v
</span></code></pre>
<p>Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://stackoverflow.com/a/63367764/1972509">https://stackoverflow.com/a/63367764/1972509</a></li>
<li><a href="https://laravel.com/docs/10.x/middleware">https://laravel.com/docs/10.x/middleware</a></li>
<li><a href="https://github.com/laravel/framework/issues/33041">https://github.com/laravel/framework/issues/33041</a></li>
<li><a href="https://github.com/laravel/framework/pull/32993">https://github.com/laravel/framework/pull/32993</a></li>
<li><a href="https://laracasts.com/discuss/channels/laravel/laravel-route-list-php-artisan-routelist-displaying-middleware-on-new-lines?page=1&replyId=614735">https://laracasts.com/discuss/channels/laravel/laravel-route-list-php-artisan-routelist-displaying-middleware-on-new-lines?page=1&replyId=614735</a></li>
</ul>
How to update Laravel version with Composer2023-03-12T00:00:00+00:002023-03-12T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/how-to-update-laravel-version-with-composer/<p>Updating Laravel to a major version might be straightforward, but more
often than not, it is not. I personally always struggle to get it done
quickly and usually spend far more time on the task than expected. This is
especially true in larger projects where there might be multiple
dependencies, as each of them can prolong the upgrade process.</p>
<p>Most of the time the error message boils down to something similar to the
following excerpt:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Running composer update laravel/framework --with-all-dependencies
</span><span>Loading composer repositories with package information
</span><span>Updating dependencies
</span><span>Your requirements could not be resolved to an installable set of packages.
</span><span>
</span><span> Problem 1
</span><span> - Root composer.json requires laravel/framework *, found laravel/framework[x.x.x] but these were not loaded, likely because it conflicts with another require.
</span></code></pre>
<p>The error is not very verbose, and in Composer before version 2.0, it was
probably even less so.</p>
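<p>Composer itself can help untangle this: the built-in <code>why-not</code> command (an alias of <code>prohibits</code>) lists which installed packages prevent a target version from being installed, for example:</p>

```shell
# List every package whose version constraints block laravel/framework ^9.
composer why-not laravel/framework "^9"
```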
<h2 id="dealing-with-dependencies">Dealing with dependencies</h2>
<p>The above error is usually longer and contains some hints about the
conflicting package, though usually buried somewhere deeper, so pay
attention to its entire output. Most probably, the currently locked version
of some package is not available for the Laravel version you are trying to install.</p>
<p>To find out what version is actually installed, what I usually do is:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">composer</span><span> show | </span><span style="color:#bf616a;">grep </span><span><package>
</span></code></pre>
<p>Next, I visit <a href="https://packagist.org/">https://packagist.org/</a> and look for the lowest
version of the conflicting package that supports what I am trying to
install. Let's illustrate the above with the
<a href="https://packagist.org/packages/spatie/laravel-fractal">laravel-fractal</a>
package. Example for the currently installed version:</p>
<p><img src="https://peterbabic.dev/blog/how-to-update-laravel-version-with-composer/requires-1.png" alt="Version 5.8.1 of laravel-fractal supports lower version of illuminate" /></p>
<p>The next major version correctly supports our target:</p>
<p><img src="https://peterbabic.dev/blog/how-to-update-laravel-version-with-composer/requires-2.png" alt="Version 6.0.0 of laravel-fractal supports higher version of illuminate" /></p>
<p>Why the lowest, you might ask? Because sometimes, in larger projects,
things are not as bleeding edge as they could be, and in the meantime, the
newest release of your conflicting package might also be out of bounds, but
from the other side of the spectrum.</p>
<blockquote>
<p><strong>Note:</strong> Please check also <code>php</code> version required by the package, as
this is also a common source of package upgrade conflicts, reported by
composer.</p>
</blockquote>
<h2 id="no-package-version-supports-my-target">No package version supports my target?</h2>
<p>But what if no package version shown on Packagist has constraints that fit
my target requirements? Well, that happens too. In that case the update is
not that straightforward. What I usually do is replace the remote package
with a local, forked one, where I manually update nothing but the
requirements in the <code>composer.json</code> of said package.</p>
<p>This is usually a major pain and outside the scope of this article, but it
will keep you going if done right. The next useful thing I do is to open a
Pull Request upstream with just the single change to <code>composer.json</code> I
made. Keep in mind it is well worth including updates to <code>README.md</code> in
that Pull Request too. If nothing else needs changing, the maintainer might
accept the request so soon that you might not even need to force Composer
to accept the local package in the first place (depending on how
time-constrained you are).</p>
<h2 id="how-to-upgrade">How to upgrade</h2>
<p>Once I know the target version I want the package to be at, the next major
problem is that usually multiple packages need to be upgraded <em>at once</em>.
They simply do not want to be upgraded one by one, due to tight
constraints, partly illustrated in the screenshots above.</p>
<p>What I usually do is set the target versions in Composer for each
individual package first, without updating, like this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">composer</span><span> require</span><span style="color:#bf616a;"> --no-update</span><span> spatie/laravel-fractal "</span><span style="color:#a3be8c;">^6</span><span>"
</span><span style="color:#bf616a;">composer</span><span> require</span><span style="color:#bf616a;"> --no-update</span><span> laravel/framework "</span><span style="color:#a3be8c;">^9</span><span>"
</span></code></pre>
<blockquote>
<p><strong>Note:</strong> the double-quotes around version number are not even needed on
<code>bash</code>, however, depending on configuration of your shell, they might be
necessary. For instance, <code>zsh</code> can be configured via
<code>setopt EXTENDED_GLOB</code> to treat the caret <code>^</code> character differently.</p>
</blockquote>
<p>Then I try to update the whole thing:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">composer</span><span> update</span><span style="color:#bf616a;"> --with-all-dependencies</span><span> spatie/laravel-fractal laravel/framework
</span></code></pre>
<p>Such a command might pass or might output conflicts, but this way it is
easy to iterate in small steps, methodically adjusting dependencies to
match the desired outcome.</p>
<h2 id="what-about-dev-dependencies">What about dev dependencies?</h2>
<p>Sometimes, during such a major upgrade, I need to adjust the locked dev
dependencies versions as well. Dev dependencies are located under
<code>require-dev</code> in <code>composer.json</code>. This might prove a little bit tricky. For
example:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>phpunit/phpunit is currently present in the require-dev key and you ran the command without the --dev flag, which will move it to the require key.
</span><span>Do you want to move this requirement? [no]? yes
</span><span>./composer.json has been updated
</span></code></pre>
<p>As you can see, I typed <code>yes</code> into the prompt to confirm. It is easier to
move some of the dev dependencies into normal dependencies for the sake of
the upgrade. Now the project can be upgraded like this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">composer</span><span> update</span><span style="color:#bf616a;"> --with-all-dependencies</span><span> spatie/laravel-fractal laravel/framework phpunit/phpunit
</span></code></pre>
<p>After the upgrade, I can move all the dev dependencies back like this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">composer</span><span> require</span><span style="color:#bf616a;"> --dev</span><span> phpunit/phpunit
</span></code></pre>
<p>Confirm by typing <code>yes</code> again.</p>
<blockquote>
<p><strong>Note:</strong> One should avoid updating <code>composer.json</code> manually.</p>
</blockquote>
<p>The reason for this back-and-forth package shuffling is that Composer seems
to either update dependencies or the dev dependencies, not both. Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://github.com/composer/composer/issues/9393">https://github.com/composer/composer/issues/9393</a></li>
<li><a href="https://getcomposer.org/doc/01-basic-usage.md">https://getcomposer.org/doc/01-basic-usage.md</a></li>
</ul>
High CPU usage with Yubikey and pcscid2022-12-18T00:00:00+00:002022-12-18T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/high-cpu-usage-with-pcscid-and-yubikey/<p>There is an issue I have been having for quite a long time that I hadn't
been able to solve quickly, so I just sucked it up. The issue is that if I
unplug the Yubikey from the laptop's USB port, one CPU goes haywire at 100%
usage, and some, but not all, services or applications that require
internet access cannot reach it.</p>
<p>The issue resolves itself when the Yubikey is plugged back into the USB
port or the <code>pcscd.service</code> is restarted:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> systemctl restart pcscd.service
</span></code></pre>
<p>There is nothing relevant I could find in the <code>dmesg</code> output, so I
probed the journal:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">journalctl -xeu</span><span> pcscd.service
</span></code></pre>
<p>There is <em>something</em>, but with my current understanding it is not of much
help; still, I am keeping it here for searchability:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Dec 17 10:42:42 peterbabic pcscd[600525]: 99999999 ccid_usb.c:899:WriteUSB() write failed (1/45): LIBUSB_ERROR_PIPE
</span><span>Dec 17 10:42:42 peterbabic pcscd[600525]: 00000037 ifdwrapper.c:364:IFDStatusICC() Card not transacted: 612
</span><span>Dec 17 10:42:42 peterbabic pcscd[600525]: 00000010 eventhandler.c:336:EHStatusHandlerThread() Error communicating to: Yubico YubiKey OTP+FIDO+CCID 00 00
</span><span>Dec 17 10:42:42 peterbabic pcscd[600525]: 00479093 ccid_usb.c:899:WriteUSB() write failed (1/45): LIBUSB_ERROR_NO_DEVICE
</span><span>Dec 17 10:42:42 peterbabic pcscd[600525]: 00000062 ccid_usb.c:1501:InterruptRead() libusb_submit_transfer failed: LIBUSB_ERROR_NO_DEVICE
</span></code></pre>
<p>The most common recommended solution appears to be:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#96b5b4;">echo </span><span>"</span><span style="color:#a3be8c;">disable-ccid</span><span>" >> </span><span style="color:#bf616a;">~</span><span>/.gnupg/scdaemon.conf
</span></code></pre>
<p>I was quite surprised I did not have it there, because last December I
personally
<a href="/blog/openpgp-smartcard-kdf-issue-bad-pin/#running-gnupg-2-3-1-on-arch">wrote a post</a>
suggesting inserting it there.</p>
<p>So I (re-)inserted the config option into the file, and now I will wait
and observe what happens. I am adding a list of open links at the bottom
because I feel I will need to get back to this in the future.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://ludovicrousseau.blogspot.com/2019/06/gnupg-and-pcsc-conflicts.html">https://ludovicrousseau.blogspot.com/2019/06/gnupg-and-pcsc-conflicts.html</a></li>
<li><a href="https://mysiar.github.io/devops/2020/08/27/yubico-gpg-trouble.html">https://mysiar.github.io/devops/2020/08/27/yubico-gpg-trouble.html</a></li>
<li><a href="https://gist.github.com/artizirk/d09ce3570021b0f65469cb450bee5e29">https://gist.github.com/artizirk/d09ce3570021b0f65469cb450bee5e29</a></li>
<li><a href="https://forum.yubico.com/viewtopic8599.html?p=8405">https://forum.yubico.com/viewtopic8599.html?p=8405</a></li>
<li><a href="https://support.yubico.com/hc/en-us/articles/360013714479-Troubleshooting-Issues-with-GPG">https://support.yubico.com/hc/en-us/articles/360013714479-Troubleshooting-Issues-with-GPG</a></li>
<li><a href="https://bbs.archlinux.org/viewtopic.php?id=271457">https://bbs.archlinux.org/viewtopic.php?id=271457</a></li>
<li><a href="https://github.com/Yubico/yubioath-flutter/issues/78#issuecomment-238564528">https://github.com/Yubico/yubioath-flutter/issues/78#issuecomment-238564528</a></li>
<li><a href="https://github.com/LudovicRousseau/PCSC/issues/65">https://github.com/LudovicRousseau/PCSC/issues/65</a></li>
<li><a href="https://bbs.archlinux.org/viewtopic.php?id=244769">https://bbs.archlinux.org/viewtopic.php?id=244769</a></li>
<li><a href="https://ask.fedoraproject.org/t/pcscd-has-to-be-restarted-at-every-boot-to-get-my-ssh-keys-from-my-yubikey/24571">https://ask.fedoraproject.org/t/pcscd-has-to-be-restarted-at-every-boot-to-get-my-ssh-keys-from-my-yubikey/24571</a></li>
<li><a href="https://github.com/FiloSottile/yubikey-agent/issues/81">https://github.com/FiloSottile/yubikey-agent/issues/81</a></li>
</ul>
JSON formatting in DBeaver2022-11-28T00:00:00+00:002022-11-28T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/json-formatting-in-dbeaver/<p>This feature really helped me save some time almost daily and it was hiding
in plain sight. DBeaver, the open-source database management software, can
format JSON columns for you. Before I found this feature, I had to copy and
paste the column into an editor to make sense of the nested JSON
structures, because formatted as a single-line string, it is very hard to
understand what the contents are about. Even more so when you want to edit
something.</p>
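<p>Under the hood this is plain JSON pretty-printing; the same readability
jump can be reproduced from the shell (a sketch, assuming
<code>python3</code> is installed):</p>

```shell
# A nested value stored as one long string...
raw='{"user":{"id":7,"roles":["admin","dev"]}}'
# ...becomes readable once indented:
echo "$raw" | python3 -m json.tool
```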
<p><img src="https://peterbabic.dev/blog/json-formatting-in-dbeaver/./dbeaver-json.png" alt="screenshot of JSON formatting in DBeaver" /></p>
<p>More in the link below. Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://dbeaver.com/docs/wiki/Working-with-XML-and-JSON/">https://dbeaver.com/docs/wiki/Working-with-XML-and-JSON/</a></li>
</ul>
How not to securely erase a NVME drive2022-10-10T00:00:00+00:002022-10-11T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/how-not-to-securely-erase-nvme-drive/<p>I got a replacement for my Samsung MZVLW256HEHP-000L7 NVMe 256GB M.2 PCI
Express X4 SSD, known also simply as Samsung PM961. It is an OEM part.
After replacing it with the new one, a Samsung 980 1TB, I put the old one
up for sale. It was my daily driver, so I did not want any meaningful data
to be recoverable from it. I connected it to the computer with a USB to
NVME M.2 converter (AXAGON EEM2-SG2), which by the way I can now recommend
(no affiliate link, sorry), and started the old, magnetic-HDD type of
secure data erase, using the <code>shred</code> utility:
<blockquote>
<p><strong>Caution:</strong> the following command(s) will IRREVERSIBLY destroy your
data. Proceed only when you understand the implications!</p>
</blockquote>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">shred -vfz</span><span> /dev/sdX
</span></code></pre>
<p>But stop! The <code>shred</code> or a similar utility like <code>dd</code>
<a href="https://unix.stackexchange.com/questions/593181/is-shred-bad-for-erasing-ssds">is not the best way to securely erase an SSD</a>!
The thread recommends the <code>blkdiscard</code> utility:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">blkdiscard -s</span><span> /dev/sdX
</span></code></pre>
<p>This command, on the other hand, should not decrease the lifespan of the
SSD as drastically as <code>shred</code> does, but it looks like data may
still be quite recoverable afterwards (depending on the threat model). It
did not work for me, however:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>blkdiscard: /dev/nvme0n1: BLKSECDISCARD ioctl failed: Operation not supported
</span></code></pre>
<p>Looking around the Internet, it suddenly became apparent to me that
there are at least a dozen ways to "securely and properly erase an NVME
SSD".</p>
<p>Starting with this <a href="https://unix.stackexchange.com/a/553173/109352">answer</a>
talking about the way ATA SSD drives can be securely erased quickly just by
changing the encryption password using the <code>hdparm</code> utility:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">hdparm --user-master</span><span> u</span><span style="color:#bf616a;"> --security-set-pass</span><span> hunter1 /dev/sdX
</span><span style="color:#bf616a;">hdparm --user-master</span><span> u</span><span style="color:#bf616a;"> --security-erase</span><span> hunter1 /dev/sdX
</span></code></pre>
<p>This failed, but I expected it, as my drive was an NVME SSD, not a SATA
SSD. The output, for the record:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Issuing SECURITY_SET_PASS command, password="hunter1", user=user, mode=high
</span><span>SECURITY_SET_PASS: Inappropriate ioctl for device
</span></code></pre>
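<p>For completeness, on an actual SATA drive it is worth checking the
security state first: a drive reported as <em>frozen</em> will refuse
<code>--security-set-pass</code>, and a suspend/resume cycle usually
unfreezes it. A read-only sketch (<code>/dev/sdX</code> is a placeholder):</p>

```shell
# Show the Security section of the ATA identify data (read-only query).
if [ -e /dev/sdX ]; then
    sudo hdparm -I /dev/sdX | grep -A8 -i 'security'
else
    echo "replace /dev/sdX with a real device first"
fi
```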
<p>However, this got me on the track, as I had no idea about this whole
"encryption password" rabbit hole. Searching further led me to another
<a href="https://askubuntu.com/a/1310876/350681">answer</a> explaining the same
process, but for NVME drives using the <code>nvme-cli</code> utility. Exactly what I
needed. A quick glance at the options:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman -S nvme-cli
</span><span style="color:#bf616a;">man</span><span> nvme format
</span><span style="color:#bf616a;">man</span><span> nvme sanitize
</span></code></pre>
<p>This is probably what I needed - securely erase the drive, destroying
the drive's encryption key in the process (<code>-s2</code>, cryptographic
erase):</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">nvme</span><span> list
</span><span style="color:#bf616a;">nvme</span><span> format</span><span style="color:#bf616a;"> -s2</span><span> /dev/nvme0n1
</span></code></pre>
<p>Running NVME sanitize would probably be an even better option, as it
appears to also clear any caches, not just the data in the namespace, but I
would definitely need more time studying both. Also, I could not even run
it properly, getting complaints about a bad sanitize argument. Consider
doing your own research.</p>
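<p>Part of that research can be done with read-only queries: the Identify
Controller data reports which erase paths the controller actually supports.
A sketch, assuming the drive shows up as <code>/dev/nvme0n1</code>:</p>

```shell
# fna: Format NVM attributes (bit 2 set => cryptographic erase supported)
# sanicap: supported sanitize operations (0 => sanitize not supported)
if command -v nvme >/dev/null && [ -e /dev/nvme0n1 ]; then
    sudo nvme id-ctrl /dev/nvme0n1 -H | grep -iE 'fna|sanicap'
else
    echo "nvme-cli or /dev/nvme0n1 not available here"
fi
```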
<p>The <code>nvme</code> tool however failed right at step 1, listing the
devices using <code>nvme list</code>. The reason is that the USB to NVME
M.2 converter probably does not implement all the required commands, as
hinted in this
<a href="https://superuser.com/questions/1718993/securely-erase-nvme-ssd-that-is-connected-via-usb-converter#comment2653196_1718993">comment</a>.</p>
<p>So I put the old NVME back into the laptop and booted a live Linux image
from a USB stick. Now, with the NVME connected over native PCIe lanes,
without any USB converter in the way, the <code>nvme list</code> command
properly recognized the drive, outputting detailed information about Node,
Generic, SN, Model, Namespace, Usage, Format and FW Revision of the PM961
drive.</p>
<p>The Debian live image did not complain, but the Arch one, probably due
to a newer version of <code>nvme-cli</code>, complained a little bit before
outputting the contents of <code>nvme list</code>, yet proceeded as well:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>nvme0: Identify(0x6), Invalid Field in Command (sct 0x0 / sc 0x2)
</span></code></pre>
<p>Trying the <code>nvme format</code> command again failed spectacularly in Debian live:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>NVMe status: INVALID_OPCODE: The associated command opcode field in not valid(0x2001)
</span></code></pre>
<p>In Arch live, the error was, as expected, even a little bit more verbose:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>nvme0: Format NVM(0x80), Invalid Command Opcode (sct 0x0 / sc 0x1)
</span><span>NVMe status: Invalid Command Opcode: A reserved coded value or an unsupported value in the command opcode field(0x2001)
</span></code></pre>
<p>Ouch. It is noted
<a href="https://github.com/linux-nvme/nvme-cli/issues/627#issuecomment-569685237">here</a>
and
<a href="https://github.com/linux-nvme/nvme-cli/issues/84#issuecomment-320353456">here</a>
that this is a known bug for the Samsung PM951 and PM961 and that a simple
suspend should resolve the issue. I was happy for a brief moment. Sadly,
somehow this did not work for me, yet again. There was no change in the
behavior. From here on, the ride was a steep downhill.</p>
<p>Other suggestions in the above two GitHub threads were to use a
<a href="https://support.lenovo.com/sk/sk/downloads/ds019026">Lenovo EFI application</a>
which is a bootable image that works on ThinkPads (I still rock the trusty
T470 at the time of writing) and is meant to erase a cryptographic key on
the SSD. I was not able to boot this piece of software by any means, not in
UEFI mode, nor in Legacy BIOS mode, nor in any other combination that came
to my mind (there is a note about it being supported only in the UEFI Only
or UEFI First boot modes).</p>
<p>Another option is to use
<a href="https://pcsupport.lenovo.com/sk/sk//downloads/ds119265">Lenovo NVME Firmware Utility</a>
which is for Windows (but I can dual-boot from mSATA PCIex1 Transcend 430S
512GB internal SSD drive), but following these
<a href="https://gist.github.com/klingtnet/22ab0b907e2d9d20f98c72c93ea5dd37">instructions</a>
it appears the updater utility could even be run on Arch or another Linux
distribution (again, not tested yet). Trying this on Windows, it correctly
identified both the Transcend 430S and the Samsung PM961 to be present, but
it did not offer a firmware update for either. So no luck here.</p>
<p>As a last resort, I tried the
<a href="https://www.cyberciti.biz/faq/upgrade-update-samsung-ssd-firmware/">easier firmware upgrade option</a>,
a <code>fwupdmgr</code> which is part of the <code>fwupd</code>
<a href="https://archlinux.org/packages/?name=fwupd">package</a> but, as I expected,
it did not pick up the Samsung PM961 SSD for an update. It did, however,
update my Intel Management Engine and also the System Firmware, which I
assume was the BIOS, as the next reboot flashed a new BIOS version
containing breaking changes to EFI (which were mentioned during the install
process), which in turn required me to reinstall GRUB from live Arch.
Living on the edge.</p>
<p>This is the end. In the beginning I thought it would be a simple device
formatting, and here I am in the middle of the night with a drive that is
still not prepared to be handed to some stranger. I will probably go for
<code>shred</code> and hope for the best. Wish me luck!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://askubuntu.com/a/1310876/350681">https://askubuntu.com/a/1310876/350681</a></li>
<li><a href="https://askubuntu.com/a/1388682/350681">https://askubuntu.com/a/1388682/350681</a></li>
<li><a href="https://gist.github.com/klingtnet/22ab0b907e2d9d20f98c72c93ea5dd37">https://gist.github.com/klingtnet/22ab0b907e2d9d20f98c72c93ea5dd37</a></li>
<li><a href="https://github.com/linux-nvme/nvme-cli/issues/627">https://github.com/linux-nvme/nvme-cli/issues/627</a></li>
<li><a href="https://github.com/linux-nvme/nvme-cli/issues/84">https://github.com/linux-nvme/nvme-cli/issues/84</a></li>
<li><a href="https://opensource.com/article/21/9/nvme-cli">https://opensource.com/article/21/9/nvme-cli</a></li>
<li><a href="https://pcsupport.lenovo.com/sk/sk/downloads/ds119265">https://pcsupport.lenovo.com/sk/sk/downloads/ds119265</a></li>
<li><a href="https://superuser.com/questions/1718993/securely-erase-nvme-ssd-that-is-connected-via-usb-converter#comment2653196_1718993">https://superuser.com/questions/1718993/securely-erase-nvme-ssd-that-is-connected-via-usb-converter#comment2653196_1718993</a></li>
<li><a href="https://support.lenovo.com/sk/sk/downloads/ds019026">https://support.lenovo.com/sk/sk/downloads/ds019026</a></li>
<li><a href="https://unix.stackexchange.com/a/553173/109352">https://unix.stackexchange.com/a/553173/109352</a></li>
<li><a href="https://unix.stackexchange.com/questions/593181/is-shred-bad-for-erasing-ssds">https://unix.stackexchange.com/questions/593181/is-shred-bad-for-erasing-ssds</a></li>
<li><a href="https://www.cyberciti.biz/faq/upgrade-update-samsung-ssd-firmware/">https://www.cyberciti.biz/faq/upgrade-update-samsung-ssd-firmware/</a></li>
<li><a href="https://www.freecodecamp.org/news/securely-erasing-a-disk-and-file-using-linux-command-shred/">https://www.freecodecamp.org/news/securely-erasing-a-disk-and-file-using-linux-command-shred/</a></li>
</ul>
Restore data from Gitea restic backup2022-10-03T00:00:00+00:002022-10-03T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/restore-data-from-gitea-restic-backup/<p>Just a very quick update about how I was able to restore a git repository
stored inside Gitea and backed up via restic.</p>
<blockquote>
<p><strong>Warning:</strong> this guide most probably does not work correctly with LFS
files, but they might either not be that critical to restore or the
repository might not even use LFS in the first place.</p>
</blockquote>
<p>Restore the bare repository:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">restic --repo</span><span> path/to/repository restore latest</span><span style="color:#bf616a;"> --target</span><span> restored</span><span style="color:#bf616a;"> --include</span><span> /path/to/gitea/data/git/repositories/peter.babic/MY-REPOSITORY.git
</span></code></pre>
<p>Copy the repository here and step into it:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">cp -r</span><span> restored/path/to/gitea/data/git/repositories/peter.babic/MY-REPOSITORY.git .
</span><span style="color:#96b5b4;">cd</span><span> MY-REPOSITORY.git
</span></code></pre>
<p>Create a <code>.git</code> folder inside:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">mkdir</span><span> .git
</span></code></pre>
<p>Move everything into that folder (assuming <code>zsh</code>):</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">setopt</span><span> extendedglob
</span><span style="color:#bf616a;">mv</span><span> ^.git .git
</span><span style="color:#bf616a;">unsetopt</span><span> extendedglob
</span></code></pre>
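<p>For <code>bash</code> users, an equivalent move can be sketched with
<code>extglob</code>; the demonstration below runs in a throwaway directory
so it is safe to try anywhere:</p>

```shell
# !(.git) with extglob matches everything except .git, mirroring zsh's
# ^.git (hidden files are excluded by both).
demo=$(mktemp -d) && cd "$demo"
touch HEAD config description    # stand-ins for bare-repo contents
mkdir .git
bash -O extglob -c 'mv !(.git) .git'
ls .git
```

<p>The <code>-O extglob</code> flag matters: enabling the option inside the
same <code>-c</code> string would come too late, as the pattern is parsed
before the command runs.</p>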
<p>Reset index, as everything will show up as deleted and staged:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> reset</span><span style="color:#bf616a;"> --hard
</span></code></pre>
<p>Optionally, fix origin:</p>
<blockquote>
<p>Edit <code>.git/config</code> file adding line
<code>fetch = +refs/heads/*:refs/remotes/origin/*</code> after <code>url = <...></code> in
<code>[remote "origin"]</code> section.</p>
</blockquote>
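<p>For illustration, the resulting section would look roughly like this
(the <code>url</code> value here is a made-up placeholder - keep whatever
the restored config already contains):</p>

```ini
[remote "origin"]
	url = git@git.example.com:peter.babic/MY-REPOSITORY.git
	fetch = +refs/heads/*:refs/remotes/origin/*
```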
<p>Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://forum.restic.net/t/restoring-one-directory-from-backup-restores-everything/2390/2">https://forum.restic.net/t/restoring-one-directory-from-backup-restores-everything/2390/2</a></li>
<li><a href="https://stackoverflow.com/a/10637882/1972509">https://stackoverflow.com/a/10637882/1972509</a></li>
<li><a href="https://unix.stackexchange.com/a/567986/109352">https://unix.stackexchange.com/a/567986/109352</a></li>
</ul>
Throttle with ReCaptcha Laravel middleware2022-08-04T00:00:00+00:002022-08-04T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/throttle-with-recaptcha-laravel-middleware/<p>After a few days of struggling, I have found a few-lines-long solution to
the problem of how to show a ReCaptcha after a few hits on an endpoint. It
is useful, for instance, for a payment gateway integration, where this way
you make sure an attacker is not abusing your app to find out which card
numbers are real and which are not. Requiring a ReCaptcha after a few
successive hits in a short amount of time greatly reduces this attack
vector. Let's take a look, assuming Laravel 8:</p>
<p>The middleware class simply overrides the
<a href="https://laravel.com/api/8.x/Illuminate/Routing/Middleware/ThrottleRequests.html#method_handle"><code>handle</code> method</a>
of the standard <code>\Illuminate\Routing\Middleware\ThrottleRequests</code>
throttling middleware:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span>
</span><span style="color:#b48ead;">namespace </span><span>App\Middleware;
</span><span>
</span><span style="color:#b48ead;">use </span><span>App\Http\Middleware\</span><span style="color:#ebcb8b;">MyReCaptchaMiddleware</span><span>;
</span><span style="color:#b48ead;">use </span><span style="color:#ebcb8b;">Closure</span><span>;
</span><span style="color:#b48ead;">use </span><span>Illuminate\Routing\Middleware\</span><span style="color:#ebcb8b;">ThrottleRequests</span><span>;
</span><span>
</span><span style="color:#b48ead;">class </span><span style="color:#ebcb8b;">MyReCaptchaThrottleRequests </span><span style="color:#b48ead;">extends </span><span style="color:#a3be8c;">ThrottleRequests
</span><span style="color:#eff1f5;">{
</span><span style="color:#eff1f5;"> </span><span style="color:#65737e;">/**
</span><span style="color:#65737e;"> * Handle an incoming request.
</span><span style="color:#65737e;"> *
</span><span style="color:#65737e;"> * </span><span style="color:#b48ead;">@param</span><span style="color:#65737e;"> \Illuminate\Http\Request $request
</span><span style="color:#65737e;"> * </span><span style="color:#b48ead;">@param</span><span style="color:#65737e;"> \Closure $next
</span><span style="color:#65737e;"> * </span><span style="color:#b48ead;">@param</span><span style="color:#65737e;"> int|string $maxAttempts
</span><span style="color:#65737e;"> * </span><span style="color:#b48ead;">@param</span><span style="color:#65737e;"> float|int $decayMinutes
</span><span style="color:#65737e;"> * </span><span style="color:#b48ead;">@param</span><span style="color:#65737e;"> string $prefix
</span><span style="color:#65737e;"> * </span><span style="color:#b48ead;">@return</span><span style="color:#65737e;"> \Symfony\Component\HttpFoundation\Response
</span><span style="color:#65737e;"> *
</span><span style="color:#65737e;"> * </span><span style="color:#b48ead;">@throws</span><span style="color:#65737e;"> \Illuminate\Http\Exceptions\ThrottleRequestsException
</span><span style="color:#65737e;"> */
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">public function </span><span style="color:#8fa1b3;">handle</span><span style="color:#eff1f5;">(</span><span>$</span><span style="color:#bf616a;">request</span><span style="color:#eff1f5;">, </span><span style="color:#ebcb8b;">Closure </span><span>$</span><span style="color:#bf616a;">next</span><span style="color:#eff1f5;">, </span><span>$</span><span style="color:#bf616a;">maxAttempts </span><span>= </span><span style="color:#d08770;">60</span><span style="color:#eff1f5;">, </span><span>$</span><span style="color:#bf616a;">decayMinutes </span><span>= </span><span style="color:#d08770;">1</span><span style="color:#eff1f5;">, </span><span>$</span><span style="color:#bf616a;">prefix </span><span>= ''</span><span style="color:#eff1f5;">)
</span><span style="color:#eff1f5;"> {
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">return </span><span>$</span><span style="color:#bf616a;">this</span><span style="color:#eff1f5;">-></span><span style="color:#bf616a;">handleRequest</span><span style="color:#eff1f5;">(
</span><span style="color:#eff1f5;"> </span><span>$</span><span style="color:#bf616a;">request</span><span style="color:#eff1f5;">,
</span><span style="color:#eff1f5;"> </span><span>$</span><span style="color:#bf616a;">next</span><span style="color:#eff1f5;">,
</span><span style="color:#eff1f5;"> [
</span><span style="color:#eff1f5;"> (</span><span style="color:#b48ead;">object</span><span style="color:#eff1f5;">) [
</span><span style="color:#eff1f5;"> </span><span>'</span><span style="color:#a3be8c;">key</span><span>' => $</span><span style="color:#bf616a;">prefix</span><span>.$</span><span style="color:#bf616a;">this</span><span style="color:#eff1f5;">-></span><span style="color:#bf616a;">resolveRequestSignature</span><span style="color:#eff1f5;">(</span><span>$</span><span style="color:#bf616a;">request</span><span style="color:#eff1f5;">),
</span><span style="color:#eff1f5;"> </span><span>'</span><span style="color:#a3be8c;">maxAttempts</span><span>' => $</span><span style="color:#bf616a;">this</span><span style="color:#eff1f5;">-></span><span style="color:#bf616a;">resolveMaxAttempts</span><span style="color:#eff1f5;">(</span><span>$</span><span style="color:#bf616a;">request</span><span style="color:#eff1f5;">, </span><span>$</span><span style="color:#bf616a;">maxAttempts</span><span style="color:#eff1f5;">),
</span><span style="color:#eff1f5;"> </span><span>'</span><span style="color:#a3be8c;">decayMinutes</span><span>' => $</span><span style="color:#bf616a;">decayMinutes</span><span style="color:#eff1f5;">,
</span><span style="color:#eff1f5;"> </span><span>'</span><span style="color:#a3be8c;">responseCallback</span><span>' => </span><span style="color:#b48ead;">fn</span><span style="color:#eff1f5;">() => </span><span style="color:#bf616a;">app</span><span style="color:#eff1f5;">(</span><span style="color:#ebcb8b;">MyReCaptchaMiddleware</span><span style="color:#eff1f5;">::</span><span style="color:#d08770;">class</span><span style="color:#eff1f5;">)-></span><span style="color:#bf616a;">handle</span><span style="color:#eff1f5;">(</span><span>$</span><span style="color:#bf616a;">request</span><span style="color:#eff1f5;">, </span><span>$</span><span style="color:#bf616a;">next</span><span style="color:#eff1f5;">),
</span><span style="color:#eff1f5;"> ],
</span><span style="color:#eff1f5;"> ]
</span><span style="color:#eff1f5;"> );
</span><span style="color:#eff1f5;"> }
</span></code></pre>
<p>The above assumes that you already have your own <code>MyReCaptchaMiddleware</code> up
and running; we won't go into its details here, as there are many guides
already. It will be called via the <code>responseCallback</code>, which in
the parent method is normally <code>null</code>. Most of the magic happens
at this very line.</p>
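<p>Conceptually, <code>responseCallback</code> is just a fallback callable
that the limiter invokes instead of throwing
<code>ThrottleRequestsException</code>. A stripped-down, framework-free
sketch of that idea (the names below are made up, not Laravel's internals):</p>

```php
<?php
// Toy limiter: below the limit pass the request through,
// above it run the supplied callback (our ReCaptcha check).
function handleRequest(int $hits, int $maxAttempts, callable $responseCallback): string
{
    if ($hits > $maxAttempts) {
        return $responseCallback();
    }
    return 'passed through';
}

echo handleRequest(3, 5, fn () => 'recaptcha required'), "\n"; // under the limit
echo handleRequest(9, 5, fn () => 'recaptcha required'), "\n"; // limit exceeded
```

<p>In the real middleware the callback returns a Symfony response rather
than a string, but the control flow is the same.</p>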
<p>Next
<a href="https://laravel.com/docs/8.x/middleware#assigning-middleware-to-routes">add</a>
the route middleware inside the <code>app/Http/Kernel.php</code> like this:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span>protected $routeMiddleware = [
</span><span> // ...
</span><span> 'throttle_recaptcha' => MyReCaptchaThrottleRequests::class,
</span><span>];
</span></code></pre>
<p>The last step is to actually assign the <code>throttle_recaptcha</code> middleware to
the route:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span>Route::get('/endpoint', function () {
</span><span> //
</span><span>})->middleware('throttle_recaptcha');
</span></code></pre>
<p>Note it is possible to add the <code>maxAttempts</code> optional parameter:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span>Route::get('/endpoint', function () {
</span><span> //
</span><span>})->middleware('throttle_recaptcha:10');
</span></code></pre>
<p>And even a <code>decayMinutes</code>, as a second parameter:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span>Route::get('/endpoint', function () {
</span><span> //
</span><span>})->middleware('throttle_recaptcha:10,2');
</span></code></pre>
<p>The above will require ReCaptcha verification on our endpoint after 10
hits and will keep requiring it for 2 straight minutes afterward. That's
it. Note that this will still rate-limit the actual endpoint for all its
consumers; depending on your needs, it might need more tweaking to throttle
the endpoint per user! This can be easily done by overriding the
<code>resolveRequestSignature()</code>
<a href="https://github.com/laravel/framework/blob/8.x/src/Illuminate/Routing/Middleware/ThrottleRequests.php#L168">method</a>
as well.</p>
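<p>A stripped-down sketch of what such a per-user override could key on
(again framework-free and hypothetical; the real method receives the
<code>Request</code> object and derives these values from it):</p>

```php
<?php
// Build the throttle key from the authenticated user id when
// available, falling back to the client IP for guests.
function throttleKey(?int $userId, string $ip, string $path): string
{
    return sha1(($userId ?? $ip) . '|' . $path);
}

// Two users hitting the same endpoint get independent counters:
var_dump(throttleKey(1, '10.0.0.1', '/endpoint') === throttleKey(2, '10.0.0.1', '/endpoint')); // bool(false)
```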
<h2 id="named-limiters">Named limiters</h2>
<p>There is a paragraph I purposefully removed from the parent <code>handle()</code>
method, as you might have noticed. For the record, it's
<a href="https://github.com/laravel/framework/blob/8.x/src/Illuminate/Routing/Middleware/ThrottleRequests.php#L52,56">this one</a>:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span>if (is_string($maxAttempts)
</span><span> && func_num_args() === 3
</span><span> && ! is_null($limiter = $this->limiter->limiter($maxAttempts))) {
</span><span> return $this->handleRequestUsingNamedLimiter($request, $next, $maxAttempts, $limiter);
</span><span>}
</span></code></pre>
<p>The removed code serves for a useful feature, loosely called
<a href="https://laravel.com/docs/8.x/routing#attaching-rate-limiters-to-routes">named limiters</a>
where in the place of <code>$maxAttempts</code> parameter is the string with the name
of the throttle rate limiter:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span>Route::middleware(['throttle:my_rate_limiter'])->group(function () {
</span><span> Route::post('/endpoint', function () {
</span><span> //
</span><span> });
</span><span>});
</span></code></pre>
<p>The above would use the rate limiter
<a href="https://laravel.com/docs/8.x/routing#defining-rate-limiters">definition</a>
like this one:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span>use Illuminate\Cache\RateLimiting\Limit;
</span><span>use Illuminate\Support\Facades\RateLimiter;
</span><span>
</span><span>protected function configureRateLimiting()
</span><span>{
</span><span> RateLimiter::for('my_rate_limiter', function (Request $request) {
</span><span> return Limit::perMinute(1000);
</span><span> });
</span><span>}
</span></code></pre>
<p>But since we rather use the provided <code>$maxAttempts</code> and <code>$decayMinutes</code>
middleware parameters only for their intended purpose, we can safely omit
that block. Also, for reasons that are hopefully obvious by now, it is not
possible to use parameters together with named limiters.</p>
<p>If you intend to use named limiters with the <code>throttle_recaptcha</code>
middleware as well, you can keep that block in. It won't hurt at all. I
found its code confusing, as <code>$maxAttempts</code> is definitely not a
good name for a parameter that holds either the actual max attempts or the
name of a rate limiter. I suspect that the named limiters feature was
probably added in as an afterthought. I did not want to confuse my
colleagues during the code review further, so I omitted it. But it is a
useful feature nevertheless, so keep that in mind. Enjoy!</p>
<h2 id="rate-limiter-headers">Rate limiter headers</h2>
<p>There are
<a href="https://github.com/laravel/framework/blob/8.x/src/Illuminate/Routing/Middleware/ThrottleRequests.php#L252">two headers</a>
that are added to the rate limited endpoints:</p>
<ul>
<li><code>X-RateLimit-Limit</code></li>
<li><code>X-RateLimit-Remaining</code></li>
</ul>
<p>It's the latter I am using to determine when to display the ReCaptcha on
the front-end. Simply put, if the remaining hits shown in the
<code>X-RateLimit-Remaining</code> response header equal one, I know that
all successive requests will require a ReCaptcha token, so that is the
exact time to make the user (or the automated bot) pass the test, obtaining
the token and attaching it to legitimate successive requests.</p>
<p>As a side note, for now each request is counted as two hits in
<code>X-RateLimit-Remaining</code>, due to the issue discussed
<a href="https://laracasts.com/discuss/channels/laravel/throttle-middleware-counting-each-hit-as-two-hits">here</a>,
which is unfortunate but can be worked around.</p>
<p>Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://laracasts.com/discuss/channels/laravel/call-a-middleware-from-another-middleware?reply=288672">https://laracasts.com/discuss/channels/laravel/call-a-middleware-from-another-middleware?reply=288672</a></li>
<li><a href="https://laracasts.com/discuss/channels/laravel/how-to-customize-throttle-error?reply=724946">https://laracasts.com/discuss/channels/laravel/how-to-customize-throttle-error?reply=724946</a></li>
<li><a href="https://stackoverflow.com/questions/63873681/laravel-customize-response-headers-when-using-rate-limiting-middleware">https://stackoverflow.com/questions/63873681/laravel-customize-response-headers-when-using-rate-limiting-middleware</a></li>
<li><a href="https://www.codecheef.org/article/how-to-implement-rate-limiting-in-laravel-8">https://www.codecheef.org/article/how-to-implement-rate-limiting-in-laravel-8</a></li>
<li><a href="https://bannister.me/blog/custom-throttle-middleware">https://bannister.me/blog/custom-throttle-middleware</a></li>
<li><a href="https://laracasts.com/discuss/channels/laravel/fortify-rate-throttling-redirecting-opposed-to-error-in-session?reply=696285">https://laracasts.com/discuss/channels/laravel/fortify-rate-throttling-redirecting-opposed-to-error-in-session?reply=696285</a></li>
<li><a href="https://www.cloudways.com/blog/laravel-and-api-rate-limiting/">https://www.cloudways.com/blog/laravel-and-api-rate-limiting/</a></li>
<li><a href="https://dev.to/aliadhillon/new-simple-way-of-creating-custom-rate-limiters-in-laravel-8-65n">https://dev.to/aliadhillon/new-simple-way-of-creating-custom-rate-limiters-in-laravel-8-65n</a></li>
<li><a href="https://stackoverflow.com/questions/66102519/laravel-ratelimiter-throttle-increasing-decay-minutes?rq=1">https://stackoverflow.com/questions/66102519/laravel-ratelimiter-throttle-increasing-decay-minutes?rq=1</a></li>
<li><a href="https://stackoverflow.com/questions/70820870/laravel-rate-limiter-limits-access-wrongly-after-only-one-attempt">https://stackoverflow.com/questions/70820870/laravel-rate-limiter-limits-access-wrongly-after-only-one-attempt</a></li>
<li><a href="https://laraveldaily.com/laravel-too-many-login-attempts-restrict-and-customize/">https://laraveldaily.com/laravel-too-many-login-attempts-restrict-and-customize/</a></li>
<li><a href="https://www.tutorialsbuddy.com/adding-google-recaptcha-in-laravel">https://www.tutorialsbuddy.com/adding-google-recaptcha-in-laravel</a></li>
</ul>
PHP curly syntax for scope resolution is weird2022-06-11T00:00:00+00:002022-06-11T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/php-curly-sytanx-with-scope-resolution/<p>When browsing a PHP documentation, an example at the bottom of the
<a href="https://www.php.net/manual/en/language.types.string.php#language.types.string.parsing.complex">Complex (curly) syntax</a>
section explaining the usage of the
<a href="https://www.php.net/manual/en/language.oop5.paamayim-nekudotayim.php">scope resolution operator</a>
(<code>::</code>) within the curly string syntax in PHP caught my eye. Let's look
at it:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span style="color:#65737e;">// Show all errors.
</span><span style="color:#96b5b4;">error_reporting</span><span>(E_ALL);
</span><span>
</span><span style="color:#b48ead;">class </span><span style="color:#ebcb8b;">beers </span><span style="color:#eff1f5;">{
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">const </span><span style="color:#d08770;">softdrink </span><span>= '</span><span style="color:#a3be8c;">rootbeer</span><span>'</span><span style="color:#eff1f5;">;
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">public static </span><span>$</span><span style="color:#bf616a;">ale </span><span>= '</span><span style="color:#a3be8c;">ipa</span><span>'</span><span style="color:#eff1f5;">;
</span><span style="color:#eff1f5;">}
</span><span>
</span><span>$</span><span style="color:#bf616a;">rootbeer </span><span>= '</span><span style="color:#a3be8c;">A & W</span><span>';
</span><span>$</span><span style="color:#bf616a;">ipa </span><span>= '</span><span style="color:#a3be8c;">Alexander Keith</span><span style="color:#96b5b4;">\'</span><span style="color:#a3be8c;">s</span><span>';
</span><span>
</span><span style="color:#65737e;">// This works; outputs: I'd like an A & W
</span><span style="color:#96b5b4;">echo </span><span>"</span><span style="color:#a3be8c;">I'd like an {${</span><span style="color:#ebcb8b;">beers</span><span style="color:#a3be8c;">::</span><span style="color:#d08770;">softdrink</span><span style="color:#a3be8c;">}}</span><span style="color:#96b5b4;">\n</span><span>";
</span><span>
</span><span style="color:#65737e;">// This works too; outputs: I'd like an Alexander Keith's
</span><span style="color:#96b5b4;">echo </span><span>"</span><span style="color:#a3be8c;">I'd like an {${</span><span style="color:#ebcb8b;">beers</span><span style="color:#a3be8c;">::</span><span>$</span><span style="color:#bf616a;">ale</span><span style="color:#a3be8c;">}}</span><span style="color:#96b5b4;">\n</span><span>";
</span></code></pre>
<p>The above code excerpt struck me as weird, as I did not expect to see
<strong>A & W</strong> or <strong>Alexander Keith's</strong> in the outputs. For a
brief moment I thought that maybe the documentation was wrong, so I decided to
run the script. What I expected to see was:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>I'd like an rootbeer
</span><span>I'd like an ipa
</span></code></pre>
<p>To be fair, I was not so sure about the <strong>ipa</strong> on the second line, but I
was pretty sure the first line would print <strong>rootbeer</strong>. To my
surprise, my PHP 8.1 interpreter matched the comments from the script:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>I'd like an A & W
</span><span>I'd like an Alexander Keith's
</span></code></pre>
<p>To be honest, I have not been so confused about PHP in a good
while. Before I ran the script I thought to myself: nice, I found a way
to use a class name, or more importantly, <code>self::</code> and <code>static::</code>, to print
constants inside a string. Unfortunately, or maybe fortunately, as I learned
something new, the script showed me that I did not understand what was
happening.</p>
<h2 id="reason">Reason</h2>
<p>There is a note in the documentation above the script, so for completeness
I have copied it over. It took me at least three re-reads to fully understand
it, so take your time in case you are also confused at this point:</p>
<blockquote>
<p><strong>Note:</strong><br />
The value accessed from functions, method calls, static class variables,
and class constants inside {$} will be interpreted as the name of a
variable in the scope in which the string is defined. Using single curly
braces ({}) will not work for accessing the return values of functions or
methods or the values of class constants or static class variables.</p>
</blockquote>
<p>In case it still does not make sense to you, the short answer sadly is:
<strong>It is not possible</strong>. Maybe in a future PHP release. You simply
have to use the
<a href="https://www.php.net/manual/en/language.operators.string.php">string concatenation operator</a>
when referencing the content of a class constant. Or in other words, this is
the only correct way to go:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span>
</span><span style="color:#b48ead;">class </span><span style="color:#ebcb8b;">beers </span><span style="color:#eff1f5;">{
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">const </span><span style="color:#d08770;">softdrink </span><span>= '</span><span style="color:#a3be8c;">rootbeer</span><span>'</span><span style="color:#eff1f5;">;
</span><span style="color:#eff1f5;">}
</span><span>
</span><span style="color:#96b5b4;">echo </span><span>"</span><span style="color:#a3be8c;">I'd like an </span><span>" . </span><span style="color:#ebcb8b;">beers</span><span>::</span><span style="color:#d08770;">softdrink </span><span>. "</span><span style="color:#96b5b4;">\n</span><span>";
</span></code></pre>
<p>On the other hand, you can now write some really hard-to-understand code
the way it was written in the example script at the beginning of the post
and cause some serious headaches for your colleagues (or your future self, for
that matter). I was joking. Don't do it. Write code that is
easy to understand whenever possible! Enjoy.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://stackoverflow.com/questions/47999803/trouble-using-php-complex-curly-syntax-with-static-variables">https://stackoverflow.com/questions/47999803/trouble-using-php-complex-curly-syntax-with-static-variables</a></li>
<li><a href="https://www.php.net/manual/en/language.types.string.php#language.types.string.parsing.complex">https://www.php.net/manual/en/language.types.string.php#language.types.string.parsing.complex</a></li>
<li><a href="https://www.php.net/manual/en/language.oop5.paamayim-nekudotayim.php">https://www.php.net/manual/en/language.oop5.paamayim-nekudotayim.php</a></li>
<li><a href="https://www.php.net/manual/en/language.operators.string.php">https://www.php.net/manual/en/language.operators.string.php</a></li>
</ul>
Setting up Tesseract on Ubuntu 18.042022-05-07T00:00:00+00:002022-05-07T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/setting-up-tesseract-on-ubuntu-28-04/<p>I was presented with a task to quickly prepare a bare-metal Linux machine
to run Tesseract OCR for an optical character recognition task. The
criterion was to do the recognition as fast as possible. In my tests, Tesseract 5
was far more precise than Tesseract 4, no matter what options,
configurations and pre-processing I tried (and I tried a lot of them!). The
other requirement was to use Ubuntu 18.04 LTS. It is not the latest release,
but it is still supported for quite some time (Apr 2028). I have gathered some
notes from the process, so I thought I'd share them. Maybe they can help someone.</p>
<h2 id="hp-290-g4-manual">HP 290 G4 manual</h2>
<p>For the task I obtained an HP 290 G4 station in the midi-tower form factor,
as an off-the-shelf part, and in a little bit of a hurry. Tesseract mostly
needs CPU and this machine came with a relatively recent multi-threaded Intel
i3 processor, which later proved a good investment.</p>
<p>However, I had problems getting Ubuntu onto it. Relevant excerpts from the HP
290 G4 <a href="https://peterbabic.dev/blog/setting-up-tesseract-on-ubuntu-28-04/./assets/HP_290_G4_manual.pdf">manual</a>:</p>
<ul>
<li>Turn on or restart the computer or tablet, quickly press <code>Esc</code>, and then
press <code>F9</code> for boot options.</li>
<li>Turn on or restart the computer, and when the HP logo appears, press
<code>F10</code> to enter Computer Setup (UEFI BIOS).</li>
</ul>
<p>The keys are not too common based on my previous experience - I was
expecting <code>F12</code> and <code>F2</code> respectively.</p>
<h2 id="install-openssh">Install OpenSSH</h2>
<p>Ubuntu apparently came without SSH enabled by default
<a href="https://linuxize.com/post/how-to-enable-ssh-on-ubuntu-18-04/">source</a>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> apt install openssh-server
</span><span style="color:#bf616a;">sudo</span><span> systemctl enable ssh.service</span><span style="color:#bf616a;"> --now
</span></code></pre>
<h2 id="node-16-lts">Node 16 LTS</h2>
<p>The <code>node</code> in the repository was version 8. Version 16 LTS was needed
<a href="https://computingforgeeks.com/how-to-install-node-js-on-ubuntu-debian/">source</a>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">curl -sL</span><span> https://deb.nodesource.com/setup_16.x | </span><span style="color:#bf616a;">sudo</span><span> bash -
</span><span style="color:#bf616a;">sudo</span><span> apt</span><span style="color:#bf616a;"> -y</span><span> install nodejs
</span></code></pre>
<h2 id="node-red">Node-red</h2>
<p>To install node-red globally via npm:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> npm install</span><span style="color:#bf616a;"> -g --unsafe-perm</span><span> node-red
</span></code></pre>
<h2 id="pm2-process-manager-for-node-red">PM2 process manager for node-red</h2>
<p>To make node-red start on boot on Ubuntu (unlike in Raspbian) a custom
solution is needed. PM2 was used
<a href="https://nodered.org/docs/faq/starting-node-red-on-boot">source</a>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> npm install</span><span style="color:#bf616a;"> -g</span><span> pm2
</span><span style="color:#bf616a;">pm2</span><span> start $(</span><span style="color:#bf616a;">which</span><span> node-red) -- -v
</span><span style="color:#bf616a;">pm2</span><span> save
</span><span style="color:#bf616a;">pm2</span><span> startup
</span><span style="color:#65737e;"># Now run the command displayed on the screen
</span></code></pre>
<h3 id="pm2-autostart-on-boot">PM2 autostart on boot</h3>
<p>Important note on error <code>PID file not readable</code>
<a href="https://github.com/Unitech/pm2/issues/2912#issuecomment-368325045">source</a>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pm2</span><span> kill
</span><span style="color:#bf616a;">sudo</span><span> systemctl start pm2-USERNAME.service
</span></code></pre>
<h2 id="tesseract-5-1">Tesseract 5.1</h2>
<p>Install Tesseract OCR 5.1 on Ubuntu 18.04
<a href="https://techviewleo.com/how-to-install-tesseract-ocr-on-ubuntu/">source</a>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> apt update
</span><span style="color:#bf616a;">sudo</span><span> add-apt-repository ppa:alex-p/tesseract-ocr-devel
</span><span style="color:#bf616a;">sudo</span><span> apt install</span><span style="color:#bf616a;"> -y</span><span> tesseract-ocr
</span></code></pre>
<h2 id="tessdata-ocrb">tessdata_ocrb</h2>
<p>This is data trained specifically for the OCR-B font used in IDs and
passports, which is the same font that was used in this product as well. The
speedup was very noticeable!</p>
<p><a href="https://github.com/Shreeshrii/tessdata_ocrb">https://github.com/Shreeshrii/tessdata_ocrb</a></p>
<p>Use like this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">TESSDATA_PREFIX</span><span>=</span><span style="color:#a3be8c;">/home/peterbabic/tessdata_ocrb </span><span style="color:#bf616a;">tesseract</span><span> /home/peterbabic/PHOTO.jpg -</span><span style="color:#bf616a;"> -l</span><span> ocrb_int
</span></code></pre>
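<p>With many images to process, the same invocation can be batched over a
directory. Here is a dry-run sketch under my own assumptions (the helper name
and paths are illustrative) that prints each tesseract command for review
before running anything for real:</p>

```shell
# Print (instead of executing) the tesseract invocation for every JPG in a
# directory; helper name and paths are illustrative.
ocr_batch_dry_run() {
    dir="$1"
    for img in "$dir"/*.jpg; do
        [ -e "$img" ] || continue  # the glob matched nothing
        printf 'TESSDATA_PREFIX=%s tesseract %s - -l ocrb_int\n' \
            "$HOME/tessdata_ocrb" "$img"
    done
}

ocr_batch_dry_run "$HOME/scans"
```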
<h2 id="pc-speaker-clicking-sounds">PC speaker clicking sounds</h2>
<p>After Ubuntu started, the PC speaker was making a very annoying periodic
"click" sound. The problem went away for a moment when the volume was
adjusted or when some music was played. The problem did not go away
when the PC speaker was disabled in the BIOS, which was <em>very</em> surprising.
The problem was also not present in the BIOS, GRUB or the pre-installed FreeDOS
environment (which I found is probably built on top of Debian). The problem
was in the power saving options
<a href="https://askubuntu.com/questions/175602/periodic-clicking-sound-from-pc-speaker#comment2013171_195800">source</a>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#96b5b4;">echo </span><span>'</span><span style="color:#a3be8c;">options snd-hda-intel power_save=0</span><span>' | </span><span style="color:#bf616a;">sudo</span><span> tee /etc/modprobe.d/alsa-info.conf
</span></code></pre>
<h2 id="other-links">Other Links</h2>
<ul>
<li><a href="https://github.com/manisandro/gImageReader">https://github.com/manisandro/gImageReader</a></li>
<li><a href="https://github.com/hertzg/tesseract-server">https://github.com/hertzg/tesseract-server</a></li>
<li><a href="https://nanonets.com/blog/ocr-with-tesseract/">https://nanonets.com/blog/ocr-with-tesseract/</a></li>
</ul>
Laravel validation XOR - Exclusive OR2022-03-06T00:00:00+00:002024-03-04T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/laravel-validation-xor-exclusive-or/<p>I needed to create a validation rule in Laravel that would accept either of
the two inputs, but not both and at least one had to be supplied. In other
words, I had to apply the XOR logic operation on them:</p>
<table><thead><tr><th>A</th><th>B</th><th>XOR</th></tr></thead><tbody>
<tr><td>0</td><td>0</td><td>0</td></tr>
<tr><td>0</td><td>1</td><td>1</td></tr>
<tr><td>1</td><td>0</td><td>1</td></tr>
<tr><td>1</td><td>1</td><td>0</td></tr>
</tbody></table>
<p>The search results did offer many solutions, but I did not like any of them
in particular, mostly due to their complexity or lack of clear
re-usability. The closest I could get while keeping complexity to a minimum
was to use the <code>required_without</code> Laravel validation rule like this:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span>
</span><span style="color:#b48ead;">public function </span><span style="color:#8fa1b3;">rules</span><span>(): </span><span style="color:#b48ead;">array
</span><span>{
</span><span> </span><span style="color:#b48ead;">return </span><span>[
</span><span>        '</span><span style="color:#a3be8c;">input1</span><span>' => [
</span><span> '</span><span style="color:#a3be8c;">required_without:input2</span><span>',
</span><span> ],
</span><span>        '</span><span style="color:#a3be8c;">input2</span><span>' => [
</span><span> '</span><span style="color:#a3be8c;">required_without:input1</span><span>',
</span><span> ],
</span><span> ];
</span><span>}
</span></code></pre>
<p>However, this was still not sufficient, as this is not an XOR operation but
a plain OR operation:</p>
<table><thead><tr><th>A</th><th>B</th><th>OR</th></tr></thead><tbody>
<tr><td>0</td><td>0</td><td>0</td></tr>
<tr><td>0</td><td>1</td><td>1</td></tr>
<tr><td>1</td><td>0</td><td>1</td></tr>
<tr><td>1</td><td>1</td><td><strong>1</strong></td></tr>
</tbody></table>
<p>If both inputs are supplied, the validator would happily accept them.
Thinking about it a little, I found that there is also a <code>prohibits</code> Laravel
validation rule, so I applied it:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span>
</span><span style="color:#b48ead;">public function </span><span style="color:#8fa1b3;">rules</span><span>(): </span><span style="color:#b48ead;">array
</span><span>{
</span><span> </span><span style="color:#b48ead;">return </span><span>[
</span><span>        '</span><span style="color:#a3be8c;">input1</span><span>' => [
</span><span> '</span><span style="color:#a3be8c;">required_without:input2</span><span>',
</span><span> '</span><span style="color:#a3be8c;">prohibits:input2</span><span>',
</span><span> ],
</span><span>        '</span><span style="color:#a3be8c;">input2</span><span>' => [
</span><span> '</span><span style="color:#a3be8c;">required_without:input1</span><span>',
</span><span> '</span><span style="color:#a3be8c;">prohibits:input1</span><span>',
</span><span> ],
</span><span> ];
</span><span>}
</span></code></pre>
<p>This works as expected. The XOR operation is applied to both <code>input1</code> and
<code>input2</code>.</p>
<h2 id="validation-messages">Validation messages</h2>
<p>The only drawback was the validation message returned:</p>
<pre data-lang="json" style="background-color:#2b303b;color:#c0c5ce;" class="language-json "><code class="language-json" data-lang="json"><span>{
</span><span> "</span><span style="color:#a3be8c;">message</span><span>": "</span><span style="color:#a3be8c;">The given data was invalid.</span><span>",
</span><span> "</span><span style="color:#a3be8c;">errors</span><span>": {
</span><span> "</span><span style="color:#a3be8c;">qrcode</span><span>": ["</span><span style="color:#a3be8c;">validation.prohibits</span><span>"],
</span><span> "</span><span style="color:#a3be8c;">code</span><span>": ["</span><span style="color:#a3be8c;">validation.prohibits</span><span>"]
</span><span> }
</span><span>}
</span></code></pre>
<p>The above means that the message is simply not supplied and its "path" is
returned instead. Filling in the path <code>validation.prohibits</code> with a string value
would return an error message. This is a core part of the Laravel
validator, see links for documentation. To reduce code repetition, the
result could look like this:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span>
</span><span style="color:#b48ead;">public function </span><span style="color:#8fa1b3;">messages</span><span>(): </span><span style="color:#b48ead;">array
</span><span>{
</span><span> </span><span style="color:#b48ead;">return </span><span>[
</span><span> ...$</span><span style="color:#bf616a;">this</span><span>-></span><span style="color:#bf616a;">customMessage</span><span>('</span><span style="color:#a3be8c;">input1</span><span>', '</span><span style="color:#a3be8c;">input2</span><span>'),
</span><span> ...$</span><span style="color:#bf616a;">this</span><span>-></span><span style="color:#bf616a;">customMessage</span><span>('</span><span style="color:#a3be8c;">input2</span><span>', '</span><span style="color:#a3be8c;">input1</span><span>'),
</span><span> ];
</span><span>}
</span><span>
</span><span style="color:#b48ead;">private function </span><span style="color:#8fa1b3;">customMessage</span><span>(</span><span style="color:#b48ead;">string </span><span>$</span><span style="color:#bf616a;">input</span><span>, </span><span style="color:#b48ead;">string </span><span>$</span><span style="color:#bf616a;">otherInput</span><span>): </span><span style="color:#b48ead;">array
</span><span>{
</span><span> </span><span style="color:#b48ead;">return </span><span>[
</span><span> "$</span><span style="color:#bf616a;">input</span><span style="color:#a3be8c;">.prohibits</span><span>" => "</span><span style="color:#a3be8c;">The </span><span>$</span><span style="color:#bf616a;">input</span><span style="color:#a3be8c;"> field is prohibited when </span><span>$</span><span style="color:#bf616a;">otherInput</span><span style="color:#a3be8c;"> is present.</span><span>"
</span><span> ];
</span><span>}
</span></code></pre>
<p>Hope this helps. Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://stackoverflow.com/a/26547602/1972509">https://stackoverflow.com/a/26547602/1972509</a></li>
<li><a href="https://laravel.com/docs/8.x/validation#customizing-the-error-messages">https://laravel.com/docs/8.x/validation#customizing-the-error-messages</a></li>
<li><a href="https://laravel.com/docs/8.x/validation#rule-prohibits">https://laravel.com/docs/8.x/validation#rule-prohibits</a></li>
<li><a href="https://laravel.com/docs/8.x/validation#rule-required-without">https://laravel.com/docs/8.x/validation#rule-required-without</a></li>
</ul>
Reset MS Teams for Linux2022-03-05T00:00:00+00:002022-03-05T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/reset-ms-teams-for-linux/<p>For a long time (a year and a half) I could not start MS Teams for
Linux as an application from the AUR, packaged as
<a href="https://aur.archlinux.org/packages/teams">teams</a>. I was presented
with this error all the time:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>This version of Teams only supports work or school accounts managed by an organization.
</span></code></pre>
<p>As with anything Microsoft on Linux, I thought it was just not supported,
shrugged and used the web version or <a href="https://meet.jit.si/">Jitsi</a>
Meet instead, because it is far more open-source friendly. However, in the
environment I currently operate in, Teams is vastly preferred. One feature
that the web-based Teams does not offer over the app-based one is taking
control of the other side, something like remote access. Although this feature
falls more into the "nice to have" category, it nevertheless makes remote
collaboration much more bearable.</p>
<p>Looking around, one can quickly find out that the
<a href="https://aur.archlinux.org/packages/teams">teams</a> package in the AUR has a
staggering number of votes for an AUR package - something not so commonly
seen, suggesting it probably works for many users. Alongside its number of
votes, it also boasts a fair share of user comments. Not a single one,
however, mentions this particular message.</p>
<p>What's more, it proved quite hard to find anything relevant to the error
message above that had been bugging me for so long. There were mostly no
relevant results in multiple search engines. All this made me think that
maybe the problem wasn't Microsoft not supporting Teams on Linux, but a
problem with my configuration instead.</p>
<h2 id="solution">Solution</h2>
<p>As it turned out, the problem was in fact in my configuration, probably
some old files in the <code>.config</code> directory. Solved by:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">rm -rf ~</span><span>/.config/Microsoft/Microsoft</span><span style="color:#96b5b4;">\ </span><span>Teams
</span></code></pre>
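<p>If deleting the directory outright feels too risky, a more cautious variant
is to move it aside first, so it can be restored in case the reset does not
help. This is a sketch of my own, not an official procedure:</p>

```shell
# Move a config directory aside instead of deleting it; does nothing when
# the directory does not exist. Helper name is illustrative.
reset_config_dir() {
    cfg="$1"
    [ -d "$cfg" ] || return 0
    mv "$cfg" "$cfg.bak.$(date +%s)"
}

reset_config_dir "$HOME/.config/Microsoft/Microsoft Teams"
```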
<p>Unfortunately, there is no streamlined, automated or officially supported
process to clean the dotfile folders. Keeping the <code>/home</code> folders around for
too long thus results in problems like these. Take this as a reminder to
manually clean the cruft from your home folder from time to time. Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://aur.archlinux.org/packages/teams">https://aur.archlinux.org/packages/teams</a></li>
<li><a href="https://docs.microsoft.com/en-us/answers/questions/569712/teams-won39t-login-on-ubuntu-app.html">https://docs.microsoft.com/en-us/answers/questions/569712/teams-won39t-login-on-ubuntu-app.html</a></li>
</ul>
Excluding file name from vim fzf ripgrep2022-02-27T00:00:00+00:002022-02-27T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/excluding-file-name-from-vim-fzf-ripgrep/<p>Many times, especially with customizable software stacks, there are aspects
or features of the software that we wish they were done differently.
Fortunately, when using open-source software, configuring said software to
do our bidding could be within a few (hundred) keystrokes. This is nothing
new. In fact, it is one of the key selling points of open-source software.</p>
<p>There are times when some software tool we use has an irk that bugs us
for a long time, but overall it works reasonably well, so we do not think of
spending the time learning how to really fix it. This usually goes on until
one of these things happens:</p>
<ol>
<li>We stop using that software</li>
<li>The problem becomes unbearable and we are forced to fix it</li>
<li>We stumble upon the solution and it is surprisingly easy</li>
</ol>
<p>The third point is <em>precisely</em> what I want to present here. Consider the
following screenshot of the <a href="https://github.com/junegunn/fzf.vim">fzf.vim</a>
feature using <a href="https://github.com/BurntSushi/ripgrep">ripgrep</a> to apply
fuzzy search across the project:</p>
<p><img src="https://peterbabic.dev/blog/excluding-file-name-from-vim-fzf-ripgrep/fzf-with-file-names.png" alt="Screenshot showing fzf.vim with ripgrep focusing on a file name without much useful information" /></p>
<p>Now consider the same search but with the file names excluded from the
search results:</p>
<p><img src="https://peterbabic.dev/blog/excluding-file-name-from-vim-fzf-ripgrep/fzf-withhout-file-names.png" alt="Screenshot showing fzf.vim with ripgrep focusing on an actual search term contained within files" /></p>
<p>Infinitely more useful! I still wonder why this is not the default
behavior, but never mind. I eventually got fed up with it, so I searched for
a fix and it turned out to be pretty easy:</p>
<pre data-lang="vim" style="background-color:#2b303b;color:#c0c5ce;" class="language-vim "><code class="language-vim" data-lang="vim"><span style="color:#96b5b4;">command</span><span>! -bang -nargs=</span><span style="color:#b48ead;">*</span><span> Rg call </span><span style="color:#8fa1b3;">fzf#vim#grep</span><span>(</span><span style="color:#a3be8c;">"rg --column --line-number --no-heading --color=always --smart-case "</span><span style="color:#b48ead;">.</span><span style="color:#8fa1b3;">shellescape</span><span>(<q-args>), </span><span style="color:#d08770;">1</span><span>, {</span><span style="color:#a3be8c;">'options'</span><span>: </span><span style="color:#a3be8c;">'--delimiter : --nth 4..'</span><span>}, <bang></span><span style="color:#d08770;">0</span><span>)
</span></code></pre>
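<p>To see why <code>--delimiter : --nth 4..</code> excludes the file name, note how a
ripgrep match line splits on colons - the first three fields are the file, line
and column, and the fuzzy matching is restricted to everything from the fourth
field onward. A quick sketch with a made-up sample line:</p>

```shell
# A ripgrep line has the shape "file:line:column:matched text". Restricting
# fzf to fields 4.. means only the matched text takes part in fuzzy search.
# The sample line is made up.
line='src/main.rs:12:5:let answer = 42;'
printf '%s\n' "$line" | cut -d: -f4-
```

which prints only <code>let answer = 42;</code>, the part the fuzzy search should
actually see.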
<p>Put the above in your <code>.vimrc</code> file and you are ready to go! For those
who wonder <em>why</em> or <em>how</em> this works, check the links below. I really
wish I had found this sooner. Hopefully it will help make your vim writing /
development a little bit easier. Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://dev.to/iggredible/how-to-search-faster-in-vim-with-fzf-vim-36ko">https://dev.to/iggredible/how-to-search-faster-in-vim-with-fzf-vim-36ko</a></li>
<li><a href="https://github.com/junegunn/fzf.vim/issues/346">https://github.com/junegunn/fzf.vim/issues/346</a></li>
<li><a href="https://stackoverflow.com/a/62745519/1972509">https://stackoverflow.com/a/62745519/1972509</a></li>
<li><a href="https://stackoverflow.com/q/59885329/1972509">https://stackoverflow.com/q/59885329/1972509</a></li>
</ul>
Optimize many PDFs at once2022-01-24T00:00:00+00:002022-01-24T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/optimize-many-pdfs-at-once/<p>I had to work with many scanned PDF documents that were saved in a very
unoptimized way, each one taking up tens of MBs. This was not suitable
for sending via email and also unnecessary. I saw an option on my
girlfriend's Mac to <em>optimize PDF</em>. The result was that her document
dropped in size from 3 MB to 57 kB without any visible drop in quality.</p>
<p>Finding out how to do that using the command line on Linux was easy. The
first <a href="https://askubuntu.com/a/256449/350681">StackOverflow result</a> from
a search showed the following use of Ghostscript:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">gs -sDEVICE</span><span>=pdfwrite</span><span style="color:#bf616a;"> -dCompatibilityLevel</span><span>=1.4</span><span style="color:#bf616a;"> -dPDFSETTINGS</span><span>=/ebook \
</span><span style="color:#bf616a;"> -dNOPAUSE -dQUIET -dBATCH -sOutputFile</span><span>=output.pdf input.pdf
</span></code></pre>
<p>It worked flawlessly. I needed to run this on many files in a folder - in
other words, to run it in <strong>batch</strong>. I resorted to using <code>xargs</code>.</p>
<blockquote>
<p><strong>Note:</strong> Using <code>xargs -I</code> like explained below can be potentially
dangerous. Read <a href="/blog/markdown-posts-word-count-bash/#links">links</a> in
one of my posts to learn more.</p>
</blockquote>
<p>The <code>gs</code> command adjusted and piped into <code>xargs</code> looks like this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">find</span><span> . </span><span style="color:#bf616a;">-maxdepth</span><span> 1</span><span style="color:#bf616a;"> -name </span><span>"*.pdf" | </span><span style="color:#bf616a;">xargs -I </span><span>% gs</span><span style="color:#bf616a;"> --ARGUMENTS </span><span>%
</span></code></pre>
<p>Or it could utilize the <code>fd</code> utility with a null character instead of a
newline, via <code>-0</code> or its long form, the <code>--print0</code> flag. This is
the way it was historically combined with <code>xargs</code>, as also noted in the
<a href="https://manned.org/fd.1">fd docs</a>:</p>
<blockquote>
<p><code>-0, --print0</code><br />
Separate search results by the null character (instead of newlines).
Useful for piping results to xargs.</p>
</blockquote>
<p>The command then looks like this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">fd -0 -d1 </span><span>"</span><span style="color:#a3be8c;">\.pdf</span><span>" | </span><span style="color:#bf616a;">xargs -0 -I </span><span>% gs</span><span style="color:#bf616a;"> --ARGUMENTS </span><span>%
</span></code></pre>
<p>In many environments the null-character approach might not even be
necessary, but it is good to know about the connection. The <code>-0</code> on both
sides of the pipe could thus be dropped:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">fd -d1 </span><span>"</span><span style="color:#a3be8c;">\.pdf</span><span>" | </span><span style="color:#bf616a;">xargs -I </span><span>% gs</span><span style="color:#bf616a;"> --ARGUMENTS </span><span>%
</span></code></pre>
<p>Again, whenever using <code>xargs -I</code>, do a dry run first (just whatever
find command you use, without piping it anywhere) as a minimal safety
precaution, so that nothing nasty surprises you. And possibly do your own
research.</p>
<p>The full command I ended up with to obtain batch-processed, size-optimized
PDFs was this one:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">fd -d1 </span><span>"</span><span style="color:#a3be8c;">\.pdf</span><span>" | </span><span style="color:#bf616a;">xargs -I </span><span>% \
</span><span>gs</span><span style="color:#bf616a;"> -sDEVICE</span><span>=pdfwrite</span><span style="color:#bf616a;"> -dCompatibilityLevel</span><span>=1.4</span><span style="color:#bf616a;"> -dPDFSETTINGS</span><span>=/ebook \
</span><span style="color:#bf616a;"> -dNOPAUSE -dQUIET -dBATCH -sOutputFile</span><span>="</span><span style="color:#a3be8c;">/path/to/output/dir/%</span><span>" %
</span></code></pre>
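<p>If the <code>xargs -I</code> route feels risky, a plain shell loop over a glob
avoids the word-splitting pitfalls altogether. A minimal sketch - the output
directory is a hypothetical placeholder:</p>

```shell
#!/usr/bin/env bash
outdir="/path/to/output/dir"   # hypothetical, adjust to your setup

# The glob handles spaces in file names safely, unlike an unquoted find | xargs
for f in ./*.pdf; do
    [ -e "$f" ] || continue   # skip the literal pattern when nothing matches
    gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook \
        -dNOPAUSE -dQUIET -dBATCH -sOutputFile="$outdir/${f##*/}" "$f"
done
```

<p>The <code>${f##*/}</code> expansion strips the leading <code>./</code>, so only the
bare file name is reused in the output path.</p>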
<p>Might come in handy. Enjoy!</p>
Issues restoring Gitea from dump2022-01-10T00:00:00+00:002022-01-10T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/issues-restoring-gitea-from-dump/<p>Somehow, the
<a href="https://docs.gitea.io/en-us/backup-and-restore/">official documentation</a>
for restoring Gitea from dump did not work for me. Roughly, the following
command for the original rootful Docker image could look like this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">/app/gitea/gitea</span><span> dump</span><span style="color:#bf616a;"> --file</span><span> gitea-dump.zip</span><span style="color:#bf616a;"> -c</span><span> /data/gitea/conf/app.ini</span><span style="color:#bf616a;"> --skip-lfs-data --skip-repository
</span></code></pre>
<p>Restoring to postgres according to the official docs failed:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">psql -U </span><span>$</span><span style="color:#bf616a;">USER -d </span><span>$</span><span style="color:#bf616a;">DATABASE </span><span>< gitea-db.sql
</span></code></pre>
<p>The problem was many missing repositories, missing organizations and many
server 500 errors on the issues, pull requests and repositories pages.</p>
<p>Restoring also failed for SQLite, following the official docs above. This
was done just as a confirmation:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sqlite3 </span><span>$</span><span style="color:#bf616a;">DATABASE_PATH </span><span>< gitea-db.sql
</span></code></pre>
<p>Here the failure was even slightly worse, as the GPG key was not showing in
the UI, although the commits appeared to be signed with a valid signature.
Adding the same GPG key again was not possible - Gitea stated the key already
exists, yet it was not shown anywhere.</p>
<p>I have found a
<a href="https://github.com/go-gitea/gitea/issues/12614#issuecomment-695041945">single comment</a>
on the Internet describing how to do it properly. Dumping:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">docker</span><span> exec</span><span style="color:#bf616a;"> -i</span><span> gitea-db-1 /bin/bash</span><span style="color:#bf616a;"> -c </span><span>"</span><span style="color:#a3be8c;">export PGPASSWORD=gitea && /usr/bin/pg_dump -U gitea gitea</span><span>" > dump_DB.sql
</span></code></pre>
<p>And restoring:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">cat</span><span> dump_DB.sql | </span><span style="color:#bf616a;">docker</span><span> exec</span><span style="color:#bf616a;"> -i</span><span> gitea-db-1 psql</span><span style="color:#bf616a;"> -Ugitea
</span></code></pre>
<p>Rsync or copy all the folders containing repositories and other files
manually and restart the container. This worked!</p>
<h2 id="issue-with-dns-resolution">Issue with DNS resolution</h2>
<p>This was part of a migration from a rootful Gitea to a rootless Gitea
server. The rootless image uses Alpine, which is probably the culprit behind
this error for mirrors in the System Notices administration menu:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Migrate repository from https://github.com/peterbabic/repository-XXX failed: Clone: exit status 128 - fatal: unable to access 'https://github.com/peterbabic/repository-XXX.git/': Could not resolve host: github.com
</span></code></pre>
<p>It can also be seen as:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Failed to update mirror repository '/var/lib/gitea/git/repositories/peter.babic/repository-XXX.git': fatal: unable to access 'https://github.com/peterbabic/repository-XXX.git/': Could not resolve host: github.com
</span><span>error: Could not fetch origin
</span></code></pre>
<p>The above message differs depending on the source of the synchronization -
whether it is a cron job or a manual press of the Synchronize button in the
repository settings. Manually mirroring any new repository fails as well.</p>
<p>Entering the container and trying ping confirms the issue:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>$ ping github.com
</span><span>ping: bad address 'github.com'
</span></code></pre>
<p>However pinging some other domains was possible:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>$ ping google.sk
</span><span>PING google.sk (142.251.39.99): 56 data bytes
</span><span>ping: permission denied (are you root?)
</span></code></pre>
<p>And pinging any reachable IP address on the Internet worked, which pointed
to a DNS issue. The problem can be further confirmed by the contents of the
<code>resolv.conf</code> file inside that container:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>$ cat /etc/resolv.conf
</span><span>nameserver 127.0.0.11
</span><span>options ndots:0
</span></code></pre>
<p>The contents of the file are obviously different on the host machine,
which contains at least one real DNS server address instead of Docker's
internal resolver at <code>127.0.0.11</code>.</p>
<h2 id="docker-json-configuration">Docker JSON configuration</h2>
<p>I was able to resolve the DNS issue by editing
<code>~/.config/docker/daemon.json</code> and inserting there the following:</p>
<pre data-lang="json" style="background-color:#2b303b;color:#c0c5ce;" class="language-json "><code class="language-json" data-lang="json"><span>{
</span><span> "</span><span style="color:#a3be8c;">dns</span><span>": ["</span><span style="color:#a3be8c;">8.8.8.8</span><span>"],
</span><span> "</span><span style="color:#a3be8c;">dns-opts</span><span>": ["</span><span style="color:#a3be8c;">ndots:1</span><span>"]
</span><span>}
</span></code></pre>
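<p>Because a malformed <code>daemon.json</code> prevents Docker from starting, it may
be worth validating the file before the restart. A quick sketch using Python's
standard library, assuming <code>python3</code> is available:</p>

```shell
conf="$HOME/.config/docker/daemon.json"

# json.tool exits non-zero on invalid JSON, so it doubles as a validator
if [ -f "$conf" ]; then
    python3 -m json.tool "$conf" > /dev/null && echo "daemon.json is valid JSON"
fi
```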
<p>And then restarting the rootless Docker:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">systemctl --user</span><span> restart docker.service
</span></code></pre>
<p>Accessing the container now makes pinging github.com possible. Mirror
synchronization in Gitea now works. The <code>resolv.conf</code> file now looks just a
little bit different; I expected to find the <code>8.8.8.8</code> IP address there as
well:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>$ cat /etc/resolv.conf
</span><span>nameserver 127.0.0.11
</span><span>options ndots:1
</span></code></pre>
<p>Never mind - <code>127.0.0.11</code> is Docker's embedded DNS resolver, which
forwards queries to the upstream servers configured in <code>daemon.json</code>, so the
setting is reflected there rather than in <code>resolv.conf</code> itself. I was able
to add this configuration as an ansible task:</p>
<pre data-lang="yaml" style="background-color:#2b303b;color:#c0c5ce;" class="language-yaml "><code class="language-yaml" data-lang="yaml"><span>- </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">Edit container 'resolv.conf' options
</span><span>  </span><span style="color:#bf616a;">ansible.builtin.blockinfile</span><span>:
</span><span> </span><span style="color:#bf616a;">path</span><span>: </span><span style="color:#a3be8c;">~/.config/docker/daemon.json
</span><span> </span><span style="color:#bf616a;">validate</span><span>: "</span><span style="color:#a3be8c;">python -mjson.tool %s > /dev/null</span><span>"
</span><span> </span><span style="color:#bf616a;">marker</span><span>: "</span><span style="color:#a3be8c;">{mark}</span><span>"
</span><span> </span><span style="color:#bf616a;">marker_begin</span><span>: "</span><span style="color:#a3be8c;">{</span><span>"
</span><span> </span><span style="color:#bf616a;">marker_end</span><span>: "</span><span style="color:#a3be8c;">}</span><span>"
</span><span> </span><span style="color:#bf616a;">create</span><span>: </span><span style="color:#d08770;">true
</span><span> </span><span style="color:#bf616a;">block</span><span>: </span><span style="color:#b48ead;">|
</span><span style="color:#a3be8c;"> "dns": ["8.8.8.8"],
</span><span style="color:#a3be8c;"> "dns-opts": ["ndots:1"]
</span></code></pre>
<p>I found the above works, but it might not be bulletproof. Note the
validation part - it should work on most systems without the need to install
other packages. Should the file already exist with some valid JSON inside
before this task runs, the task will still append the block into the JSON
structure, but then the validation fails, because a comma <code>,</code> would be
missing before the <code>"dns"</code> part. That bit could be solved by some other
task, so I am leaving it here in case anyone finds it interesting.</p>
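<p>One way to sidestep the missing-comma problem entirely is to manage the
whole file with <code>ansible.builtin.copy</code> and the <code>to_nice_json</code> filter - a
sketch that overwrites any existing content, so it only fits when nothing
else manages the file:</p>

```yaml
- name: Write Docker daemon DNS configuration
  ansible.builtin.copy:
    dest: ~/.config/docker/daemon.json
    content: "{{ daemon_conf | to_nice_json }}"
  vars:
    daemon_conf:
      dns: ["8.8.8.8"]
      dns-opts: ["ndots:1"]
```

<p>Rendering the file from a dictionary guarantees the output is valid JSON,
so no separate validation step is needed.</p>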
<p>The task definitely works if the file does not exist or contains only
braces on <strong>separate lines</strong>, which could happen as well. The task also
runs fine against an empty <code>daemon.json</code> file - although such a file on its
own makes Docker fail to start, because an empty file is not valid JSON. The
error is the following:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>$ journalctl --user -xeu docker
</span><span>dockerd-rootless.sh[5125]: unable to configure the Docker daemon with file /home/user/.config/docker/daemon.json: EOF
</span></code></pre>
<p>But the <code>daemon.json</code> file could well be empty at the time you run your
playbook, even before starting the Docker service, so I think the task is
still useful. Note that the task also fails when the contents of the file are
just <code>{}</code> without a newline in between - which is valid for Docker to
start with and can be quite common. Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://github.com/go-gitea/gitea/issues/12614">https://github.com/go-gitea/gitea/issues/12614</a></li>
<li><a href="https://docs.docker.com/config/containers/container-networking/">https://docs.docker.com/config/containers/container-networking/</a></li>
<li><a href="https://github.com/moby/moby/issues/41003">https://github.com/moby/moby/issues/41003</a></li>
<li><a href="https://github.com/docker/compose/issues/2847#issuecomment-448230151">https://github.com/docker/compose/issues/2847#issuecomment-448230151</a></li>
<li><a href="https://stackoverflow.com/a/47289898/1972509">https://stackoverflow.com/a/47289898/1972509</a></li>
<li><a href="https://docs.gitea.io/en-us/backup-and-restore/">https://docs.gitea.io/en-us/backup-and-restore/</a></li>
<li><a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/blockinfile_module.html">https://docs.ansible.com/ansible/latest/collections/ansible/builtin/blockinfile_module.html</a></li>
</ul>
GnuPG PIN cache, Smartcards, YubiKeys and notifications2022-01-04T00:00:00+00:002022-01-04T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/gnupg-pin-cache-smartcards-yubikeys-and-notifications/<p>I am still obsessed with the OpenPGP smartcard. I know, it is definitely
far inferior to a YubiKey. It has far, far fewer features and its GnuPG
implementation is even
<a href="/blog/openpgp-smartcard-kdf-issue-bad-pin/">riddled with serious bugs</a>
that can take days to work around. It definitely has its peak years behind
it. But no matter how bad it is, I simply like its form factor.</p>
<p>I cannot state this enough. I like how it fits into my wallet, along with
other items of a similar taxonomy, like credit cards or an
<a href="/blog/using-electronic-id-on-arch-in-slovakia-pt2/">electronic ID card</a>.
It also sticks out of the laptop much less intrusively, neatly and quite
subtly. It does not occupy any USB port, which is what I hate the most
about YubiKeys. There are a million form factors of YubiKeys and all have to
go into some USB port. When my laptop is in the dock, I have to reach out to
touch it (this is important, we get to this in a moment). When not docked,
it is easy to touch, but sticks out awkwardly and it can
<a href="https://www.reddit.com/r/yubikey/comments/essq12/this_is_why_we_buy_two_everyone_just_because/">result in accidents</a>.</p>
<h2 id="git-rebase-and-automatic-signing">Git rebase and automatic signing</h2>
<p>So I still use both, trying to figure out all the good, bad and ugly parts
of them. Since I invested so much time in setting up GnuPG, I like to have
the automatic commit signing turned on globally. For me personally, it is a
great feeling.</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> config</span><span style="color:#bf616a;"> --global</span><span> commit.gpgSign true
</span><span style="color:#bf616a;">git</span><span> config</span><span style="color:#bf616a;"> --global</span><span> user.signingkey 0xA44B03E642BB42236780FEA43A1381FCF2738E75
</span></code></pre>
<blockquote>
<p><strong>Remember:</strong> Using the full fingerprint, as above, rather than a
short key ID is preferred, to avoid spoofing, collisions or other possible
problems with the key. More details
<a href="https://riseup.net/en/security/message-security/openpgp/best-practices#dont-rely-on-the-key-id">here</a>.</p>
</blockquote>
<p>With the above in place, rebasing (which can be very common in some git
workflows) can result in a situation where one is asked for a PIN for
every successive operation. This is the default configuration for a GnuPG
smartcard.</p>
<h2 id="gnupg-and-forcesig-option">GnuPG and forcesig option</h2>
<p>The reason why the PIN is asked for every time is the <code>forcesig</code> option,
set on the smartcard/device itself, which forces a PIN prompt every time a
signature is requested, invalidating any cache options in the agent. Insert a
GnuPG compatible device and run:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">gpg --edit-card
</span></code></pre>
<p>This should show <code>Signature PIN ....: forced</code>. Now, you may already know I
use KeePassXC extensively. Take a look at the <a href="/tags/keepass/">keepass</a> tag
if you are interested in other related or semi-related articles. I make
KeePassXC (and KeePassDX with Syncthing, for that matter) an integral part of
mostly anything related to passwords, passphrases, authentication and
security.</p>
<p>Setting up auto-type for any kind of window, prompt or dialog with
KeePassXC is very easy. Just pressing a global keyboard shortcut once in a
while and having <em>the right</em> PIN filled in is not such a big deal. But when
one has to do it many times in a row, it gets annoying. It can be changed,
though.</p>
<p>In the card edit interface, type <code>admin</code>, followed by <code>forcesig</code>. Insert
the admin PIN and type <code>list</code>. You should now see
<code>Signature PIN ....: not forced</code> instead. Suppose I made this change. Now
I've traded security for convenience. Security in the sense that a
malicious process could now in theory sign something with my signature
key, showing that a given piece of code, or even a whole package, was released
by me, tricking people into false trust and thus possibly even into running
malicious code. Or at least this is how I currently understand it. Doing
some more research here would not hurt.</p>
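<p>The whole sequence in the card edit interface looks roughly like this (an
illustrative transcript, not a script to run):</p>

```
$ gpg --edit-card

gpg/card> admin
Admin commands are allowed

gpg/card> forcesig

gpg/card> list
...
Signature PIN ....: not forced
```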
<h2 id="gpg-agent-and-pin-caching">gpg-agent and PIN caching</h2>
<p>Once the PIN is cached via <code>gpg-agent</code>, it is apparently
<a href="https://github.com/drduh/YubiKey-Guide#create-configuration">hard to get it out of the cache</a>,
with the best current solution being to unplug the device. Note there is an
<code>ignore-cache-for-signing</code> agent option, but I did not find out how or when
to use it. And I know there are the <code>default-cache-ttl</code> and <code>max-cache-ttl</code>
agent
<a href="https://www.gnupg.org/(de)/documentation/manuals/gnupg/Agent-Options.html">options</a>
that should go into <code>~/.gnupg/gpg-agent.conf</code>, but given the sheer amount of
raised issues, they probably do not work as most people would expect. Take
a look into the links section for some threads.</p>
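<p>For reference, those two options go into <code>~/.gnupg/gpg-agent.conf</code> as
plain lines with values in seconds - though, as noted, how they interact with
smartcard PINs is questionable:</p>

```
# ~/.gnupg/gpg-agent.conf
default-cache-ttl 60    # expire a cache entry 60 seconds after its last use
max-cache-ttl 120       # hard upper limit, regardless of use
```

<p>A running agent picks the file up after
<code>gpg-connect-agent reloadagent /bye</code>.</p>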
<p>This problem affects the OpenPGP smartcard and similar items, even GNUK
flash sticks - devices without any user input channel outside of the USB
communication. The situation is different with the YubiKey, especially the
YubiKey NEO, which has a capacitive touch area.</p>
<h2 id="signing-with-yubikey-and-touch">Signing with YubiKey and touch</h2>
<p>With the YubiKey and its touch capability, the problem can be mitigated
with the right configuration. This is by design. First install the
<code>yubikey-manager</code> package, take a look at the
<a href="https://docs.yubico.com/software/yubikey/tools/ykman/OpenPGP_Commands.html#touch-policies">docs</a>
and consider running the following:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">ykman</span><span> openpgp keys set-touch sig cached
</span></code></pre>
<blockquote>
<p><strong>Warning:</strong> Do not use the <code>fixed</code> or <code>cached-fixed</code> policy, as it
is by design impossible to revert this setting without a full reset, which in
the case of the GnuPG application is quite a lengthy process. Always start
experimenting with a less permanent policy like <code>on</code> or the aforementioned
<code>cached</code>. You can always step up to the fixed policy later, when you
are absolutely sure you know what you are doing.</p>
</blockquote>
<p>What this does is that when the signature key is required, the PIN is
entered and the YubiKey flashes its LED, waiting for a touch. Every rebase for
the next 15 seconds then won't require any user interaction. After 15 seconds,
just another touch is required.</p>
<p>Should a malicious program try to sign something with my credentials, it
would make me very suspicious, and it would have a hard time getting that
touch out of the blue from me (unless it ran within that 15-second window).</p>
<p>As we can see, the touch feature is a welcome addition. But it brings
another problem with it: how to reliably know the device is requesting our
attention and waiting for the touch? Sure, the LED on it is flashing. But
what if the YubiKey is plugged in somewhere not readily in sight, for
instance in a dock? This brings us to the situation referenced at the
beginning of this post.</p>
<h2 id="yubikey-touch-notification-in-gnome">YubiKey touch notification in Gnome</h2>
<p>Yes, there is a
<a href="https://github.com/maximbaz/yubikey-touch-detector">YubiKey touch detector</a>
project aiming to provide the UI with a signal that the YubiKey
requires a touch. It specifically mentions Arch in its installation guide,
which is nice. In short:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> yubikey-touch-detector
</span><span style="color:#b48ead;">export </span><span style="color:#bf616a;">yubipath</span><span>="$</span><span style="color:#bf616a;">HOME</span><span style="color:#a3be8c;">/.config/yubikey-touch-detector</span><span>"
</span><span style="color:#bf616a;">mkdir -p </span><span>"$</span><span style="color:#bf616a;">yubipath</span><span>"
</span><span style="color:#96b5b4;">echo </span><span>"</span><span style="color:#a3be8c;">YUBIKEY_TOUCH_DETECTOR_LIBNOTIFY=true</span><span>" > "$</span><span style="color:#bf616a;">yubipath</span><span style="color:#a3be8c;">/service.conf</span><span>"
</span><span style="color:#bf616a;">systemctl --user</span><span> daemon-reload
</span><span style="color:#bf616a;">systemctl --user</span><span> enable yubikey-touch-detector.service</span><span style="color:#bf616a;"> --now
</span></code></pre>
<p>There are some nice features revolving around a UNIX socket too - go check
it out. The notification in Gnome looks like the following:</p>
<p><img src="https://peterbabic.dev/blog/gnupg-pin-cache-smartcards-yubikeys-and-notifications/yubikey-is-waiting-for-a-touch-notification.png" alt="YubiKey is waiting for a touch libnotify notification on Gnome" /></p>
<p>And now the best part: a side effect of the above is that the notification
is displayed not only when the YubiKey really waits for a touch - it shows up
even when signing with the OpenPGP smartcard. The card obviously does not
wait for any kind of touch, but this is intrinsic to how
<code>yubikey-touch-detector</code> was created, utilizing <code>gpg --card-status</code>.</p>
<p>This way I could see that something awry is happening with my card if this
notification suddenly started popping up, even without a touch area. This was
unexpected, but I like it as it is. Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://dev.gnupg.org/T3362">https://dev.gnupg.org/T3362</a></li>
<li><a href="https://security.stackexchange.com/q/147267/226580">https://security.stackexchange.com/q/147267/226580</a></li>
<li><a href="https://spin.atomicobject.com/2014/02/09/gnupg-openpgp-smartcard/">https://spin.atomicobject.com/2014/02/09/gnupg-openpgp-smartcard/</a></li>
<li><a href="https://stackoverflow.com/q/49107180/1972509">https://stackoverflow.com/q/49107180/1972509</a></li>
<li><a href="https://superuser.com/q/624343/440086">https://superuser.com/q/624343/440086</a></li>
<li><a href="https://unix.stackexchange.com/a/141599/109352">https://unix.stackexchange.com/a/141599/109352</a></li>
<li><a href="https://wiki.debian.org/Smartcards/OpenPGP">https://wiki.debian.org/Smartcards/OpenPGP</a></li>
</ul>
Tips for a rootless Docker on Arch with Ansible2022-01-04T00:00:00+00:002022-01-04T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/tips-for-rootless-docker-on-arch-with-ansible/<p>There are a few pitfalls worth noting when using Ansible when the remote is
an Arch machine. I know, a weird combination. It looks more and more like
basically no one uses an Arch remote with Ansible. But hey, I like it, so
learning (and documenting) a thing or two along the way might not be too
bad. Also note that this post is quite specific to the <code>docker-compose</code>
tool, so if you are not using it, you can safely skip the rest.</p>
<h2 id="getting-started">Getting started</h2>
<p>First get a rootless Docker installed on a machine. It could be done by
Ansible as well, but this is outside of the scope of this article. However,
if you ever need help with that, just ping me (there is an email around the
blog). I'll share my solution, or hopefully a whole post about it will be
out by that time. Right now, you can gain some inspiration by looking up
<a href="/blog/rootless-docker-on-arch/">my previous article</a>.</p>
<p>Next, we'll need two Ansible collections. The first one provides the
<a href="https://docs.ansible.com/ansible/latest/collections/community/general/pacman_module.html">pacman module</a>,
referenced as <code>community.general.pacman</code>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">ansible-galaxy</span><span> collection install community.general
</span></code></pre>
<p>The second one provides the
<a href="https://docs.ansible.com/ansible/latest/collections/community/docker/docker_compose_module.html">docker_compose module</a>,
referenced as <code>community.docker.docker_compose</code>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">ansible-galaxy</span><span> collection install community.docker
</span></code></pre>
<p>That should be it. Now let's get dirty.</p>
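<p>As a side note, both collections can also be pinned in a
<code>requirements.yml</code> file and installed in one go:</p>

```yaml
# requirements.yml - install with:
#   ansible-galaxy collection install -r requirements.yml
collections:
  - name: community.general
  - name: community.docker
```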
<h2 id="install-docker-compose-on-the-remote">Install docker-compose on the remote</h2>
<p>Consider the most intuitive approach - install docker-compose and try to
start the service, as two Ansible tasks, assuming a
<code>path/to/compose/project/docker-compose.yml</code> file exists:</p>
<pre data-lang="yaml" style="background-color:#2b303b;color:#c0c5ce;" class="language-yaml "><code class="language-yaml" data-lang="yaml"><span style="color:#bf616a;">tasks</span><span>:
</span><span>  - </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">Install docker-compose
</span><span> </span><span style="color:#bf616a;">become</span><span>: </span><span style="color:#d08770;">yes
</span><span> </span><span style="color:#bf616a;">community.general.pacman</span><span>:
</span><span> </span><span style="color:#bf616a;">state</span><span>: </span><span style="color:#a3be8c;">present
</span><span> </span><span style="color:#bf616a;">name</span><span>:
</span><span> - </span><span style="color:#a3be8c;">docker-compose
</span><span>
</span><span> - </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">Create and start the service
</span><span> </span><span style="color:#bf616a;">community.docker.docker_compose</span><span>:
</span><span> </span><span style="color:#bf616a;">project_src</span><span>: </span><span style="color:#a3be8c;">path/to/compose/project
</span></code></pre>
<p>When run as a playbook it fails:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>TASK [Create and start the service] *********************************************************
</span><span>An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ModuleNotFoundError: No module named 'docker'
</span><span>fatal: [X.X.X.X]: FAILED! => {"changed": false, "msg": "Failed to import the required Python library (Docker SDK for Python: docker above 5.0.0 (Python >= 3.6) or docker before 5.0.0 (Python 2.7) or docker-py (Python 2.6)) on vmi732184.contaboserver.net's Python /usr/bin/python3. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter, for example via `pip install docker` (Python >= 3.6) or `pip install docker==4.4.4` (Python 2.7) or `pip install docker-py` (Python 2.6). The error was: No module named 'docker'"}
</span></code></pre>
<p>Yeah, right. The collection
<a href="https://docs.ansible.com/ansible/latest/collections/community/docker/docker_compose_module.html#requirements">requirements</a>
state that the <a href="https://pypi.org/project/docker/">docker</a> PyPI package must
be present, or at least <a href="https://pypi.org/project/docker-py/">docker-py</a> for
older Python versions like 2.6.</p>
<h2 id="python-docker-via-pacman">python-docker via pacman</h2>
<p>Alright, the recommended
<a href="https://wiki.archlinux.org/title/Python#Package_management">way to install Python packages</a>
on Arch is to prefer <code>pacman</code> over <code>pip</code> when possible. Luckily,
<a href="https://archlinux.org/packages/community/any/python-docker/">python-docker</a>
is available in the community repository. No big deal. Let's add it to
the playbook and re-run it:</p>
<pre data-lang="yaml" style="background-color:#2b303b;color:#c0c5ce;" class="language-yaml "><code class="language-yaml" data-lang="yaml"><span style="color:#bf616a;">tasks</span><span>:
</span><span>  - </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">Install docker-compose
</span><span> </span><span style="color:#bf616a;">become</span><span>: </span><span style="color:#d08770;">yes
</span><span> </span><span style="color:#bf616a;">community.general.pacman</span><span>:
</span><span> </span><span style="color:#bf616a;">state</span><span>: </span><span style="color:#a3be8c;">present
</span><span> </span><span style="color:#bf616a;">name</span><span>:
</span><span> - </span><span style="color:#a3be8c;">docker-compose
</span><span> - </span><span style="color:#a3be8c;">python-docker
</span><span>
</span><span> - </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">Create and start the service
</span><span> </span><span style="color:#bf616a;">community.docker.docker_compose</span><span>:
</span><span> </span><span style="color:#bf616a;">project_src</span><span>: </span><span style="color:#a3be8c;">path/to/compose/project
</span></code></pre>
<p>It fails again, with something along the lines of:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>"msg": "Unable to load docker-compose. Try `pip install docker-compose`. Error: Traceback (most recent call last):\n File \"/tmp/ansible_community.docker.docker_compose_payload_0elyc4fq/ansible_community.docker.docker_compose_payload.zip/ansible_collections/community/docker/plugins/modules/docker_compose.py\", line 497, in <module>\nModuleNotFoundError: No module named 'compose'\n"
</span></code></pre>
<p>Now, there is
<a href="https://archlinux.org/packages/?sort=&maintainer=&flagged=&q=python%20compose">no package</a>
in the official repositories containing the words <code>python</code> and <code>compose</code>.
Following the recommendations above,
<a href="https://aur.archlinux.org/packages/?O=0&SeB=nd&K=py+compose&outdated=&SB=n&SO=a&PP=50&do_Search=Go">there is nothing relevant</a>
in the AUR either.</p>
<h2 id="docker-compose-via-pip">docker-compose via pip</h2>
<p>Note that the <code>docker-compose</code> Python package pulls the <code>docker</code> package in
<a href="https://stackoverflow.com/a/50491825/1972509">as a dependency</a>, so it is
safe to drop it from the pacman task in the playbooks here. Since
there is no compose package in the repositories, it's time to resort to <code>pip</code>
instead:</p>
<pre data-lang="yaml" style="background-color:#2b303b;color:#c0c5ce;" class="language-yaml "><code class="language-yaml" data-lang="yaml"><span style="color:#bf616a;">tasks</span><span>:
</span><span> - </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">Install docker-compose
</span><span> </span><span style="color:#bf616a;">become</span><span>: </span><span style="color:#d08770;">yes
</span><span> </span><span style="color:#bf616a;">community.general.pacman</span><span>:
</span><span> </span><span style="color:#bf616a;">state</span><span>: </span><span style="color:#a3be8c;">present
</span><span> </span><span style="color:#bf616a;">name</span><span>:
</span><span> - </span><span style="color:#a3be8c;">docker-compose
</span><span> - </span><span style="color:#a3be8c;">python-pip
</span><span>
</span><span> - </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">Install pip docker-compose
</span><span> </span><span style="color:#bf616a;">ansible.builtin.pip</span><span>:
</span><span> </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">docker-compose
</span><span>
</span><span> - </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">Create and start the service
</span><span> </span><span style="color:#bf616a;">community.docker.docker_compose</span><span>:
</span><span> </span><span style="color:#bf616a;">project_src</span><span>: </span><span style="color:#a3be8c;">path/to/compose/project
</span></code></pre>
<p>Now the Python part gets resolved. We can independently confirm that the
<code>docker</code> PyPI package is a dependency of the <code>docker-compose</code> PyPI package:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pip</span><span> show docker-compose | </span><span style="color:#bf616a;">grep</span><span> Requires | </span><span style="color:#bf616a;">cut -d</span><span>' '</span><span style="color:#bf616a;"> -f2- </span><span>| </span><span style="color:#bf616a;">tr</span><span> , '</span><span style="color:#a3be8c;">\n</span><span>'
</span></code></pre>
<p>Resulting in the following with version <code>1.29.2</code>:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>docker
</span><span>dockerpty
</span><span>texttable
</span><span>jsonschema
</span><span>websocket-client
</span><span>python-dotenv
</span><span>requests
</span><span>docopt
</span><span>PyYAML
</span></code></pre>
<p>The above playbook, however, fails one last time, and it was a little
hard for me to pinpoint the issue here. The problem now is Docker itself.
A truncated error message follows:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>fatal: [X.X.X.X]: FAILED! => {
</span><span> "changed": false,
</span><span> "invocation": {
</span><span> "module_args": {
</span><span> ...
</span><span> "dependencies": true,
</span><span> "docker_host": "unix://var/run/docker.sock",
</span><span> "env_file": null,
</span><span> ...
</span><span> }
</span><span> },
</span><span> "msg": "Error connecting: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))"
</span><span>}
</span></code></pre>
<p>The actual error is a little longer but consists mostly of a traceback. The
problem is with the <code>docker_host</code> attribute. With rootless Docker, the
actual socket is located in user space, for instance at
<code>unix:///run/user/1000/docker.sock</code>, and is specified by either a CLI
parameter or the DOCKER_HOST environment variable; more in the
<a href="https://docs.docker.com/engine/security/rootless/#client">docs</a>.</p>
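<p>Since the exact socket path depends on the user ID, a small shell sketch (an illustration of mine, not part of any official tooling) can reproduce what the variable would contain, with a fallback for when <code>XDG_RUNTIME_DIR</code> is unset:</p>

```shell
# Build the rootless Docker socket URL the same way the usual
# DOCKER_HOST export would. The /run/user/$(id -u) fallback is an
# assumption based on the common systemd default.
rootless_docker_host() {
    runtime_dir="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}"
    printf 'unix://%s/docker.sock\n' "$runtime_dir"
}

rootless_docker_host
```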
<h2 id="wrapping-up">Wrapping up</h2>
<p>Most guides recommend exporting the variable somewhere into <code>.profile</code>,
<code>.bashrc</code> or <code>.zshrc</code> like so:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#b48ead;">export </span><span style="color:#bf616a;">DOCKER_HOST</span><span>=</span><span style="color:#a3be8c;">unix://</span><span>$</span><span style="color:#bf616a;">XDG_RUNTIME_DIR</span><span style="color:#a3be8c;">/docker.sock
</span></code></pre>
<p>But, since
<a href="https://stackoverflow.com/q/35988567/1972509">Ansible opens a non-interactive shell</a>,
this variable will not be available to us exactly as it is exported. We
have to construct it manually and hope no one has changed it:</p>
<pre data-lang="yaml" style="background-color:#2b303b;color:#c0c5ce;" class="language-yaml "><code class="language-yaml" data-lang="yaml"><span style="color:#bf616a;">tasks</span><span>:
</span><span> - </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">Install docker-compose
</span><span> </span><span style="color:#bf616a;">become</span><span>: </span><span style="color:#d08770;">yes
</span><span> </span><span style="color:#bf616a;">community.general.pacman</span><span>:
</span><span> </span><span style="color:#bf616a;">state</span><span>: </span><span style="color:#a3be8c;">present
</span><span> </span><span style="color:#bf616a;">name</span><span>:
</span><span> - </span><span style="color:#a3be8c;">docker-compose
</span><span> - </span><span style="color:#a3be8c;">python-pip
</span><span>
</span><span> - </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">Install pip docker-compose
</span><span> </span><span style="color:#bf616a;">ansible.builtin.pip</span><span>:
</span><span> </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">docker-compose
</span><span>
</span><span> - </span><span style="color:#bf616a;">name</span><span>: </span><span style="color:#a3be8c;">Create and start the service
</span><span> </span><span style="color:#bf616a;">community.docker.docker_compose</span><span>:
</span><span> </span><span style="color:#bf616a;">docker_host</span><span>: "</span><span style="color:#a3be8c;">unix://{{ ansible_env.XDG_RUNTIME_DIR }}/docker.sock</span><span>"
</span><span> </span><span style="color:#bf616a;">project_src</span><span>: </span><span style="color:#a3be8c;">path/to/compose/project
</span></code></pre>
<p>Note that for <code>ansible_env</code> to be available, the <code>gather_facts</code> option has to
remain enabled, as mentioned in the
<a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_environment.html#setting-the-remote-environment">docs</a>.
Enjoy!</p>
Rootless Docker on Arch2022-01-01T00:00:00+00:002022-01-01T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/rootless-docker-on-arch/<p>Currently, using rootless Docker on Arch can reasonably be divided into
two approaches: stability and performance. With stability, the choice is
an LTS kernel and the <code>fuse-overlayfs</code> storage driver, while under
performance the latest stable kernel is used alongside the latest widely
adopted <code>overlay2</code> storage driver. Let's see how to set up both options.</p>
<h2 id="stability-with-lts-kernel-and-fuse">Stability with LTS kernel and FUSE</h2>
<p>Some providers, namely Contabo, offer quite a nice Arch image for a
VPS. It comes with <code>linux-lts</code>, which is a sensible choice for a server
setup. At the time of writing, the latest LTS kernel version was 5.10,
but support for rootless <code>overlay2</code> only landed in 5.11, meaning no support
for this storage driver with the official LTS kernel. This leaves us with the
more time-proven, but possibly less performant
<a href="https://docs.docker.com/storage/storagedriver/overlayfs-driver/">fuse-overlayfs</a>
storage driver.</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">yay -S</span><span> fuse-overlayfs docker-rootless-extras-bin
</span></code></pre>
<p>The above will also pull <code>rootlesskit</code> or <code>rootlesskit-bin</code> into your
system. Now the only thing needed is to follow the
<a href="https://wiki.archlinux.org/title/Docker#Docker_rootless">Arch wiki</a>, in
short:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#96b5b4;">echo </span><span>"$</span><span style="color:#bf616a;">USER</span><span style="color:#a3be8c;">:165536:65536</span><span>" | </span><span style="color:#bf616a;">sudo</span><span> tee /etc/subuid /etc/subgid
</span><span style="color:#bf616a;">systemctl --user</span><span> enable</span><span style="color:#bf616a;"> --now</span><span> docker.socket
</span><span style="color:#96b5b4;">echo </span><span>"</span><span style="color:#a3be8c;">export DOCKER_HOST=unix://</span><span style="color:#96b5b4;">\$</span><span style="color:#a3be8c;">XDG_RUNTIME_DIR/docker.sock</span><span>" >> .profile
</span></code></pre>
<p>Confirm with <code>docker info</code> and look for a <strong>Storage driver</strong>.</p>
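<p>If you prefer a one-liner over scanning the whole output, the driver line can be pulled out with standard text tools. A minimal sketch, run here against a pasted sample of <code>docker info</code> output in case no daemon is reachable:</p>

```shell
# Extract the value of the "Storage Driver" line from docker info
# output. The sample text stands in for a live daemon; with one
# available, use `docker info | storage_driver` instead.
sample_info=' Storage Driver: fuse-overlayfs
 Cgroup Driver: systemd'

storage_driver() {
    grep -i 'storage driver' | awk -F': ' '{print $2}'
}

printf '%s\n' "$sample_info" | storage_driver   # prints: fuse-overlayfs
```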
<h2 id="performance-with-stable-kernel-and-overlay2">Performance with stable kernel and overlay2</h2>
<p>This is a variation of the above. First, we need to switch to the latest
stable <code>linux</code> kernel, at the time of writing the 5.15 branch, for instance
like this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> linux
</span><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -Rnc</span><span> linux-lts
</span><span style="color:#bf616a;">sudo</span><span> mkinitcpio</span><span style="color:#bf616a;"> -p</span><span> linux
</span><span style="color:#bf616a;">sudo</span><span> grub-mkconfig</span><span style="color:#bf616a;"> -o</span><span> /boot/grub/grub.cfg
</span></code></pre>
<p>For the rest, follow the steps above, only omitting the installation of
<code>fuse-overlayfs</code>. Note that installing it won't hurt, however, as with the
given stable kernel, rootless Docker will choose <code>overlay2</code> automatically.</p>
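<p>Whether <code>overlay2</code> is an option therefore boils down to the kernel version. A small helper sketch of mine makes the 5.11 cut-off explicit, using <code>sort -V</code> for a proper version comparison:</p>

```shell
# Succeeds when version $1 is at least version $2, compared with
# sort -V so that e.g. 5.9 < 5.11 is handled correctly.
kernel_at_least() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$2" ]
}

# Rootless overlay2 needs kernel 5.11 or newer.
if kernel_at_least "$(uname -r | cut -d- -f1)" 5.11; then
    echo "overlay2 should be available"
else
    echo "stick with fuse-overlayfs"
fi
```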
<h2 id="overriding-the-choice">Overriding the choice</h2>
<p>Docker chooses the best available driver, but the choice
<a href="https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file">can be overridden</a>
by editing <code>~/.config/docker/daemon.json</code> with the following:</p>
<pre data-lang="json" style="background-color:#2b303b;color:#c0c5ce;" class="language-json "><code class="language-json" data-lang="json"><span>{
</span><span> "</span><span style="color:#a3be8c;">storage-driver</span><span>": "</span><span style="color:#a3be8c;">overlay2</span><span>"
</span><span>}
</span></code></pre>
<p>The above requires at least kernel 5.11 for rootless Docker to work, as
was already stated. Alternatively, with a stable kernel and the
<code>fuse-overlayfs</code> package present, the FUSE storage driver can be forced
with:</p>
<pre data-lang="json" style="background-color:#2b303b;color:#c0c5ce;" class="language-json "><code class="language-json" data-lang="json"><span>{
</span><span> "</span><span style="color:#a3be8c;">storage-driver</span><span>": "</span><span style="color:#a3be8c;">fuse-overlayfs</span><span>"
</span><span>}
</span></code></pre>
<p>Now rerun the services:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">systemctl --user</span><span> stop docker.service
</span><span style="color:#bf616a;">systemctl --user</span><span> stop docker.socket
</span><span style="color:#bf616a;">systemctl --user</span><span> enable docker.socket</span><span style="color:#bf616a;"> --now
</span><span style="color:#bf616a;">docker</span><span> info
</span></code></pre>
<p><strong>Note:</strong> Although guides tend to mention the socket for rootless Docker,
consider enabling <code>docker.service</code> instead of <code>docker.socket</code> for critical
services that should run all the time.</p>
<p>However, on my machine this led to an error similar to
<a href="https://github.com/moby/moby/issues/14088#issuecomment-179158428">this one</a>:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Error starting daemon: error initializing graphdriver: "/home/peterbabic/.local/share/docker" contains other graph drivers:
</span><span>fuse-overlayfs; Please cleanup or explicitly choose storage driver (-s <DRIVER>)
</span></code></pre>
<p>The error can be found in the journal under the following:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">journalctl --user -xeu</span><span> docker.service
</span></code></pre>
<blockquote>
<p><strong>Warning:</strong> the next step might lead to a loss of data! Please proceed
with caution and with proper backups.</p>
</blockquote>
<p>In case you are just setting things up, the safest way is simply to remove
all the rootless Docker data:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">rm -rf ~</span><span>/.local/share/docker
</span></code></pre>
<p>Now rerun the services as described in the previous step. The chosen driver
should now be used.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="http://kroah.com/log/blog/2018/08/24/what-stable-kernel-should-i-use/">http://kroah.com/log/blog/2018/08/24/what-stable-kernel-should-i-use/</a></li>
<li><a href="https://bugs.archlinux.org/task/36969">https://bugs.archlinux.org/task/36969</a></li>
<li><a href="https://docs.docker.com/engine/security/rootless/">https://docs.docker.com/engine/security/rootless/</a></li>
<li><a href="https://docs.docker.com/storage/storagedriver/">https://docs.docker.com/storage/storagedriver/</a></li>
<li><a href="https://haydenjames.io/quick-tips-stable-arch-linux-experience/">https://haydenjames.io/quick-tips-stable-arch-linux-experience/</a></li>
<li><a href="https://issueexplorer.com/issue/rootless-containers/rootlesskit/269">https://issueexplorer.com/issue/rootless-containers/rootlesskit/269</a></li>
<li><a href="https://vadosware.io/post/back-to-docker-after-issues-with-podman/">https://vadosware.io/post/back-to-docker-after-issues-with-podman/</a></li>
<li><a href="https://wiki.archlinux.org/title/Docker#Docker_rootless">https://wiki.archlinux.org/title/Docker#Docker_rootless</a></li>
</ul>
OpenPGP Smartcard KDF issue: Bad PIN2021-12-29T00:00:00+00:002021-12-29T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/openpgp-smartcard-kdf-issue-bad-pin/<p>Both YubiKey and GnuPG are able to do many things. The difference one might
point out is that GnuPG probably does
<a href="https://latacora.micro.blog/2019/07/16/the-pgp-problem.html">none of them particularly well</a>,
resembling a Swiss army knife. No matter how well either tool handles the
tasks it is able to perform, both have become de-facto standards in their
respective fields.</p>
<p>Given that the spheres of influence of these two tools overlap on multiple
fronts, and given the complexity of GnuPG, it is inevitable that there are
both wrong and right ways to use them together.</p>
<p>To start on the safer side of that range, there is a really nice
<a href="https://github.com/drduh/YubiKey-Guide">YubiKey guide</a> covering many of
the use cases a mighty YubiKey offers, dedicating most of its length
specifically to the GnuPG functionality. For the YubiKey, the guide
hardly omits anything really important one might encounter.</p>
<p>The story is sadly quite different for the OpenPGP smartcard, a security
token in a form factor
<a href="/blog/gnupg-security-token-arrived/">that fits into my wallet</a>. In fact,
the YubiKey swallowed this smartcard entirely, offering every single piece of
its functionality as a drop-in replacement while adding a ton of other
features on top. Ignoring all the other security features, just
<a href="/blog/story-about-nfc-thinkpad-t470/">the addition of NFC is a plus</a>.
The fact that a YubiKey can do everything an OpenPGP smartcard can is the
reason why it is possible to follow a guide for the YubiKey while working with
an OpenPGP smartcard. Or so I thought.</p>
<h2 id="key-derivation-function">Key Derivation Function</h2>
<p>Before we get further, I have to briefly explain one key concept: the
<a href="https://en.wikipedia.org/wiki/Key_derivation_function">Key Derivation Function</a>.
Simplifying, a KDF is a function that turns one key into another in a
reproducible manner. If you ever wondered where password hashing came
from, KDF is the culprit. There are two other important properties of the
process.</p>
<p>First, reversing the function (obtaining the original key from the derived
one) should be practically impossible. This condition is being challenged
all the time by newer and better hardware and the sheer amount of data
available, hence why, for instance, the usage of MD5 is actively discouraged.</p>
<p>A second important property, for a subset of Key Derivation Functions, is
that computing a result should take a considerable amount of time. For a KDF
where such a property is desirable, an input parameter called <em>iterations</em>
is provided. The higher the iteration count, the longer a legitimate user
has to wait (this should still be barely perceptible), but also the
exponentially longer an attacker would need to generate hash tables with the
function, reducing some of the attack surface.</p>
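<p>The stretching idea can be illustrated with a toy iterated hash. This is purely my own illustration, not a real KDF; actual designs such as PBKDF2 or Argon2 do considerably more:</p>

```shell
# Toy KDF: repeatedly hash the key to stretch the work factor.
# Illustration only -- never use this for real key derivation.
toy_kdf() {
    key="$1"
    iterations="$2"
    i=0
    while [ "$i" -lt "$iterations" ]; do
        key="$(printf '%s' "$key" | sha256sum | cut -d' ' -f1)"
        i=$((i + 1))
    done
    printf '%s\n' "$key"
}

# The same input and iteration count always yields the same derived
# key; raising the count multiplies the work for user and attacker alike.
toy_kdf "hunter2" 1000
```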
<h2 id="kdf-and-openpgp-standard">KDF and OpenPGP standard</h2>
<p>The reason KDF is important for an OpenPGP smartcard and a YubiKey at
the same time is that the OpenPGP standard defines multiple PIN passwords
for operation, most importantly a normal PIN and an admin PIN. These PINs
were stored on the device in plaintext up until OpenPGP smartcard
<a href="https://gnupg.org/ftp/specs/OpenPGP-smart-card-application-3.4.pdf">version 3.3</a>
and YubiKey
<a href="https://support.yubico.com/hc/en-us/articles/360016649139-YubiKey-5-2-Enhancements-to-OpenPGP-3-4-Support#h.c5sk4o72r8c6">version 5.2</a>.</p>
<p>Storing a password in plaintext is not a good idea. Sending a plaintext
password over any communication channel is not a good idea either. The
people defining the OpenPGP standard understood that and came up with a
solution that is common practice elsewhere. Simplifying again: instead of
plaintext, store hashes of the PINs on the device and make the client, in
this case the GnuPG software, calculate the hashed PIN before sending it
over to the device, where it gets compared. They decided to call this
process by its scientific name, thus KDF.</p>
<p>There are at least two good reasons to enable this feature on all your
OpenPGP smartcards and YubiKeys when working with GnuPG: reducing the
possibility of
<a href="https://news.ycombinator.com/item?id=21521110">MITM attacks</a> and making it
even harder for an attacker to do something nasty with your device before
you revoke your certificates.</p>
<h2 id="the-speed-of-an-adoption">The speed of an adoption</h2>
<p>So enabling KDF seems like a no-brainer. As is usually the case with any
software whatsoever, more features lead to more bugs, and this case is no
different. There are many mentions that enabling KDF not only solved some
problems but created a few others. Some
<a href="https://github.com/Yubico/yubikey-manager/issues/279">got resolved and adopted</a>
in time.</p>
<p>Other problems got resolved but the fixes did not get adopted widely, as is
the case with the GnuPG 2.3 branch, which is marked as a development branch
paving the way for a stable 2.4 branch. Due to changes introduced in GnuPG
2.3, which I won't go into at this time, many distributions stick with the
2.2 branch.</p>
<p>Debian's gnupg package
<a href="https://packages.debian.org/bullseye/gnupg">currently sits at 2.2.27</a>.
Arch went
<a href="https://lists.archlinux.org/pipermail/arch-dev-public/2021-May/030431.html">with 2.3.1 briefly but then rolled back</a>.
The Artix community
<a href="https://forum.artixlinux.org/index.php/topic,2578.0.html">noticed and got confused</a>
in the process.</p>
<h2 id="openpgp-smartcard-and-gnupg-2-2">OpenPGP smartcard and GnuPG 2.2</h2>
<p>I am still not sure how it is that enabling KDF as the very first step
right after a factory reset works on a YubiKey even with GnuPG as low as
2.2.27, and at the same time fails on an OpenPGP smartcard 3.4 with the
following error:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>gpg/card> passwd
</span><span>gpg: OpenPGP card no. XXXX detected
</span><span>
</span><span>1 - change PIN
</span><span>2 - unblock PIN
</span><span>3 - change Admin PIN
</span><span>4 - set the Reset Code
</span><span>Q - quit
</span><span>
</span><span>Your selection? 1
</span><span>Error changing the PIN: Bad PIN
</span></code></pre>
<p>Choosing selection 3 to change the admin PIN fails with the same message:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Error changing the PIN: Bad PIN
</span></code></pre>
<p>The reason for this is that while KDF gets enabled with the <code>kdf-setup</code>
admin command, the passwords do not get rehashed, so the comparison is
broken, as the hash is now compared with the plaintext, or something along
those lines. The only way out of this I found is a smartcard factory reset.</p>
<h2 id="kdf-with-gnupg-2-3-1">KDF with GnuPG 2.3.1</h2>
<p>The good part is that the above problem is
<a href="https://dev.gnupg.org/T3891#142195">resolved in GnuPG 2.3.0</a>. The bad
part is, as already stated, that this version is not really available from
the official repositories on the distributions I readily interact with.</p>
<p>Another good bit is that once KDF is enabled on the OpenPGP smartcard
with GnuPG 2.3.1 (maybe even with 2.3.0, but I did not test), the card can
interact with GnuPG 2.2.27 and higher without the Bad PIN problem.</p>
<p>So the only thing needed is to get GnuPG 2.3.1 running and enable KDF on
the smartcard with it. I found a surefire way to do this on Arch by building
and installing a GnuPG 2.3.1 package
<a href="https://github.com/archlinux/svntogit-packages/tree/fbad9a76ed1900cb739ba8613ddd3a893585db73/trunk">from this commit</a>,
as it was at one point in the official repositories, before being
pulled, as described earlier.</p>
<p>In case there is a conflict with <code>gpgme</code>, which depends on <code>gnupg>=2</code>, you
can modify the PKGBUILD as follows:</p>
<pre data-lang="diff" style="background-color:#2b303b;color:#c0c5ce;" class="language-diff "><code class="language-diff" data-lang="diff"><span style="color:#bf616a;">- provides=(${pkgname%-git})
</span><span style="color:#a3be8c;">+ provides=(${pkgname%-git}=2)
</span></code></pre>
<p>Worked for me.</p>
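<p>For clarity, the <code>${pkgname%-git}</code> expansion in that diff is plain POSIX suffix stripping: it removes a trailing <code>-git</code> from the package name, so appending <code>=2</code> makes the package provide a versioned name that satisfies the dependency. A quick sketch; the <code>gnupg-git</code> name here is only an example:</p>

```shell
# ${var%-git} strips a trailing "-git" suffix, if present.
pkgname="gnupg-git"
echo "${pkgname%-git}"     # prints: gnupg
echo "${pkgname%-git}=2"   # prints: gnupg=2

# Without the suffix, the expansion leaves the name untouched.
pkgname="gnupg"
echo "${pkgname%-git}"     # prints: gnupg
```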
<h2 id="running-gnupg-2-3-1-on-arch">Running GnuPG 2.3.1 on Arch</h2>
<p>With the GnuPG 2.3.1 (or newer) installed, proceed by editing card
settings:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">gpg --edit-card
</span></code></pre>
<p>However, you should be greeted with the following:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>gpg: WARNING: server 'gpg-agent' is older than us (2.2.32 < 2.3.1)
</span><span>gpg: Note: Outdated servers may lack important security fixes.
</span><span>gpg: Note: Use the command "gpgconf --kill all" to restart them.
</span><span>gpg: WARNING: server 'scdaemon' is older than us (2.2.32 < 2.3.1)
</span><span>gpg: Note: Outdated servers may lack important security fixes.
</span><span>gpg: Note: Use the command "gpgconf --kill all" to restart them.
</span></code></pre>
<p>This is the reason I believe it is not trivial to use a GnuPG version
that is not readily available from the repositories, unless you really know
what you are doing. If the only thing needed were an updated version of
<code>gpg</code>, things would be only marginally harder than running <code>make</code> followed
by <code>./bin/gpg --edit-card</code>. To make it all really work, we need to also
update <code>scdaemon</code> and, obviously, <code>gpg-agent</code>. Distributions other than
Arch can probably get by
<a href="https://git.gnupg.org/cgi-bin/gitweb.cgi?p=gnupg.git">building gnupg from source</a>
and adjusting the <code>./configure</code> parameters before running <code>make install</code>.</p>
<p>However, with the Arch package installed, the required executable files are
already in place; they only need to be loaded into memory, and the greeting
above shows exactly how to do that:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">gpgconf --kill</span><span> all
</span></code></pre>
<p>Sadly, when trying to edit the card now, we are out of luck:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>gpg: selecting card failed: No such device
</span><span>gpg: OpenPGP card not available: No such device
</span><span>
</span><span>gpg/card>
</span></code></pre>
<p>The solution is
<a href="https://support.yubico.com/hc/en-us/articles/360013714479-Troubleshooting-Issues-with-GPG">officially documented</a>
and
<a href="https://www.unixtutorial.org/yubikey-not-working-with-gnupg-2-3/">confirmed elsewhere</a>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#96b5b4;">echo </span><span>"</span><span style="color:#a3be8c;">disable-ccid</span><span>" >> </span><span style="color:#bf616a;">~</span><span>/.gnupg/scdaemon.conf
</span></code></pre>
<p>The rest is easy. <strong>Enable KDF on the smartcard</strong> and confirm by changing
the PIN, which should work now. Restore the <code>scdaemon.conf</code> file and
roll back to the official GnuPG version so as not to mess up your package
manager's package verification process:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> gnupg
</span></code></pre>
<p>You can now follow the
<a href="https://github.com/drduh/YubiKey-Guide">YubiKey guide</a> even with the GnuPG
2.2 branch exactly as it is, only skipping the <code>kdf-setup</code> part, as KDF is
already properly activated on the smartcard. Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://github.com/drduh/YubiKey-Guide">https://github.com/drduh/YubiKey-Guide</a></li>
<li><a href="https://en.wikipedia.org/wiki/Key_derivation_function">https://en.wikipedia.org/wiki/Key_derivation_function</a></li>
<li><a href="https://latacora.micro.blog/2019/07/16/the-pgp-problem.html">https://latacora.micro.blog/2019/07/16/the-pgp-problem.html</a></li>
<li><a href="https://dev.gnupg.org/source/gnupg/browse/master/NEWS">https://dev.gnupg.org/source/gnupg/browse/master/NEWS</a></li>
<li><a href="https://dev.gnupg.org/T3823">https://dev.gnupg.org/T3823</a></li>
</ul>
Merge repos using git-filter-repo2021-12-21T00:00:00+00:002021-12-21T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/merge-repos-using-git-filter-repo/<p>I have already written about how
<a href="/blog/clever-uses-for-git-filter-repo/">useful a tool git filter-repo is</a>
for cleaning repositories. I have since made extensive use of the newfound
knowledge to undo some previous bad decisions in my private
repositories.</p>
<p>Here's a list of commands for merging <code>project-a</code> into <code>project-b</code>, for
reference.</p>
<h2 id="optional-revert-git-lfs">Optional: Revert git LFS</h2>
<p>Depending on the complexity of the project, you might <em>optionally</em> consider
reverting the LFS status of the files in the <code>project-a</code> repository:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> lfs migrate export</span><span style="color:#bf616a;"> --include</span><span>="</span><span style="color:#a3be8c;">*</span><span>"</span><span style="color:#bf616a;"> --everything
</span></code></pre>
<p>Check that there really are no files with Large File Storage (LFS)
status anymore:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> lfs ls-files
</span></code></pre>
<p>It might even be safe now to remove any mention of the <code>.gitattributes</code>
file, which is used to store information about which files LFS should
track:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> filter-repo</span><span style="color:#bf616a;"> --path</span><span> .gitattributes</span><span style="color:#bf616a;"> --invert-paths
</span></code></pre>
<p>You can bring the files back into LFS later with <code>git lfs migrate import</code>.</p>
<h2 id="preparation-directory-structure">Preparation: Directory structure</h2>
<p>Start by moving <code>project-a</code> one level deeper in the directory structure
while preserving git history, using a
<a href="https://htmlpreview.github.io/?https://github.com/newren/git-filter-repo/blob/docs/html/git-filter-repo.html#_path_shortcuts">path shortcut</a>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#96b5b4;">cd</span><span> path/to/project-a
</span><span style="color:#bf616a;">git-filter-repo --to-subdirectory-filter </span><span>$(</span><span style="color:#bf616a;">basename </span><span>"$</span><span style="color:#bf616a;">PWD</span><span>")
</span></code></pre>
<p>Confirm by running <code>ls</code>. Only <code>project-a</code> directory should pop up.</p>
<h2 id="merge">Merge</h2>
<p>Now the <code>project-a</code> is ready to be integrated into the <code>project-b</code>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#96b5b4;">cd</span><span> /path/to/project-b
</span><span style="color:#bf616a;">git</span><span> remote add project-a /path/to/project-a
</span><span style="color:#bf616a;">git</span><span> fetch project-a</span><span style="color:#bf616a;"> --tags
</span><span style="color:#bf616a;">git</span><span> merge</span><span style="color:#bf616a;"> --allow-unrelated-histories</span><span> project-a/master
</span><span style="color:#bf616a;">git</span><span> remote remove project-a
</span></code></pre>
<p>After confirming the merge commit, the <code>project-a</code> directory should now be
contained inside <code>project-b</code>. To get rid of the merge commit, rebase
interactively:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> rebase</span><span style="color:#bf616a;"> --interactive</span><span> HEAD</span><span style="color:#bf616a;">~
</span></code></pre>
<p>Confirming the git rebase dialog without editing anything (every line should
start with the <code>pick</code> keyword) should be sufficient.</p>
<h2 id="optional-re-sign-every-commit">Optional: Re-sign every commit</h2>
<p>If you want to publish the cleaned repository publicly, it might be
worthwhile to add your GPG signature to the commits, as changing history
with a rebase or directly via a <code>git-filter-repo</code> tool can mess with
signatures.</p>
<blockquote>
<p><strong>Note:</strong> Before proceeding with this step, make sure that you are the
sole contributor to the repository. Also, if the repository is very old,
the email address associated with the commits might need updating, for
instance with the <code>git-filter-repo --use-mailmap</code> command.</p>
</blockquote>
<p>I have written about
<a href="/blog/git-sign-previous-commits-keeping-dates/">re-signing previous commits</a>,
so check it out. In short:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> rebase</span><span style="color:#bf616a;"> --exec </span><span>'</span><span style="color:#a3be8c;">git commit --amend --no-edit --no-verify -S</span><span>'</span><span style="color:#bf616a;"> -i --root
</span><span style="color:#bf616a;">git</span><span> rebase</span><span style="color:#bf616a;"> --committer-date-is-author-date -i --root
</span></code></pre>
<p>The merged repository should be ready to be pushed to its shiny new place!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://stackoverflow.com/a/10548919/1972509">https://stackoverflow.com/a/10548919/1972509</a></li>
<li><a href="https://stackoverflow.com/a/63686947/1972509">https://stackoverflow.com/a/63686947/1972509</a></li>
<li><a href="https://stackoverflow.com/a/61450995/1972509">https://stackoverflow.com/a/61450995/1972509</a></li>
<li><a href="https://github.com/git-lfs/git-lfs/blob/main/docs/man/git-lfs-migrate.1.ronn#export">https://github.com/git-lfs/git-lfs/blob/main/docs/man/git-lfs-migrate.1.ronn#export</a></li>
</ul>
List executable files with fzf2021-12-06T00:00:00+00:002021-12-06T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/list-executable-files-with-fzf/<p>Sometimes I find myself needing to take a look at the executable files
available in the current project directory, choose a suitable one and run
it, potentially specifying some parameters. Unfortunately, package managers
did not agree on a single path where the executable files should reside. Two
examples that usually concern me are:</p>
<ol>
<li>PHP's <code>composer</code> puts them in <code>vendor/bin/</code> directory</li>
<li>Node's <code>npm</code> puts them into <code>node_modules/.bin</code> directory</li>
</ol>
<p>Other languages like Go or Rust have their own package managers that might
use yet another path. To make matters worse, the node package manager
uses a hidden folder (starting with a dot). Not to mention that neither
folder hosts the executable files themselves; both contain symbolic links to
them. Using symbolic links is absolutely correct, it just complicates
things a little bit further.</p>
<p>One might argue that in the case of <code>npm</code>, making
<code>node_modules/.bin/</code> hidden is justified: ask yourself, when was the
last time you typed the full path to a node package binary manually? We have
<code>npx</code> for that. But since <code>npx</code> can also run binaries from packages that are
not installed in the local project, I would still like to see the local ones.</p>
<h2 id="zsh-alias">Zsh alias</h2>
<p>Here's how I quickly solved the issue with an alias:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#8fa1b3;">fx</span><span>() { </span><span style="color:#bf616a;">print -z </span><span>$(</span><span style="color:#bf616a;">fd -HI -tl </span><span>| </span><span style="color:#bf616a;">fzf</span><span>) }
</span></code></pre>
<p>See it in action below, tested for <code>npm</code> and <code>composer</code> as stated above:</p>
<p><img src="https://peterbabic.dev/blog/list-executable-files-with-fzf/fzf-in-action.gif" alt="Using fzf to quickly list executable files, a short screencast" /></p>
<p>The alias requires <code>fd</code>, <code>fzf</code> and <code>zsh</code> to work. It is also possible to
tweak it a little to actually list all executable files, with an additional
<code>-tx</code> parameter:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#8fa1b3;">fx</span><span>() { </span><span style="color:#bf616a;">print -z </span><span>$(</span><span style="color:#bf616a;">fd -HI -tl -tx </span><span>| </span><span style="color:#bf616a;">fzf</span><span>) }
</span></code></pre>
<p>But I prefer having only symlinks listed with <code>-tl</code>, because for some
reason that currently eludes me, packages supplied by the package managers in
question tend to set the executable flag on all sorts of files, even
<code>README.md</code>. Because of this, the output including everything with the
executable flag is really cluttered and provides little to no practical
value. Listing only symlinks, as surprising as it may be, works much
better for listing executable files. Enjoy!</p>
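<p>For shells without zsh's <code>print -z</code>, the listing part can be
approximated with plain <code>find</code>. A minimal sketch, assuming only
symlinks should be shown (mirroring <code>-tl</code> above); the package and
file names in the demo are made up:</p>

```bash
#!/bin/sh
# List symlinks under the current directory, including hidden folders
# such as node_modules/.bin, stripping the leading "./" for readability.
list_symlinks() {
    find . -type l 2>/dev/null | sed 's|^\./||'
}

# Demo on a throwaway directory resembling npm's layout:
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p node_modules/.bin
touch node_modules/.bin/real-binary.js
ln -s ../real-binary.js node_modules/.bin/eslint   # hypothetical package
list_symlinks                                      # prints the symlink only
```

<p>The output can be piped into <code>fzf</code> the same way as in the alias above.</p>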
<h2 id="links">Links</h2>
<ul>
<li><a href="https://stackoverflow.com/a/68695427/1972509">https://stackoverflow.com/a/68695427/1972509</a></li>
<li><a href="https://zsh.sourceforge.io/Doc/Release/Shell-Builtin-Commands.html">https://zsh.sourceforge.io/Doc/Release/Shell-Builtin-Commands.html</a></li>
</ul>
Git sign previous commits keeping dates2021-11-28T00:00:00+00:002021-11-28T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/git-sign-previous-commits-keeping-dates/<p>Sometimes you might need to re-sign your previous commits using GnuPG. This
process rewrites the git history in the sense of changing commit hashes. What
is more, it also changes the date when each commit was made. If not done
properly, the repository looks as if there was no history at all after
signing all the previous work, as all the commits would appear to have been
added at the same instant. You could not even create such a scenario by
hand, so it looks very unnatural.</p>
<blockquote>
<p><strong>Warning:</strong> Before proceeding make absolutely sure you have multiple
backups of your work. Failing to do so may lead to loss of data.</p>
</blockquote>
<p>Consider the following command usually used to sign previous commits:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> rebase</span><span style="color:#bf616a;"> --exec </span><span>'</span><span style="color:#a3be8c;">git commit --amend --no-edit --no-verify -S</span><span>'</span><span style="color:#bf616a;"> -i --root
</span></code></pre>
<p>This creates the following interactive rebase window (truncated):</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>pick 60dcc7e insert posts sources
</span><span>exec git commit --amend --no-edit --no-verify -S
</span><span>pick 989ea13 fix sources links
</span><span>exec git commit --amend --no-edit --no-verify -S
</span><span>pick ce0cab2 move gcal push hook from gists to sources
</span><span>exec git commit --amend --no-edit --no-verify -S
</span><span>...
</span></code></pre>
<p>After saving and closing, the process of rebasing takes a few seconds. The
result can be checked like this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> log</span><span style="color:#bf616a;"> --pretty</span><span>=fuller
</span></code></pre>
<p>We can find out that the <code>AuthorDate</code> property was left untouched but the
<code>CommitDate</code> is adjusted to the point when the command is being run:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>commit 7c378e312bb6ab0f8265707be39f343cf04f6d25 (HEAD -> master)
</span><span>Author: Peter Babič <peter@peterbabic.dev>
</span><span>AuthorDate: Thu Nov 25 19:18:03 2021 +0100
</span><span>Commit: Peter Babič <peter@peterbabic.dev>
</span><span>CommitDate: Sun Nov 28 20:16:56 2021 +0100
</span><span>
</span><span> insert links
</span><span>
</span><span>commit f32a1fc98c829c6ee8bb8dbcda46345f2c00dedd
</span><span>Author: Peter Babič <peter@peterbabic.dev>
</span><span>AuthorDate: Wed Jul 28 21:57:02 2021 +0200
</span><span>Commit: Peter Babič <peter@peterbabic.dev>
</span><span>CommitDate: Sun Nov 28 20:16:54 2021 +0100
</span><span>
</span><span> insert post cypress husky stop-only
</span><span>
</span><span>commit 86c263cb83bb629c5f9dd20d81808a21c83f1ff8
</span><span>Author: Peter Babič <peter@peterbabic.dev>
</span><span>AuthorDate: Thu Jul 15 19:56:13 2021 +0200
</span><span>Commit: Peter Babič <peter@peterbabic.dev>
</span><span>CommitDate: Sun Nov 28 20:16:52 2021 +0100
</span><span>
</span><span> update source to show differences
</span><span>
</span><span>...
</span></code></pre>
<p>Because I thought there would be a single command to work around this, I
had a hard time. When I screwed up, I could fortunately always go
back and start over by looking at the entry at the very bottom of
<code>git reflog</code>, noting its number and resetting the repository state via:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> reset</span><span style="color:#bf616a;"> --hard</span><span> HEAD@{xx}
</span></code></pre>
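<p>As a concrete illustration of that recovery path, here is a self-contained
sketch in a throwaway repository; the <code>HEAD@{1}</code> offset is
illustrative, in a real mishap you would use the number noted from
<code>git reflog</code>:</p>

```bash
#!/bin/sh
# Illustration only: reflog-based recovery in a scratch repository.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
g() { git -c user.name=Demo -c user.email=demo@example.com "$@"; }

g commit -q --allow-empty -m "good state"
good=$(git rev-parse HEAD)
g commit -q --allow-empty -m "pretend this rewrite went wrong"

git reflog                       # the bottom entry is the oldest state
git reset -q --hard 'HEAD@{1}'   # jump back to the state before the mishap
[ "$(git rev-parse HEAD)" = "$good" ] && echo "recovered"
```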
<p>The trial and error went on for some time as I tried multiple different
commands, until I found this
<a href="https://stackoverflow.com/a/66254615/1972509">little unappreciated gem</a>.
It is the single place I could find that shows the process actually
requires two rebase commands fired one after the other, with the
second one added here:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> rebase</span><span style="color:#bf616a;"> --exec </span><span>'</span><span style="color:#a3be8c;">git commit --amend --no-edit --no-verify -S</span><span>'</span><span style="color:#bf616a;"> -i --root
</span><span style="color:#bf616a;">git</span><span> rebase</span><span style="color:#bf616a;"> --committer-date-is-author-date -i --root
</span></code></pre>
<p>The second command fixes the <code>CommitDate</code> to match the <code>AuthorDate</code> after
the first rebase; check again yourself:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> log</span><span style="color:#bf616a;"> --pretty</span><span>=fuller
</span></code></pre>
<p>Now the <code>AuthorDate</code> and <code>CommitDate</code> match perfectly:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>commit ee8972f2ca725cfe297417f5cd5ed76816b4b0bc (HEAD -> master, origin/master)
</span><span>Author: Peter Babič <peter@peterbabic.dev>
</span><span>AuthorDate: Thu Nov 25 19:18:03 2021 +0100
</span><span>Commit: Peter Babič <peter@peterbabic.dev>
</span><span>CommitDate: Thu Nov 25 19:18:03 2021 +0100
</span><span>
</span><span> insert links
</span><span>
</span><span>commit 583d8b7a934f10faa6ee574180192e4e2b98e706
</span><span>Author: Peter Babič <peter@peterbabic.dev>
</span><span>AuthorDate: Wed Jul 28 21:57:02 2021 +0200
</span><span>Commit: Peter Babič <peter@peterbabic.dev>
</span><span>CommitDate: Wed Jul 28 21:57:02 2021 +0200
</span><span>
</span><span> insert post cypress husky stop-only
</span><span>
</span><span>commit 0760e7be4286f6b607243b1c712828d0161311e0
</span><span>Author: Peter Babič <peter@peterbabic.dev>
</span><span>AuthorDate: Thu Jul 15 19:56:13 2021 +0200
</span><span>Commit: Peter Babič <peter@peterbabic.dev>
</span><span>CommitDate: Thu Jul 15 19:56:13 2021 +0200
</span><span>
</span><span> update source to show differences
</span><span>
</span><span>...
</span></code></pre>
<p>There might be a way to do this in a single rebase rather than two
separate ones, but I was not able to find it. Anyway,
one command or two, it does not matter as long as the job gets done,
unless you do this for thousands of commits, because the process is a
little time-consuming. Keep in mind that rewriting thousands of commits
would be suspicious at the very least. There is still a trace left after
this activity:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> log</span><span style="color:#bf616a;"> --show-signature
</span></code></pre>
<p>Which produces a slightly different output:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>commit 819bd4412b97f295dac5c958b18464d38accb94b (HEAD -> master)
</span><span>gpg: Signature made Sun 28 Nov 2021 20:26:29 CET
</span><span>gpg: using RSA key 0575E586224E3E33CF1A3EE34BB075BC1884BA40
</span><span>gpg: Good signature from "Peter Babic <peter@peterbabic.dev>" [ultimate]
</span><span>Primary key fingerprint: A44B 03E6 42BB 4223 6780 FEA4 3A13 81FC F273 8E75
</span><span> Subkey fingerprint: 0575 E586 224E 3E33 CF1A 3EE3 4BB0 75BC 1884 BA40
</span><span>Author: Peter Babič <peter@peterbabic.dev>
</span><span>Date: Thu Nov 25 19:18:03 2021 +0100
</span><span>
</span><span> insert links
</span><span>
</span><span>commit 24634bdeafda0cbddb7c3e7dec368762a3e790b5
</span><span>gpg: Signature made Sun 28 Nov 2021 20:26:28 CET
</span><span>gpg: using RSA key 0575E586224E3E33CF1A3EE34BB075BC1884BA40
</span><span>gpg: Good signature from "Peter Babic <peter@peterbabic.dev>" [ultimate]
</span><span>Primary key fingerprint: A44B 03E6 42BB 4223 6780 FEA4 3A13 81FC F273 8E75
</span><span> Subkey fingerprint: 0575 E586 224E 3E33 CF1A 3EE3 4BB0 75BC 1884 BA40
</span><span>Author: Peter Babič <peter@peterbabic.dev>
</span><span>Date: Wed Jul 28 21:57:02 2021 +0200
</span><span>
</span><span> insert post cypress husky stop-only
</span><span>
</span><span>commit 86a49348e4a870f0fc4e7894d9766dc72acb1482
</span><span>gpg: Signature made Sun 28 Nov 2021 20:26:27 CET
</span><span>gpg: using RSA key 0575E586224E3E33CF1A3EE34BB075BC1884BA40
</span><span>gpg: Good signature from "Peter Babic <peter@peterbabic.dev>" [ultimate]
</span><span>Primary key fingerprint: A44B 03E6 42BB 4223 6780 FEA4 3A13 81FC F273 8E75
</span><span> Subkey fingerprint: 0575 E586 224E 3E33 CF1A 3EE3 4BB0 75BC 1884 BA40
</span><span>Author: Peter Babič <peter@peterbabic.dev>
</span><span>Date: Thu Jul 15 19:56:13 2021 +0200
</span><span>
</span><span> update source to show differences
</span><span>
</span><span>...
</span></code></pre>
<p>The above shows the <code>Date</code> properly, however the <code>gpg: Signature made</code>
timestamp reflects the instant the rebasing was done. I did not try to work
around this, because I was pretty happy with the result as is. Thinking
about it, it might not even make sense to have something signed with a key
from the "future". Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://git-scm.com/docs/git-rebase#Documentation/git-rebase.txt---execltcmdgt">git rebase --exec</a></li>
<li><a href="https://git-scm.com/docs/git-rebase#Documentation/git-rebase.txt---committer-date-is-author-date">git rebase --committer-date-is-author-date</a></li>
<li><a href="https://rushlow.dev/blog/oops-i-forgot-to-sign-my-commit-from-last-monday">https://rushlow.dev/blog/oops-i-forgot-to-sign-my-commit-from-last-monday</a></li>
</ul>
Clever uses for git-filter-repo2021-11-26T00:00:00+00:002021-11-26T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/clever-uses-for-git-filter-repo/<p>There is a common saying around version control systems stating the
following:</p>
<blockquote>
<p>Do not rewrite the history.</p>
</blockquote>
<p>And it is a pretty solid saying, to be fair, supported in many threads, for
instance in <a href="https://bugs.archlinux.org/task/45425">FS#45425</a> or elsewhere.
You simply have to assume that once you have pushed something into the public,
it should stay as it is. But there are times when cleaning up the mess is
not just required, but may play out well in the long term.</p>
<p>Rewriting history will almost definitely not be possible for public
projects with commits added on top of yours. But for a very freshly created
repository somewhere in the forgotten parts of the Internet, the leap of
faith might be worth taking. For not-yet-pushed work, it is almost always safe
and very much encouraged to do the cleaning, so knowing the efficient tools to
get the job done is essential. Even better is to know a tool that
gets the job done <em>without</em> placing the user at the risk of unrecoverable
damage in the form of mangled history.</p>
<h3 id="getting-started">Getting started</h3>
<p>There is one more saying that is especially relevant in this context, which
states:</p>
<blockquote>
<p>Always keep multiple backups.</p>
</blockquote>
<p>This saying cuts even deeper here. Nothing is 100% reliable. Before
continuing, back up your work. Some software has a proven history of being
battle tested, usually meaning the edge cases were polished to the point
they are no longer visible, but you can bet on the fact that Murphy will
always get you. You have been warned. The tool we will take a look at is
<a href="https://github.com/newren/git-filter-repo">newren/git-filter-repo</a>.</p>
<p><strong>Beware:</strong> Using the tool can lead to catastrophic scenarios if used
incorrectly.</p>
<p>The tool is encouraged to be used only on the fresh clones to make sure the
work is recoverable in case of a disaster. Try to avoid using the <code>--force</code>
parameter at all costs to prevent data loss.</p>
<p>If unsure, use <code>--dry-run</code> or <code>--analyze</code> along with the actual
command to inspect the changes before making them. Now let's look at some of
the use cases of the tool.</p>
<h3 id="replace-sensitive-string-in-all-files">Replace sensitive string in all files</h3>
<p>The most common use case for rewriting git history is probably removing
sensitive information such as passwords or access tokens checked in by
accident. It is not enough to just replace all occurrences in the current
index, because the information might still be present in earlier commits.
Doing this manually via interactive rebase is time-consuming and error
prone. Instead, this command can be used:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> filter-repo</span><span style="color:#bf616a;"> --replace-text </span><span><(</span><span style="color:#96b5b4;">echo </span><span>'</span><span style="color:#a3be8c;">my_password==>xxxxxxxx</span><span>')
</span></code></pre>
<p>The reason for the <code><( ... )</code> syntax denoting an
<a href="https://tldp.org/LDP/abs/html/io-redirection.html">I/O redirection</a> is
that the <code>--replace-text</code> argument normally requires a file
with as many key-value pairs as needed. With the above syntax, one can skip
creating a file altogether. Useful when only a single replacement is needed.</p>
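<p>When several secrets need scrubbing, a plain rules file is more readable
than the redirection trick; <code>--replace-text</code> takes one
<code>old==>new</code> rule per line, and a line without the
<code>==>new</code> part replaces matches with <code>***REMOVED***</code>.
A sketch with made-up rule contents:</p>

```bash
#!/bin/sh
# Hypothetical rules file for --replace-text: one 'old==>new' per line.
cat > replacements.txt <<'EOF'
my_password==>xxxxxxxx
hunter2==>xxxxxxxx
EOF

# On a fresh clone, the rewrite would then be:
#   git filter-repo --replace-text replacements.txt
wc -l < replacements.txt   # counts the rules
```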
<p>This is usually where the use case for the <code>filter-repo</code> tool ends. It is
also quite hard to remember due to the shell intricacies involved and the uncommon
syntax requiring a long double-arrow symbol <code>==></code>, so you probably end up
looking this up every time the need arises. But there is much more one
can do, so let's look at some less documented features I found scattered
around the internet.</p>
<h3 id="remove-a-single-folder-keeping-history">Remove a single folder, keeping history</h3>
<p>Scenario: a repository has a folder that has to be taken out of it,
leaving no traces in history:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git-filter-repo --path</span><span> path_to_the_folder/</span><span style="color:#bf616a;"> --invert-paths
</span></code></pre>
<p>Now the repository has no trace of the <em>tracked files</em> inside
<code>./path_to_the_folder/</code>. Beware that all the <em>untracked</em> files are
preserved, while <em>tracked</em> files are completely deleted. If all the files in
the folder were tracked, the empty folder is deleted as well.</p>
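<p>A quick sanity check, assuming the rewrite ran: <code>git log</code> limited to the
path should list nothing afterwards. The scratch-repository sketch below only
demonstrates the check itself, before any rewrite, where the commit touching
the folder is still visible:</p>

```bash
#!/bin/sh
# Demonstrate checking history for a given path in a scratch repository.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
g() { git -c user.name=Demo -c user.email=demo@example.com "$@"; }

mkdir path_to_the_folder
echo secret > path_to_the_folder/file.txt
g add path_to_the_folder/file.txt
g commit -q -m "add folder"
echo other > kept.txt
g add kept.txt
g commit -q -m "add kept file"

# One commit still references the folder; after the --invert-paths
# rewrite above, the same command would print nothing.
git log --oneline -- path_to_the_folder/
```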
<h3 id="extract-a-single-folder-keeping-history">Extract a single folder, keeping history</h3>
<p>The opposite is even simpler with one less parameter. When you want to
extract commit history of a single folder, omitting every other file:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git-filter-repo --path</span><span> path_to_the_folder/
</span></code></pre>
<p>The repository now contains only <code>./path_to_the_folder/</code>, plus any
files that are untracked.</p>
<h3 id="move-everything-from-sub-folder-one-level-up">Move everything from sub-folder one level up</h3>
<p>This goes very well together with the above command. After extraction,
sometimes you need to make the contents of the extracted folder the root of
the repository, shifting everything one level up in the path:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git-filter-repo --path-rename</span><span> path_to_the_folder/:
</span></code></pre>
<p>Note the colon character <code>:</code> at the end. The repository no longer
contains <code>./path_to_the_folder/</code>; instead, you will see the contents of
that folder directly.</p>
<h3 id="replace-email-address-in-commits">Replace email address in commits</h3>
<p>This is a little bit different from the above commands, but sometimes you
made commits with a wrong email address. This can be fixed by creating a
file named <code>.mailmap</code> in the desired repository with the following
contents:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span><new@email> <current@email>
</span></code></pre>
<p>Note that angle brackets <code><</code> and <code>></code> around both email addresses are
mandatory, otherwise the following error happens:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Unparseable mailmap file: line #1 is bad: ...
</span></code></pre>
<p>With the properly formatted <code>.mailmap</code> file in place, issue the rewrite
command:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git-filter-repo --use-mailmap
</span></code></pre>
<p>Even though changing the email address in commits seems like an innocent
change, it too changes the commit SHA hashes, as they are computed with the
author's email address included.</p>
<h3 id="replace-author-s-name-in-commits">Replace author's name in commits</h3>
<p>A variation of the above is replacing the author's name. I have not used
this personally, but I can think of a situation where you use a nickname for
commits you just want to make public, or the opposite scenario, where you
made commits under your true identity but want to show off using just a
nickname. All the steps are identical, with a tweak to the <code>.mailmap</code> file:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Name Surname <current@email> <current@email>
</span></code></pre>
<p>And again, run the following:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git-filter-repo --use-mailmap
</span></code></pre>
<p>You can obviously also combine changing both the author and the email in
the same step:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Name Surname <new@email> <current@email>
</span></code></pre>
<p>Note that the author's name will only be changed for the commits that match
<code>current@email</code>, so this is something to keep in mind!</p>
<h2 id="checking-the-changes">Checking the changes</h2>
<p>After you have made your changes, it is always wise to check that everything
went right. One way of doing so is to use the git inspection GUI to inspect
all branches:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>gitk --all
</span></code></pre>
<p>If GUI is not available, this command could serve as a base for the
endeavor:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> log</span><span style="color:#bf616a;"> --graph --all --format</span><span>='</span><span style="color:#a3be8c;">%h %an <%ae></span><span>'
</span></code></pre>
<p>Tweak the above if needed.</p>
<h2 id="conclusion">Conclusion</h2>
<p><code>git-filter-repo</code> is a very versatile tool that can perform many actions
with just one line. It is the officially preferred way of rewriting git
history. Most of the time you will find yourself using it to remove sensitive
information such as passwords, but most other actions needed for a
repository clean-up are possible when you know the right syntax. Remember
to keep backups, do not rewrite public repositories unless absolutely
necessary, and keep your repositories clean. Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://github.com/newren/git-filter-repo#readme">https://github.com/newren/git-filter-repo#readme</a></li>
<li><a href="https://stackoverflow.com/a/64153992/1972509">https://stackoverflow.com/a/64153992/1972509</a></li>
<li><a href="https://stackoverflow.com/a/58263677/1972509">https://stackoverflow.com/a/58263677/1972509</a></li>
<li><a href="https://stackoverflow.com/a/1441062/1972509">https://stackoverflow.com/a/1441062/1972509</a></li>
<li><a href="https://stackoverflow.com/a/65069775/1972509">https://stackoverflow.com/a/65069775/1972509</a></li>
<li><a href="https://stackoverflow.com/a/61544937/1972509">https://stackoverflow.com/a/61544937/1972509</a></li>
</ul>
Installing caffe SSD on Arch2021-11-23T00:00:00+00:002021-11-23T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/installing-caffe-ssd-on-arch/<p>Exploring artificial intelligence possibilities in late 2021 has led me through
multiple hoops already. There are things that already work, there are
things that can be
<a href="/blog/install-tfjs-node-from-source/">optimized for a better performance</a>
and then there are things that do not appear to work at all.</p>
<p>One such thing is installing
<a href="https://caffe.berkeleyvision.org/">caffe</a>. It is not your average daily
<em>coffee</em>, and not even the hashtag #cofe you might find floating around
social media. No, this CAFFE is the Convolutional Architecture for Fast Feature
Embedding. The current definition on the home page states the following:</p>
<blockquote>
<p>Caffe is a deep learning framework made with expression, speed, and
modularity in mind. It is developed by Berkeley AI Research (BAIR) and by
community contributors. Yangqing Jia created the project during his PhD
at UC Berkeley. Caffe is released under the BSD 2-Clause license.</p>
</blockquote>
<p>So it is a community-maintained open-source project, quite well accepted in
the related world, as it brings some unique advantages to deep learning
not necessarily present in the other contenders.</p>
<h2 id="aur-and-caffe">AUR and caffe</h2>
<p>The official GitHub repository is
<a href="https://github.com/BVLC/caffe">BVLC/caffe</a> but at the time of writing, the
actual last code commit to the master branch was
<a href="https://github.com/BVLC/caffe/commit/99bd99795dcdf0b1d3086a8d67ab1782a8a08383">#99bd9979</a>
on 21 Aug 2018.</p>
<p>The caffe from this repository is available in
<a href="https://aur.archlinux.org/packages/caffe/">aur/caffe</a> and
<a href="https://aur.archlinux.org/packages/caffe-git/">aur/caffe-git</a>. Their
PKGBUILDs have absolutely no difference. Trust me, I checked thoroughly. You
can do the diff yourself by comparing the PKGBUILDs of
<a href="https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=caffe&id=fcbcad4fb4f52b2269606f33b25842fdd24060ef">caffe</a>
and
<a href="https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=caffe-git&id=a24ece35eae2340e5b825f8f412a2cf3285b595d">caffe-git</a>
(links are to snapshots from the time of writing).</p>
<p>The only difference there is the source commit, obviously, as is the norm
with AUR. In case you are not familiar with the conventions there, it goes
like this: the <code>caffe</code> package from AUR is tied to a specific release. At the
time of writing, it is
<a href="https://github.com/BVLC/caffe/releases/tag/1.0">1.0</a>, as the source blob is
released along with the actual release. On the other hand, <code>caffe-git</code> (or
generally any package ending with <code>-git</code>, for that matter) uses the latest
commit from the default branch, generally <code>master</code>. And there is a
136-commit difference, affecting 100 files, as can be seen in this
<a href="https://github.com/BVLC/caffe/compare/1.0...9b891540183ddc834a02b2bd81b31afae71b2153">diff</a>
(again a snapshot from the time of writing).</p>
<p>There are a lot of other caffe versions in the AUR, but I omit them, as they
all target some specific GPU hardware, like nVidia CUDA. I will focus only
on the CPU version of caffe. For me, the git version builds and works nicely,
but the release version does not. The error that halts the build process
for the release caffe ends with the following:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>CXX tools/extract_features.cpp
</span><span>CXX/LD -o .build_release/tools/extract_features.bin
</span><span>/usr/bin/ld: .build_release/tools/extract_features.o: in function `int feature_extraction_pipeline<float>(int, char**)':
</span><span>extract_features.cpp:(.text._Z27feature_extraction_pipelineIfEiiPPc[_Z27feature_extraction_pipelineIfEiiPPc]+0x37c): undefined reference to `caffe::Net<float>::CopyTrainedLayersFrom(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)'
</span><span>collect2: error: ld returned 1 exit status
</span><span>make: *** [Makefile:638: .build_release/tools/extract_features.bin] Error 1
</span><span>make: Leaving directory '/home/peterbabic/.cache/yay/caffe/src/caffe-1.0'
</span></code></pre>
<p>After a lot of experimenting (read below) this error eventually went away,
and testing backward, I was also able to build this release version, so I
probably got some OpenCV dependencies exactly right over time, although I
cannot pinpoint what changed, even though that could be helpful. Anyway, this is
basically the least feature-rich version of caffe of all the explored
ones, so it is not such a big deal.</p>
<h2 id="stirring-caffe-with-a-fork">Stirring caffe with a fork</h2>
<p>As I said moments ago, the official caffe version does not really appear to
be maintained anymore. But this does not mean the actual need for
improvements has disappeared. There is significant development happening in
a fork of caffe, in the <code>ssd</code> branch of the repository
<a href="https://github.com/weiliu89/caffe/tree/ssd">weiliu89/caffe</a>. And SSD here
does not stand for the storage at all. Instead, it stands for Single
Shot Detector, or more specifically Single Shot MultiBox Detector. SSD was
developed to reduce the computation resources needed, so the model could
run on embedded devices, such as autonomous vehicles. The SSD's primary
author, Wei Liu, started the fork during his Google internship and it looks
like it made a dent in the world, too.</p>
<p>I wanted to use the SSD-enabled caffe fork for some experimenting, but
whatever I tried, I couldn't find a way to make it build. The errors
looked like this:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>In file included from /usr/include/c++/11.1.0/ext/string_conversions.h:41,
</span><span> from /usr/include/c++/11.1.0/bits/basic_string.h:6594,
</span><span> from /usr/include/c++/11.1.0/string:55,
</span><span> from ./include/caffe/util/hdf5.hpp:4,
</span><span> from src/caffe/util/hdf5.cpp:1:
</span><span>/usr/include/c++/11.1.0/cstdlib:75:15: fatal error: stdlib.h: No such file or directory hpp:
</span><span> 75 | #include_next <stdlib.h>
</span><span> | ^~~~~~~~~~
</span><span>compilation terminated.
</span><span>make: *** [Makefile:580: .build_release/src/caffe/util/hdf5.o] Error 1
</span></code></pre>
<p>The above error is seemingly related to the build process using the
<code>-isystem</code> parameter instead of just <code>-I</code>. Some relevant references can be found
<a href="https://github.com/Martchus/tageditor/issues/22#issuecomment-330698400">here</a>,
<a href="https://bbs.archlinux.org/viewtopic.php?id=213339">here</a> and subsequently
<a href="https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70129">here</a>, then
<a href="https://github.com/OxfordSKA/OSKAR/issues/10#issuecomment-389318380">here</a>,
<a href="https://stackoverflow.com/a/54209536/1972509">here</a> and even
<a href="https://githubmemory.com/repo/pcb2gcode/pcb2gcode/issues/587#reply-665026">here</a>,
but the list might go on for long.</p>
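<p>The workaround those threads converge on is to make the build pass system
include directories with plain <code>-I</code> rather than <code>-isystem</code>, since
<code>-isystem</code> reorders GCC's internal include chain and breaks
<code>#include_next &lt;stdlib.h&gt;</code>. A minimal sketch of the substitution; the
flags line below is a made-up example, not caffe's real one:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"># Sketch: turn -isystem include directives into plain -I ones
flags="-isystem /usr/include/hdf5 -isystem /usr/include/opencv4"
fixed=$(printf '%s' "$flags" | sed 's/-isystem/-I/g')
echo "$fixed"
</code></pre>
<p>In practice this amounts to something like <code>sed -i 's/-isystem/-I/g' Makefile</code>
before building, though where exactly the flags come from differs between
build setups.</p>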
<p>With the above problem solved, another one showed up:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>src/caffe/util/im_transforms.cpp:2:10: fatal error: opencv2/highgui/highgui.hpp: No such file or directory
</span><span> 2 | #include <opencv2/highgui/highgui.hpp>
</span><span> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</span><span>compilation terminated.
</span><span>make: *** [Makefile:580: .build_release/src/caffe/util/im_transforms.o] Error 1
</span></code></pre>
<p>And a tightly related one as well, truncated here:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>src/caffe/util/io.cpp:13:10: fatal error: opencv2/core/core.hpp: No such file or directory
</span></code></pre>
<p>This was further solved by tweaking <code>INCLUDE_DIRS</code>, as hinted
<a href="https://github.com/weiliu89/caffe/issues/300#issuecomment-439304369">here</a>,
<a href="https://github.com/NVIDIA/DIGITS/issues/156#issuecomment-219089383">here</a>
and <a href="https://bbs.archlinux.org/viewtopic.php?id=223497">here</a>. I think at
this point I was able to build caffe with <code>make</code>.</p>
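<p>For reference, the tweak boils down to adding the OpenCV 4 include root to
the search path, since OpenCV 4 installs its headers under
<code>/usr/include/opencv4</code> rather than directly under <code>/usr/include</code>. In
<code>Makefile.config</code> it looks roughly like this (the rest of the variable's
contents are assumed from caffe's stock config):</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code>INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/opencv4
</code></pre>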
<h2 id="a-note-on-cmake-and-opencv-4">A note on CMake and OpenCV 4</h2>
<p>However, before I settled on <code>make</code>, for which working
PKGBUILDs are readily available in the AUR, as discussed above, I was able to
get a community-supported <code>CMake</code> build running. But not before I got past
problems like:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>CAP_PROP_POS_FRAMES was not declared in this scope.
</span></code></pre>
<p>More details can be found <a href="https://github.com/BVLC/caffe/pull/1667">here</a>.
Some patching was needed first, explained
<a href="https://github.com/weiliu89/caffe/issues/982#issue-437220408">here</a>,
<a href="https://github.com/BVLC/caffe/issues/6680#issuecomment-622989343">here</a>,
then <a href="https://github.com/BVLC/caffe/pull/6625">here</a> and very briefly
<a href="https://stackoverflow.com/questions/57982505/opencv-4-cap-prop-pos-frames-was-not-declared-in-this-scope">here</a>.</p>
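<p>The gist of those patches is that OpenCV 4 removed the old C-style constant
names, so occurrences like <code>CV_CAP_PROP_POS_FRAMES</code> have to be renamed to
their namespaced <code>cv::</code> counterparts. A sketch of the mechanical part,
shown on a made-up source line rather than caffe's actual code:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"># Hypothetical example line, as it might appear in the sources:
line='int frames = cap.get(CV_CAP_PROP_POS_FRAMES);'
# The rename the patches perform, expressed as a sed substitution:
echo "$line" | sed 's/CV_CAP_PROP_POS_FRAMES/cv::CAP_PROP_POS_FRAMES/g'
</code></pre>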
<p>I know that the <code>CMake</code> build and OpenCV 4 are not coupled, but I did not save
the exact error messages to put here for reference. The patches
applied simply work for <code>make</code> and <code>CMake</code> builds alike, and Arch has had OpenCV
4 in its repositories for quite some time already, so I put them here together.</p>
<p>Even though I got the <code>CMake</code> build working sooner, the official <code>make</code> build
is much more neatly organized and compiles everything caffe offers, including
documentation, all of which is utilized in the PKGBUILD, whereas
<code>CMake</code> just builds the binaries, so I decided not to pursue the <code>CMake</code> path
further.</p>
<h2 id="a-note-on-atlas-lapack-and-numpy">A note on atlas-lapack and NumPy</h2>
<p>There is a convoluted world of mathematical libraries bearing names like BLAS,
ATLAS, LAPACK, LAPACKE, OpenBLAS and Intel MKL. The features they offer
overlap to some degree, and in many Linux distributions the user can decide
which implementation to use. This situation does not appear to be too great on
Arch, however.</p>
<p>For instance, caffe expects the Atlas implementation by default, but
getting there on Arch is very hard, or maybe even impossible at this point,
as I could not get it to work at all. Atlas is only available from the AUR as the
<a href="https://aur.archlinux.org/packages/atlas-lapack">atlas-lapack</a> package and
is a total <em>pain</em> to
<a href="https://aur.archlinux.org/packages/atlas-lapack/?O=20&PP=10#comment-599526">get installed</a>.
Not to mention that the build itself took the better part of a day to
finish on my machine, once I was even able to start it. I do not recommend
going down this path at all!</p>
<p>The reason is that other Python packages cease to work with this
implementation:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>ImportError: libopenblas.so.3: cannot open shared object file: No such file or directory
</span></code></pre>
<p>I am not too sure about the exact relation to NumPy, because I was at this step
even before I made the <code>CMake</code> build work, but I believe there was some
connection. Another error I encountered was:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Could NOT find NumPy (missing: NUMPY_INCLUDE_DIR NUMPY_VERSION) (Required is at least version "1.7.1")
</span></code></pre>
<p>The problems I had during this phase are discussed
<a href="https://bbs.archlinux.org/viewtopic.php?id=251561">here</a> and
<a href="https://groups.google.com/g/caffe-users/c/2IUNF6xd0wM">here</a>. The
conclusion is to avoid the Atlas implementation, as OpenBLAS is proven to
work. Note that I did not experiment with Intel MKL at all yet, but it is
the third contender in this area
<a href="https://caffe.berkeleyvision.org/installation.html#cuda-and-blas">supported by caffe</a>.</p>
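<p>Switching caffe from Atlas to OpenBLAS is essentially a one-line change in
<code>Makefile.config</code>; treat this as a sketch, as your paths and config layout
may differ:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code># BLAS := atlas is the default; OpenBLAS is the one proven to work on Arch
BLAS := open
</code></pre>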
<h2 id="a-note-on-draw-net-py">A note on draw_net.py</h2>
<p>Here we are getting to the core of this not too interesting story. The
reason I wanted the SSD version of caffe to work was actually a related
script shipped alongside it, called <code>draw_net.py</code>, used to visualize
the caffe model specified in a <code>.prototxt</code> file. Visualizing it like this
makes it easier to understand what is going on inside the model. The
script is available in vanilla caffe as well, but when applied to an SSD
model like <a href="https://github.com/chuanqi305/MobileNet-SSD">MobileNet-SSD</a>,
it terminates with the error:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>google.protobuf.text_format.ParseError: 1177:3 : Message type "caffe.LayerParameter" has no field named "permute_param".
</span></code></pre>
<p>The obvious solution is to extend the parameters the <code>draw_net.py</code>
script is able to process with the parameters used in the SSD model in the
first place, i.e. to install the SSD branch of caffe. This turned out
to be exponentially more complicated than I previously thought (it took me
almost a week). However, even after a successful build of the SSD branch,
<code>draw_net.py</code> still showed an error:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>AttributeError: 'google.protobuf.pyext._message.RepeatedScalarConta' object has no attribute '_values'
</span></code></pre>
<p>The solution is to patch the <code>draw.py</code> source file, as described
<a href="https://github.com/BVLC/caffe/issues/3698#issuecomment-258759498">here</a>.
Now I was finally able to fully visualize the model. What a ride.</p>
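<p>For completeness, the invocation looks roughly like this; the file names
are placeholders, and <code>--rankdir</code> is an optional argument controlling the
drawing direction of the resulting graph:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash">python python/draw_net.py MobileNetSSD_deploy.prototxt net.png --rankdir LR
</code></pre>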
<p>The package <code>caffe-ssd</code> is now available in
<a href="https://aur.archlinux.org/packages/caffe-ssd/">AUR</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://arxiv.org/pdf/1704.04861">https://arxiv.org/pdf/1704.04861</a></li>
<li><a href="https://arxiv.org/pdf/1512.02325">https://arxiv.org/pdf/1512.02325</a></li>
<li><a href="https://www.edge-ai-vision.com/2020/10/rel-time-vehicle-detection-with-mobilenet-ssd-and-xailient/">https://www.edge-ai-vision.com/2020/10/rel-time-vehicle-detection-with-mobilenet-ssd-and-xailient/</a></li>
<li><a href="https://bbs.archlinux.org/viewtopic.php?id=223497">https://bbs.archlinux.org/viewtopic.php?id=223497</a></li>
</ul>
Install tfjs-node from source2021-11-15T00:00:00+00:002021-11-15T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/install-tfjs-node-from-source/<p>When starting with TensorFlow library bindings for NodeJS, for instance by
installing:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">npm</span><span> i @tensorflow/tfjs-node
</span></code></pre>
<p>And then importing it inside a node module:</p>
<pre data-lang="javascript" style="background-color:#2b303b;color:#c0c5ce;" class="language-javascript "><code class="language-javascript" data-lang="javascript"><span style="color:#b48ead;">import </span><span style="color:#d08770;">* </span><span style="color:#b48ead;">as </span><span style="color:#bf616a;">tf </span><span style="color:#b48ead;">from </span><span>"</span><span style="color:#a3be8c;">@tensorflow/tfjs-node</span><span>"
</span></code></pre>
<p>The following message can be seen:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:
</span><span> AVX2 FMA
</span><span>To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
</span></code></pre>
<p>The warning can be dismissed in Python with:</p>
<pre data-lang="typescript" style="background-color:#2b303b;color:#c0c5ce;" class="language-typescript "><code class="language-typescript" data-lang="typescript"><span style="color:#b48ead;">import </span><span style="color:#bf616a;">os
</span><span style="color:#bf616a;">os</span><span>.</span><span style="color:#bf616a;">environ</span><span>['</span><span style="color:#a3be8c;">TF_CPP_MIN_LOG_LEVEL</span><span>'] = '</span><span style="color:#a3be8c;">2</span><span>'
</span></code></pre>
<p>When using JavaScript the above won't work, but the export below works for
both:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#b48ead;">export </span><span style="color:#bf616a;">TF_CPP_MIN_LOG_LEVEL</span><span>=</span><span style="color:#a3be8c;">2
</span></code></pre>
<p>Anyway, since dismissing the warning without digging deeper is not what we
usually do here, let's look at how to rebuild TensorFlow with the appropriate
compiler flags (the proper solution).</p>
<h2 id="building-tensorflow-from-source">Building TensorFlow from source</h2>
<p>The steps are documented in the official <code>tfjs</code> repository under the
anchor:</p>
<p><a href="https://github.com/tensorflow/tfjs/tree/master/tfjs-node#optional-build-optimal-tensorflow-from-source">Optional: Build optimal TensorFlow from source</a></p>
<p>At first, it appears to be just a few steps, but the situation is
definitely more dire. Don't worry, this guide should help.</p>
<h3 id="step-1-clone-the-official-tensorflow-repository">Step 1: Clone the official TensorFlow repository</h3>
<p>First, get the official TensorFlow repository. It is quite large by the
way:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> clone https://github.com/tensorflow/tensorflow
</span><span style="color:#96b5b4;">cd</span><span> tensorflow
</span></code></pre>
<p>Instructions in the next steps are all executed inside this directory,
unless otherwise noted.</p>
<h3 id="step-2-install-bazel">Step 2: Install bazel</h3>
<p>What is bazel? One definition I have found is the following:</p>
<blockquote>
<p>Correct, reproducible, and fast builds for everyone</p>
</blockquote>
<p>Well, there are many build tools and this is one among them. Let's try:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> bazel
</span></code></pre>
<p>This will install the official version from the <code>community</code> repository, at
the time of writing it is <code>4.2.0</code>. It installs, of course, but for our
purposes it does not appear to be the correct choice, as the following
error appears when trying to build TensorFlow:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>WARNING: current bazel installation is not a release version.
</span><span>Make sure you are running at least bazel 3.7.2
</span></code></pre>
<p>Or, a more elaborate one:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>ERROR: The project you're trying to build requires Bazel 3.7.2 (specified in /home/peterbabic/throw/tensorflow/.bazelversion), but it wasn't found in /usr/bin.
</span><span>
</span><span>Bazel binaries for all official releases can be downloaded from here:
</span><span> https://github.com/bazelbuild/bazel/releases
</span><span>
</span><span>Please put the downloaded Bazel binary into this location:
</span><span> /usr/bin/bazel-3.7.2-linux-x86_64
</span></code></pre>
<p>There are two AUR packages marked precisely with the version <code>3.7.2</code>, a
<a href="https://aur.archlinux.org/packages/bazel3/">bazel3</a> and
<a href="https://aur.archlinux.org/packages/bazel3-bin/">bazel3-bin</a>.</p>
<p>The former required importing a GPG key manually via:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">gpg --keyserver</span><span> keys.openpgp.org</span><span style="color:#bf616a;"> --recv-keys</span><span> 3D5919B448457EE0
</span></code></pre>
<p>The latter worked for me.</p>
<blockquote>
<p><strong>Caution:</strong> always inspect contents of the AUR packages before
installing!</p>
</blockquote>
<p>Check the bazel version, just to be sure:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">bazel --version
</span><span style="color:#65737e;"># bazel 3.7.2
</span></code></pre>
<p>I am not sure how to get it to work with the official repository version at
this point, though.</p>
<h3 id="step-3-adjusting-java-settings">Step 3: Adjusting Java settings</h3>
<p>Just installing bazel might still not be enough, especially if multiple
Java versions are present on the machine. Mine had <code>jdk-openjdk</code> installed,
which at the time of writing was sitting at version 17. There are
multiple hints, the most obvious being this possible error:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Extracting Bazel installation...
</span><span>FATAL: Could not find system javabase. Ensure JAVA_HOME is set, or javac is on your PATH.
</span></code></pre>
<p>Where could <code>javac</code> reside? It is possible to find out:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pacman -F</span><span> javac
</span></code></pre>
<p>This gets us some hints:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>extra/bash-completion 2.11-1
</span><span> usr/share/bash-completion/completions/javac
</span><span>extra/java-environment-common 3-3 [installed]
</span><span> usr/bin/javac
</span><span>extra/jdk11-openjdk 11.0.10.u9-1 [installed: 11.0.13.u8-1]
</span><span> usr/lib/jvm/java-11-openjdk/bin/javac
</span><span>extra/jdk7-openjdk 7.u261_2.6.22-1
</span><span> usr/lib/jvm/java-7-openjdk/bin/javac
</span><span>extra/jdk8-openjdk 8.u282-1 [installed: 8.u292-1]
</span><span> usr/lib/jvm/java-8-openjdk/bin/javac
</span><span>extra/jre-openjdk-headless 15.0.2.u7-1 [installed: 17.u35-1]
</span><span> usr/lib/jvm/java-15-openjdk/bin/javac
</span></code></pre>
<p>I was confused at this point, as many sources suggested this wrong value:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#b48ead;">export </span><span style="color:#bf616a;">JAVA_HOME</span><span>=</span><span style="color:#a3be8c;">/usr/lib/jvm/default
</span></code></pre>
<p>Which produced this error:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>WARNING: Ignoring JAVA_HOME, because it must point to a JDK, not a JRE.
</span></code></pre>
<p>Another useful hint was there when installing bazel from the official
repository:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Packages (4) jdk11-openjdk-11.0.13.u8-1 jre11-openjdk-11.0.13.u8-1
</span><span> jre11-openjdk-headless-11.0.13.u8-1 bazel-4.2.0-2
</span></code></pre>
<p>It installed the <code>jdk11-openjdk</code> family as dependencies. This can be further
confirmed:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">yay -Qi</span><span> bazel3 | </span><span style="color:#bf616a;">grep -i</span><span> depend
</span><span style="color:#65737e;"># Depends On : java-environment=11
</span></code></pre>
<p>The installed bazel requires JDK 11, so I opted for the following:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#b48ead;">export </span><span style="color:#bf616a;">JAVA_HOME</span><span>=</span><span style="color:#a3be8c;">/usr/lib/jvm/java-11-openjdk
</span></code></pre>
<p>Bingo! It worked.</p>
<blockquote>
<p><strong>Note:</strong> to adjust the working Java environment consult
<code>archlinux-java help</code>.</p>
</blockquote>
<h3 id="step-4-configure-the-package">Step 4: Configure the package</h3>
<p>Here it gets tricky:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>./configure
</span></code></pre>
<p>When pressing ENTER through all the prompts, the following happens:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>You have bazel 3.7.2 installed.
</span><span>Please specify the location of python. [Default is /usr/bin/python3]:
</span><span>
</span><span>
</span><span>Found possible Python library paths:
</span><span> /usr/lib/python3.9/site-packages
</span><span>Please input the desired Python library path to use. Default is [/usr/lib/python3.9/site-packages]
</span><span>
</span><span>Do you wish to build TensorFlow with ROCm support? [y/N]:
</span><span>No ROCm support will be enabled for TensorFlow.
</span><span>
</span><span>Do you wish to build TensorFlow with CUDA support? [y/N]:
</span><span>No CUDA support will be enabled for TensorFlow.
</span><span>
</span><span>Do you wish to download a fresh release of clang? (Experimental) [y/N]:
</span><span>Clang will not be downloaded.
</span><span>
</span><span>Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -Wno-sign-compare]:
</span><span>
</span><span>
</span><span>Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]:
</span><span>Not configuring the WORKSPACE for Android builds.
</span><span>
</span><span>Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
</span><span> --config=mkl # Build with MKL support.
</span><span> --config=mkl_aarch64 # Build with oneDNN and Compute Library for the Arm Architecture (ACL).
</span><span> --config=monolithic # Config for mostly static monolithic build.
</span><span> --config=numa # Build with NUMA support.
</span><span> --config=dynamic_kernels # (Experimental) Build kernels into separate shared objects.
</span><span> --config=v1 # Build with TensorFlow 1 API instead of TF 2 API.
</span><span>Preconfigured Bazel build configs to DISABLE default on features:
</span><span> --config=nogcp # Disable GCP support.
</span><span> --config=nonccl # Disable NVIDIA NCCL support.
</span><span>Configuration finished
</span></code></pre>
<p>But we need to adjust the flags to get the instruction support, remember?
Without modifying anything, we would end up even worse off than we
started:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:
</span><span> SSE3 SSE4.1 SSE4.2 AVX AVX2 FMA
</span><span>To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
</span></code></pre>
<p>Now we would have six possible CPU instruction sets not utilized instead of
just two with the release-grade <code>tfjs-node</code> package. The trick is to supply
the correct flags during the <code>--config=opt</code> question:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -Wno-sign-compare]: -mavx -mavx2 -mfma -msse3 -msse4.1 -msse4.2
</span></code></pre>
<p>The sad part here is that I have no idea how to detect the parameters
beforehand. I had to compile without the flags and then recompile with all
the ones that were reportedly missing. If you know how to do it reliably in
one go, please let me know.</p>
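<p>One possible approach, sketched below, is to read the flag list from
<code>/proc/cpuinfo</code> and map the entries TensorFlow complains about to their GCC
counterparts. This is an assumption on my part rather than a verified
recipe; note that SSE3 shows up in <code>cpuinfo</code> under the name <code>pni</code>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"># Collect the CPU flag list (empty on non-Linux systems)
cpuflags=$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null | cut -d: -f2)
opt=""
# Map /proc/cpuinfo names to the corresponding GCC -m flags
while read -r cpu gccflag; do
    if printf '%s\n' $cpuflags | grep -qx "$cpu"; then
        opt="$opt $gccflag"
    fi
done <<EOF
pni -msse3
sse4_1 -msse4.1
sse4_2 -msse4.2
avx -mavx
avx2 -mavx2
fma -mfma
EOF
echo "Suggested flags:$opt"
</code></pre>
<p>The resulting list can then be pasted into the <code>--config=opt</code> answer of
<code>./configure</code>.</p>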
<h3 id="step-5-build-the-libtensorflow-package">Step 5: Build the libtensorflow package</h3>
<p>The build, no matter the flags specified, is initiated like this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">bazel</span><span> build</span><span style="color:#bf616a;"> --config</span><span>=opt</span><span style="color:#bf616a;"> --config</span><span>=monolithic //tensorflow/tools/lib_package:libtensorflow
</span></code></pre>
<p>The build process produces a cryptic output ending with this mess
(truncated):</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>DEBUG: Repository io_bazel_rules_docker instantiated at:
</span><span> /home/peterbabic/throw/tensorflow/WORKSPACE:23:14: in <toplevel>
</span><span> /home/peterbabic/throw/tensorflow/tensorflow/workspace0.bzl:108:34: in workspace
</span><span> /home/peterbabic/.cache/bazel/_bazel_peterbabic/0a4f750584c5f2d6b197cb4128047fc4/external/bazel_toolchains/repositories/repositories.bzl:35:23: in repositories
</span><span>Repository rule git_repository defined at:
</span><span> /home/peterbabic/.cache/bazel/_bazel_peterbabic/0a4f750584c5f2d6b197cb4128047fc4/external/bazel_tools/tools/build_defs/repo/git.bzl:199:33: in <toplevel>
</span><span>INFO: Analyzed target //tensorflow/tools/lib_package:libtensorflow (0 packages loaded, 0 t
</span><span>argets configured).
</span><span>INFO: Found 1 target...
</span><span>Target //tensorflow/tools/lib_package:libtensorflow up-to-date:
</span><span> bazel-bin/tensorflow/tools/lib_package/libtensorflow.tar.gz
</span><span>INFO: Elapsed time: 10752.573s, Critical Path: 516.72s
</span><span>INFO: 4050 processes: 166 internal, 3884 local.
</span><span>INFO: Build completed successfully, 4050 total actions
</span></code></pre>
<p>Not very interesting. We can see it took just a few seconds short of a full
three hours. Apart from that, there are some hints about where the build
files actually reside, as to my surprise they were not anywhere near the
<code>tensorflow</code> repository folder. In fact, even the output did not help me
too much due to the directory structure. I had to do the following:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">fd -HI</span><span> libtensorflow.tar.gz </span><span style="color:#bf616a;">~
</span><span style="color:#65737e;">#/home/peterbabic/.cache/bazel/_bazel_peterbabic/0a4f750584c5f2d6b197cb4128047fc4/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/tools/lib_package/libtensorflow.tar.gz
</span></code></pre>
<p>Or in short, the file we look for is <em>very</em> deep inside the
<code>~/.cache/bazel</code> directory.</p>
<h3 id="step-5-replace-tfjs-node-dependencies">Step 6: Replace tfjs-node dependencies</h3>
<p>The last step is to get the compiled dependencies into the project. Adapt the
following lines as needed:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">cp ~</span><span>/long-bazel-path/libtensorflow.tar.gz </span><span style="color:#bf616a;">~</span><span>/myproject/node_modules/@tensorflow/tfjs-node/deps
</span><span style="color:#96b5b4;">cd </span><span style="color:#bf616a;">~</span><span>/myproject/node_modules/@tensorflow/tfjs-node/deps
</span><span style="color:#bf616a;">tar -xf</span><span> libtensorflow.tar.gz
</span></code></pre>
<p>Now the node project should not report the error, and TensorFlow should run
as efficiently on your hardware as possible. Some users reported speed
increases ranging from 2x to 30x. I do not have any data on this yet, but if
true, it is definitely worth going through all this hassle. Anyway, I hope
this was useful to you, and if not, maybe you at least learned something
new. Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://stackoverflow.com/questions/47068709/your-cpu-supports-instructions-that-this-tensorflow-binary-was-not-compiled-to-u">https://stackoverflow.com/questions/47068709/your-cpu-supports-instructions-that-this-tensorflow-binary-was-not-compiled-to-u</a></li>
<li><a href="https://stackoverflow.com/questions/41293077/how-to-compile-tensorflow-with-sse4-2-and-avx-instructions?rq=1">https://stackoverflow.com/questions/41293077/how-to-compile-tensorflow-with-sse4-2-and-avx-instructions?rq=1</a></li>
<li><a href="https://www.tensorflow.org/install/source">https://www.tensorflow.org/install/source</a></li>
<li><a href="https://bbs.archlinux.org/viewtopic.php?id=222751">https://bbs.archlinux.org/viewtopic.php?id=222751</a></li>
</ul>
Five differences between blog and microblog2021-11-09T00:00:00+00:002021-11-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/five-differences-betwen-blog-and-microblog/<p>Are you using Twitter? Or maybe you explore the depths of the
<a href="https://fediverse.party/">Fediverse</a>, for instance through
<a href="https://mastodon.peterbabic.dev/@peter">Mastodon</a>. Or maybe you use both.
These services tend to be called microblogging platforms. And maybe you
also have your own blog. And if not, at the very least, you are reading
this, published on my blog. So what are the differences between a blog and
a microblog, anyway?</p>
<h2 id="1-place-of-publication">1. Place of publication</h2>
<p>The blog is generally published on the web, under some domain or maybe a
<code>blog.</code> subdomain. Yeah, this is just a basic post, but maybe you'll stick
with me till the end. Microblog on the other hand is published in the
stream. What the hell is a stream?</p>
<p>Well, maybe stream is just a made-up term. It surely isn't defined in the
scientific literature about the topic, if there even is any such literature
in the first place. But I imagine a stream in this context to be pieces
coming in succession, and that is exactly what publishing content on a
microblog is. Even flowing water is molecules (pieces) moving in
succession.</p>
<h2 id="2-the-length">2. The length</h2>
<p>The length of the published content does not need much explanation, but it
is a very relevant factor. While blog posts are usually long, microblog
posts are rather short. Twitter itself had an
artificially set 140-character limit on a tweet for quite a while, until it
got upgraded to 280 characters.</p>
<p>The Fediverse has a slightly different take on the issue. Mastodon has a
default 500-character limit on a toot (a toot is the equivalent of a tweet,
but in the Fediverse). 500 characters is well enough for short posts, but
some users did not like its hard-coded nature, so, for instance, in
Pleroma this limit is configurable by the instance administrator on the
fly. Some instances have it set to very high numbers, like 8000, basically
imposing almost no limit on the length of the post, which blurs the line a
little.</p>
<h2 id="3-amount-of-effort-required">3. Amount of effort required</h2>
<p>With the length of the content increasing, surely the required effort
increases as well, unless the author is stealing it or it is machine-produced
somehow. With that being said, a full blog post, apart from
actually writing it, also tends to require sources, photos, screenshots,
charts and, many times, a lot of research.</p>
<p>This is absolutely contrary to the microblog post, which many times focuses
on just one piece of information, like a link or a GIF. This characteristic
is imposed by the character limit. The effort tends to be rather
minimal, and thus authors can afford to toot or tweet many times a day.</p>
<h2 id="4-direction-of-the-communication">4. Direction of the communication</h2>
<p>Generally, this category is very tight, in the sense that both blog and
microblog posts are truly just one-way. The author writes, the audience
reads. But the surrounding environment is different. With a microblog post,
any user on the platform can interact with the post in the same format
and with the exact same tool. There is basically no hierarchy. Sure, there
is the original post and then there are replies, but there is no hierarchy
among users. Everyone is on level ground. This, in my opinion, strengthens
the sense of connection, one of the needs of the individual.</p>
<p>Blog posts are different in this respect, because the best one can do is
usually to insert a comment section below the blog post. While this
indeed solves a lot of problems, it also creates a bunch of them at the
same time. The comments do not propagate through the network very well, as
they are endemic to just one URL. Comments can be abused by bots if not
handled right. Then there is the problem of identity. Users either need to
register on every single site, or, if some other company handles comments
for you, there is a potential for privacy issues. And do not even
get me started on the issue of comments on statically generated sites,
which are getting more and more popular as the JAMstack. This topic could
probably fill its own blog.</p>
<h2 id="5-thematic">5. Thematic</h2>
<p>The last point in this post is the topic of the content. Blogs tend to have a
chosen topic. And yes, even a personal journal kind of blog has a topic, the
topic being the life of the author. There is a sense of continuity. Of
course, as everywhere, there are exceptions here and there, but in general,
well curated blogs tend to stick to the topic, otherwise readers can get
discouraged.</p>
<p>Microblog posts can have a topic too, and in the overall picture they very
much do. In the end, the human brain is really terrible at generating
randomness. This is, by the way, why you should always use a password manager
like KeePassXC to generate passwords or passphrases for you. You can even
have the passwords
<a href="/blog/sync-keepass-passwords-between-computer-phone/">synchronized with the phone</a>.</p>
<p>But on short time scales, microblog posts may even appear to be
very diverse, even random. A link here, a photo there, an anecdote from
the author's life in between.</p>
<h1 id="conclusion">Conclusion</h1>
<p>This post explained 5 differences between a blog post proper and a
microblog post - a tweet or a toot, for instance. Here is a table that
summarizes all the differences briefly:</p>
<table><thead><tr><th>Blogging</th><th>Microblogging</th></tr></thead><tbody>
<tr><td>published on the web</td><td>published in stream</td></tr>
<tr><td>medium to long form</td><td>short form</td></tr>
<tr><td>a lot of effort</td><td>minimal effort</td></tr>
<tr><td>unidirectional</td><td>interactivity</td></tr>
<tr><td>topic oriented</td><td>diversity</td></tr>
</tbody></table>
<p>We can see that blog posts tend to be topic oriented, longer, higher
effort pieces published directly on the web for readers to consume.
Microblog posts, on the other hand, tend to be short, lower effort, diverse
pieces published inside a platform that prompts interactivity. So that's
it.</p>
<p>I recommend you head over to the Fediverse and leave me a toot there;
it is where I currently spend some of my free time.</p>
A sad downturn for my OnlyOffice setup2021-11-05T00:00:00+00:002021-11-05T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/sad-downturn-for-my-onlyoffice-setup/<p>Today I needed to have a collaborative document available via a
simple link, so anyone with it could edit it and add some thoughts into the
document. With
<a href="/blog/install-nextcloud-onlyoffice-postgres/">my NextCloud setup</a> it
should be no problem, I thought.</p>
<p>In fact, it was not a problem at all, just create a document, enable
editing and copy-paste the link into the email. I upgraded NextCloud to
version 22 fairly recently. With it, maybe OnlyOffice got upgraded too,
but I still do not understand the stack well enough to be sure.</p>
<p>All my previous experiences with OnlyOffice were very positive. I have
<a href="/blog/onlyoffice-proved-to-be-useful/">posted some praise</a> about it
already, and I was confident the software is usable. Yet, no software is
bug free, unfortunately.</p>
<p>The client responded that there is an error...</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>An error occurred during the work with the document.
</span><span>Use the 'Download as...' option to save the file backup to your computer hard drive
</span></code></pre>
<p>This is the dialog box that I was also able to experience, and it is pretty
annoying. There is apparently no data loss, but the dialog box cannot be
dismissed from the user interface. In fact, I have no idea how to dismiss
it; it started working again for me and for the client some 10 minutes
later, without any obvious change.</p>
<p>Another solution has to be used, as there is clearly no evident way out.
This feels really unfortunate.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://github.com/ONLYOFFICE/DocumentServer/issues/833">https://github.com/ONLYOFFICE/DocumentServer/issues/833</a></li>
</ul>
Using electronic ID on Arch in Slovakia pt.22021-11-03T00:00:00+00:002021-11-03T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/using-electronic-id-on-arch-in-slovakia-pt2/<p>A few days prior I have finally
<a href="/blog/using-electronic-id-on-arch-in-slovakia/">received my electronic ID or eID</a>
that can be used to streamline the communication with various official
bureaus in the country. Since it is Java based, it has support for
multiple OSes by default, including Linux. Although, as I stated in the
previous article, officially only the Debian, Ubuntu and Mint distributions are
supported, I was able to use all the functionality that was within my
reach on Arch as well. It just requires a little bit of configuration, as
is the norm with this rolling release, cutting edge distro.</p>
<p>These are the required packages I had to have installed at the time of
writing on a fully updated Arch; some of them are available in the AUR:</p>
<ul>
<li>Card reader via <code>pcsclite</code> and <code>ccid</code></li>
<li>Java 8 via <code>jre8-openjdk</code> and <code>jre8-openjdk-headless</code></li>
<li>Java 8 JFX via <code>java8-openjfx</code></li>
<li>IcedTea via <code>icedtea-web</code></li>
<li>eID client via <code>eidklient</code></li>
<li>web signer via <code>disig-web-signer</code></li>
</ul>
<p>The above can be installed with the following:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">yay -S</span><span> pcsclite ccid jre8-openjdk jre8-openjdk-headless \
</span><span> java8-openjfx icedtea-web eidklient disig-web-signer
</span></code></pre>
<p>Next, download, extract, mark as executable and run <code>D.Launcher</code>. At the
time of writing, version <code>1.1.0.1a</code> was available for GNU/Linux x64 at:</p>
<p><a href="https://www.slovensko.sk/static/zep/apps/DLauncher.linux.x86_64.zip">https://www.slovensko.sk/static/zep/apps/DLauncher.linux.x86_64.zip</a></p>
<p>As noted in the previous articles, start the <code>pcscd</code> service to access the
built-in card reader:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> systemctl enable pcscd</span><span style="color:#bf616a;"> --now
</span></code></pre>
<p>There are two external card readers supplied by the bureau. One of them
might require a driver - search for <code>bit4id</code> in the AUR.</p>
<p>Finally, create a file <code>~/.config/icedtea-web/deployment.properties</code> with
the following content:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>deployment.jre.dir=/usr/lib/jvm/java-8-openjdk
</span></code></pre>
<p>The above will be different depending on the distribution used.</p>
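<p>The config file can also be created in one step from the shell. A minimal sketch, assuming the Arch path from above (adjust the path for other distributions):</p>

```shell
# Write the IcedTea-Web config pointing at the Arch Java 8 location.
mkdir -p ~/.config/icedtea-web
printf 'deployment.jre.dir=%s\n' /usr/lib/jvm/java-8-openjdk \
  > ~/.config/icedtea-web/deployment.properties
# Show the result for verification:
cat ~/.config/icedtea-web/deployment.properties
```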
<h2 id="openjdk-location">OpenJDK location</h2>
<p>For Arch Linux, the folder where Java 8 resides can be double-checked with
pacman:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pacman -Fl</span><span> jre8-openjdk | </span><span style="color:#bf616a;">grep</span><span> lib
</span></code></pre>
<p>Which should output something very similar to:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>jre8-openjdk usr/lib/
</span><span>jre8-openjdk usr/lib/jvm/
</span><span>jre8-openjdk usr/lib/jvm/java-8-openjdk/
</span><span>jre8-openjdk usr/lib/jvm/java-8-openjdk/jre/
</span><span>jre8-openjdk usr/lib/jvm/java-8-openjdk/jre/bin/
</span><span>jre8-openjdk usr/lib/jvm/java-8-openjdk/jre/bin/policytool
</span><span>jre8-openjdk usr/lib/jvm/java-8-openjdk/jre/lib/
</span><span>jre8-openjdk usr/lib/jvm/java-8-openjdk/jre/lib/amd64/
</span><span>jre8-openjdk usr/lib/jvm/java-8-openjdk/jre/lib/amd64/libjsound.so
</span><span>jre8-openjdk usr/lib/jvm/java-8-openjdk/jre/lib/amd64/libjsoundalsa.so
</span><span>jre8-openjdk usr/lib/jvm/java-8-openjdk/jre/lib/amd64/libsplashscreen.so
</span></code></pre>
<p>Modify the content of the file above if needed, but the existing location
is not really going to change, unless a different Java version is used for
this stack in the future.</p>
<p>Anyway, this should be it - official electronic communication should
be possible with the above steps, unless I missed something. It is quite
time consuming to completely re-check everything, as there are many steps
involved. And some steps, like generating the initial certificates, are done
only once or very infrequently. I am leaving this here in case someone
stumbles upon it.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://platforma.slovensko.digital/t/debian-10-ubuntu-18-04-a-dsigner/6401/26">https://platforma.slovensko.digital/t/debian-10-ubuntu-18-04-a-dsigner/6401/26</a></li>
<li><a href="https://platforma.slovensko.digital/t/wip-otazky-a-odpovede-ohladom-podpisovania-d-signer-a-linux/6757">https://platforma.slovensko.digital/t/wip-otazky-a-odpovede-ohladom-podpisovania-d-signer-a-linux/6757</a></li>
</ul>
Confusion with dashes and underscores2021-10-28T00:00:00+00:002021-10-28T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/confusion-with-dashes-and-underscores/<p>This issue is very strange. I still do not comprehend what happened
exactly, but my brain tells me that somehow the docker-compose project
changed its automatic instance name generation from using underscores to
hyphens.</p>
<p>I do not want to go and replicate the issue right now on the live server,
but I was able to track down the most relevant error message from the
search history:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>ActiveRecord::NoDatabaseError: could not translate host name "mastodon_db_1" to address: Name or service not known
</span></code></pre>
<p>What does that mean?</p>
<h2 id="how-i-got-here">How I got here</h2>
<p>I am really not sure. I was doing some server maintenance around docker
services and then the errors started to pour in, even from the ones that I did
not touch. Now that all the issues are resolved I feel strange, because
things work but obviously I do not know exactly why.</p>
<p>On the other hand, no one can know everything and without professional
training I am left to tinker and hack around the docker world. And again,
lessons are learned until things start to work. The happy part of this is I
was able to get everything back to normal. The sad thing is I still do not
know what the hell has happened.</p>
<h2 id="tracing-the-problem">Tracing the problem</h2>
<p>The obvious part was that the Mastodon instance would not open in the web
tab with the HTTP 5xx error code. There were various repeating errors in
the docker log, but all of them were stemming from the error message above.
Yet I had no way to know which single one error message from that wall of
text was the culprit.</p>
<h2 id="the-solution">The solution</h2>
<p>Fortunately I noticed that error message, and what it says is actually
quite clear: there is no host with the name <code>mastodon_db_1</code> to connect to.
Right. All the other error messages were probably there because the
database was not accessible. I knew the services were defined in the
<code>docker-compose.yml</code> file. And I knew equally well it had worked a few hours
before. And I knew I had not touched that folder at all, so something else
had changed.</p>
<p>What was especially striking was that the services defined in docker-compose
were started under different names - in the form of <code>mastodon-db-1</code>. Ripgrepping around
the repository finally led me to the <code>.env.production</code> file that contained
the value <code>mastodon_db_1</code>. The solution?</p>
<pre data-lang="diff" style="background-color:#2b303b;color:#c0c5ce;" class="language-diff "><code class="language-diff" data-lang="diff"><span style="color:#bf616a;">- DB_HOST=mastodon_db_1
</span><span style="color:#a3be8c;">+ DB_HOST=mastodon-db-1
</span><span>
</span><span style="color:#bf616a;">- REDIS_HOST=mastodon_redis_1
</span><span style="color:#a3be8c;">+ REDIS_HOST=mastodon-redis-1
</span><span>
</span><span style="color:#bf616a;">- ES_HOST=mastodon_es_1
</span><span style="color:#a3be8c;">+ ES_HOST=mastodon-es-1
</span></code></pre>
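<p>Since all three values follow the same underscore-to-hyphen pattern, the edit can be scripted with <code>sed</code>. A small sketch - the sample lines below stand in for the real <code>.env.production</code>, and the <code>mastodon</code> project prefix is assumed:</p>

```shell
# Sample file mimicking the relevant .env.production lines:
printf '%s\n' 'DB_HOST=mastodon_db_1' 'REDIS_HOST=mastodon_redis_1' \
  'ES_HOST=mastodon_es_1' > .env.production
# Rewrite mastodon_<service>_1 to mastodon-<service>-1 in place:
sed -i 's/mastodon_\([a-z]*\)_1/mastodon-\1-1/g' .env.production
cat .env.production   # DB_HOST=mastodon-db-1, and so on
```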
<p>Similar changes needed to be made in other projects, so I believe there had
to be some update in the docker ecosystem somewhere, but I have not been able to
pinpoint it exactly yet. Searching does not show any breaking change in the
past month or two. Maybe I am not searching hard enough ...</p>
Bluetooth mouse unresponsive after boot2021-10-27T00:00:00+00:002021-10-27T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/bluetooth-mouse-unresponsive-after-boot/<p>I consider the trusty Logitech MX Master 3, the
<a href="/blog/the-most-useful-computer-mouse/">computer mouse I use</a>, to be a great
addition to my toolset. Even though I try to use the mouse as little as
possible when the keyboard is the main input source (when I am writing
something), the mouse is still very important.</p>
<p>Do not get me wrong, I am probably just another sheep that jumped on the
bandwagon of people recommending this mouse, but I have no desire to
replace it, as it has all the features I need.</p>
<p>Specifically, the source switch is one such feature. The mouse has a button
on its bottom that lets you switch the source, or the device it is
currently connected to. There are three options. Number 1 is for the USB
dongle, or Unifying Receiver as Logitech calls it. It is unifying because
it lets you connect multiple Logitech devices simultaneously, for instance
a mouse and a keyboard. I do not have a keyboard yet, as I am still undecided
about the ones offered by Logitech.</p>
<p>But I own the mouse and with it, the Unifying Receiver as well. The thing
is, the dongle is not plugged into the laptop; instead, I keep it plugged
into the ThinkPad dock, as I explained in the article above. When using the
dongle (source 1) plugged into the laptop or into the dock, the
mouse works without any problem, even when dual booting.</p>
<h2 id="the-problem">The problem</h2>
<p>The situation changes when I am traveling. I obviously do not take the dock
with me, nor the dongle. The mouse has two more sources (numbered 2 and 3)
that allow pairing two Bluetooth devices to their respective numbers. So
when on the go, I just switch the source from the dongle to Bluetooth and
everything's fine.</p>
<p>This is especially important when dual booting during the travel, as the
source number 3 is paired with another OS. The classic Bluetooth mouse that
only has one source is
<a href="https://wiki.archlinux.org/title/Bluetooth#Dual_boot_pairing">much harder to configure for dual boot</a>.</p>
<p>This fairy tale was going fine for some time, but maybe two months ago I
started experiencing an annoying issue with Bluetooth. After any kind
of resume from suspend or hibernate, or even after a restart or cold start,
the mouse does connect as it should but is immediately unresponsive.
Yesterday I decided to find a solution for good; it had worked without an
issue for more than a year already, so it had to be possible.</p>
<h2 id="symptoms-in-logs">Symptoms in logs</h2>
<p>Apart from the most obvious symptom with unresponsive mouse, there are
other hints that something is amiss. Here are two eye-catching lines from
<code>journalctl -xe</code>:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>bluetoothd[411]: profiles/input/hog-lib.c:set_report_cb() Error setting Report value: Unexpected error code
</span></code></pre>
<p>And this one:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>bluetoothd[411]: profiles/network/connection.c:connect_cb() connect to 0C:CB:85:02:4E:D4: Host is down (112)
</span></code></pre>
<p>The situation in <code>dmesg</code> is not happy either. I've found this:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>[ 2065.385987] Bluetooth: hci0: Received unexpected HCI Event 00000000
</span><span>[ 2065.386013] Bluetooth: hci0: unexpected event for opcode 0x0000
</span><span>[ 2065.431367] Bluetooth: hci0: Received unexpected HCI Event 00000000
</span><span>[ 2065.622357] Bluetooth: hci0: Received unexpected HCI Event 00000000
</span><span>[ 2065.633500] Bluetooth: hci0: Received unexpected HCI Event 00000000
</span><span>[ 2067.354587] Bluetooth: hci0: FW download error recovery failed (-110)
</span><span>[ 2067.355066] Bluetooth: hci0: Hardware error 0x00
</span><span>[ 2069.488056] Bluetooth: hci0: Controller not accepting commands anymore: ncmd = 0
</span><span>[ 2069.488072] Bluetooth: hci0: Injecting HCI hardware error event
</span><span>[ 2077.380961] Bluetooth: hci0: Reset after hardware error failed (-110)
</span><span>[ 2077.492030] Bluetooth: hci0: Received unexpected HCI Event 00000000
</span><span>[ 2079.514574] Bluetooth: hci0: Reading Intel version information failed (-110)
</span><span>[ 2079.514587] Bluetooth: hci0: Intel Read version failed (-110)
</span><span>[ 2079.514585] Bluetooth: hci0: command 0x0c03 tx timeout
</span><span>[ 2081.647859] Bluetooth: hci0: command 0xfc05 tx timeout
</span><span>[ 2081.648039] Bluetooth: hci0: Intel reset sent to retry FW download
</span></code></pre>
<p>And also this:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>[ 2082.239431] Bluetooth: hci1: Bootloader revision 0.0 build 26 week 38 2015
</span><span>[ 2082.240460] Bluetooth: hci1: Device revision is 16
</span><span>[ 2082.240467] Bluetooth: hci1: Secure boot is enabled
</span><span>[ 2082.240471] Bluetooth: hci1: OTP lock is enabled
</span><span>[ 2082.240474] Bluetooth: hci1: API lock is enabled
</span><span>[ 2082.240476] Bluetooth: hci1: Debug lock is disabled
</span><span>[ 2082.240479] Bluetooth: hci1: Minimum firmware build 1 week 10 2014
</span><span>[ 2082.248290] Bluetooth: hci1: Found device firmware: intel/ibt-12-16.sfi
</span><span>[ 2084.151749] Bluetooth: hci1: Waiting for firmware download to complete
</span><span>[ 2084.152457] Bluetooth: hci1: Firmware loaded in 1859559 usecs
</span><span>[ 2084.152625] Bluetooth: hci1: Waiting for device to boot
</span><span>[ 2084.165433] Bluetooth: hci1: Device booted in 12616 usecs
</span><span>[ 2084.165710] Bluetooth: hci1: Found Intel DDC parameters: intel/ibt-12-16.ddc
</span><span>[ 2084.168419] Bluetooth: hci1: Applying Intel DDC parameters completed
</span><span>[ 2084.169468] Bluetooth: hci1: Reading supported features failed (-16)
</span><span>[ 2084.170482] Bluetooth: hci1: Firmware revision 0.1 build 212 week 30 2021
</span><span>[ 2085.744578] logitech-hidpp-device 0005:046D:B023.000D: Device not connected
</span></code></pre>
<p>This line popped up as well:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>[ 1338.544489] Bluetooth: hci0: urb 0000000066025a26 failed to resubmit (2)
</span></code></pre>
<p>All of the above could be found on the latest kernel available on Arch,
which is currently <code>5.14.14</code>. All three official mainline
kernels were affected:</p>
<ul>
<li><a href="https://archlinux.org/packages/core/x86_64/linux/">linux</a></li>
<li><a href="https://archlinux.org/packages/extra/x86_64/linux-zen/">linux-zen</a></li>
<li><a href="https://archlinux.org/packages/extra/x86_64/linux-hardened/">linux-hardened</a></li>
</ul>
<p>The situation on the LTS kernel
<a href="https://archlinux.org/packages/core/x86_64/linux-lts/">linux-lts</a>, which
currently sits at version <code>5.10.75-1</code>, is basically identical, which is
quite sad, as I believed the LTS kernel would simply solve this problem.</p>
<h2 id="temporary-workarounds">Temporary workarounds</h2>
<p>For the previous two months I had to do one of the following to get the
mouse to work on Bluetooth again.</p>
<h3 id="option-1-power-cycle-the-mouse">Option 1: Power cycle the mouse</h3>
<p>The most obvious fix is to turn the mouse off and on again via the upper
switch on its bottom. The mouse reconnects and becomes responsive. It is
annoying to do this many times a day.</p>
<h3 id="option-2-cycle-the-source">Option 2: Cycle the source</h3>
<p>Another thing that forces the mouse to reconnect is to press the source
button described above exactly three times. Since there are three sources,
it ends up on exactly the same source. This is exactly as annoying as the
previous option.</p>
<h3 id="option-3-gnome-bluetooth-gui">Option 3: Gnome Bluetooth GUI</h3>
<p>Navigating into the Bluetooth settings in Gnome and disconnecting and
re-connecting the mouse via the GUI switch also does the job. But since it is
a GUI, it requires many clicks with the touchpad. Absolutely the worst option.</p>
<h3 id="option-4-bluetoothctl-cli">Option 4: bluetoothctl CLI</h3>
<p>Obviously, it is possible to use the command line to resolve the issue. The
command <code>bluetoothctl</code> from the package
<a href="https://archlinux.org/packages/extra/x86_64/bluez-utils/">bluez-utils</a> can
do the job:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">bluetoothctl</span><span> -- disconnect MAC_ADDRESS
</span></code></pre>
<p>Which outputs the following:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Attempting to disconnect from E7:B9:C9:15:DF:80
</span><span>[CHG] Device E7:B9:C9:15:DF:80 ServicesResolved: no
</span><span>Successful disconnected
</span></code></pre>
<p>MAC address of the mouse can be found for instance like this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">bluetoothctl</span><span> -- devices | </span><span style="color:#bf616a;">grep</span><span> Master | </span><span style="color:#bf616a;">cut -d</span><span>' '</span><span style="color:#bf616a;"> -f2
</span></code></pre>
<p>The disconnect seems to be enough; the mouse reconnects itself and
becomes responsive automatically. Not sure what to make of it.</p>
<blockquote>
<p><strong>Note:</strong> using <code>bluetoothctl</code> in the interactive mode (just running the
command) offers completion for commands and MAC addresses too, so this
way might be even faster when not inside a script.</p>
</blockquote>
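<p>The find-the-MAC and disconnect steps can be glued together into a tiny helper. In the sketch below, the sample line mimics the <code>bluetoothctl -- devices</code> output so the parsing can be demonstrated without a live Bluetooth stack; in the real script, the <code>echo</code> would be replaced by the actual <code>bluetoothctl -- devices</code> call:</p>

```shell
# Mimicked `bluetoothctl -- devices` output line for the MX Master 3:
sample='Device E7:B9:C9:15:DF:80 MX Master 3'
# Extract the MAC address (second space-separated field):
mac=$(echo "$sample" | grep Master | cut -d' ' -f2)
echo "$mac"   # E7:B9:C9:15:DF:80
# bluetoothctl -- disconnect "$mac"   # the actual reconnect-forcing call
```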
<h2 id="what-did-not-work">What did not work</h2>
<p>I've found a reasonable solution, but before I got there, I had to try a
few things. For completeness, I include the things I tried that did not
work for me.</p>
<h3 id="fail-1-identityresolvingkey">Fail 1: IdentityResolvingKey</h3>
<p>First thing I tried multiple times already is documented in the
<a href="https://wiki.archlinux.org/title/Bluetooth#Problems_with_all_BLE_devices_on_kernel_5.9+">Arch Wiki</a>
and requires removing two lines in the
<code>/var/lib/bluetooth/adapter_mac/device_mac/info</code> file:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>[IdentityResolvingKey]
</span><span>Key=...
</span></code></pre>
<p>This is probably the most referred-to solution to this kind of problem with a
Bluetooth mouse, but it did not make any difference for me.</p>
<h3 id="fail-2-usb-modeswitch">Fail 2: usb_modeswitch</h3>
<p>Another
<a href="https://wiki.archlinux.org/title/Bluetooth#Adapter_disappears_after_suspend/resume">solution in the wiki</a>
just next to the one above is to use
<a href="https://archlinux.org/packages/community/x86_64/usb_modeswitch/">usb_modeswitch</a>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> usb_modeswitch</span><span style="color:#bf616a;"> -R -v</span><span> vendor_ID</span><span style="color:#bf616a;"> -p</span><span> product_ID
</span></code></pre>
<h3 id="fail-3-modprobe-btusb">Fail 3: modprobe btusb</h3>
<p>This does not bring the mouse back to responsiveness, but it is actually a
very good way to force logs into <code>dmesg</code>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> rmmod btusb && </span><span style="color:#bf616a;">sudo</span><span> modprobe btusb
</span></code></pre>
<p>This was suggested on multiple places, for instance again in the
<a href="https://wiki.archlinux.org/title/Bluetooth#Foxconn_/_Hon_Hai_/_Lite-On_Broadcom_device">same Bluetooth Arch Wiki page</a>.</p>
<h3 id="fail-4-hciconfig">Fail 4: hciconfig</h3>
<p>This is a variation of the above
<a href="https://unix.stackexchange.com/a/602739/109352">taken from here</a>. It
requires
<a href="https://aur.archlinux.org/packages/bluez-hciconfig/">bluez-hciconfig</a>
which is deprecated. There is also a related
<a href="https://aur.archlinux.org/packages/bluez-hcitool/">bluez-hcitool</a>. The
solution suggests the following:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">hciconfig</span><span> hci0 down
</span><span style="color:#bf616a;">rmmod</span><span> btusb
</span><span style="color:#bf616a;">modprobe</span><span> btusb
</span><span style="color:#bf616a;">hciconfig</span><span> hci0 up
</span></code></pre>
<p>Did not work either.</p>
<h3 id="fail-5-tlp-usb-autosuspend">Fail 5: TLP USB autosuspend</h3>
<p>Some <a href="https://bugzilla.kernel.org/show_bug.cgi?id=203535">comments</a>
suggested that USB autosuspend might be the problem, as Bluetooth might be
connected internally over USB, which is common. Unfortunately, disabling the
<code>tlp</code> service does not change anything.
<a href="https://wiki.archlinux.org/title/TLP#hci0:_link_tx_timeout">Using a blacklist, or USB_DENYLIST</a>
as <code>tlp</code> calls it, has no effect.</p>
<h3 id="fail-6-btusb-enable-autosuspend-kernel-parameter">Fail 6: btusb.enable_autosuspend kernel parameter</h3>
<p>Another related solution suggestion that did not work for me is to use the
<code>btusb.enable_autosuspend=n</code> kernel parameter instead. Interesting read can
be found in this
<a href="https://unix.stackexchange.com/questions/645783/what-does-btusb-enable-autosuspend-n-really-do">StackExchange post</a>.</p>
<h3 id="fail-7-hid2hci">Fail 7: hid2hci</h3>
<p>The last obvious solution is to install
<a href="https://archlinux.org/packages/extra/x86_64/bluez-hid2hci/">bluez-hid2hci</a>
mentioned multiple times in the already cited Bluetooth Wiki page, for
instance
<a href="https://wiki.archlinux.org/title/Bluetooth#Logitech_Bluetooth_USB_Dongle">here</a>.
But this was more of a desperate move and I did not believe it would
change anything, which turned out to be the case anyway.</p>
<h2 id="the-solution">The solution</h2>
<p>Thankfully, I was able to find a solution to this painful issue in the end.
I used the older LTS kernel from the AUR package
<a href="https://aur.archlinux.org/packages/linux-lts54/">linux-lts54</a>, currently
sitting at version <code>5.4.155</code>, and let it compile overnight.</p>
<p>This kernel does not appear to do any harm to my workflow and gracefully
solves the Bluetooth mouse issue. With this kernel on the ThinkPad T470,
<code>dmesg</code> is entirely clear, without any obvious hiccups, so I might keep it
for a while and see how it goes. Hope that helps!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://bugzilla.kernel.org/show_bug.cgi?id=209745">https://bugzilla.kernel.org/show_bug.cgi?id=209745</a></li>
<li><a href="https://bbs.archlinux.org/viewtopic.php?id=259954">https://bbs.archlinux.org/viewtopic.php?id=259954</a></li>
<li><a href="https://www.reddit.com/r/pop_os/comments/lcknuj/bluetooth_keeps_disconnecting/">https://www.reddit.com/r/pop_os/comments/lcknuj/bluetooth_keeps_disconnecting/</a></li>
<li><a href="https://bugzilla.redhat.com/show_bug.cgi?id=1573562">https://bugzilla.redhat.com/show_bug.cgi?id=1573562</a></li>
<li><a href="https://www.reddit.com/r/Fedora/comments/qair1l/mxmaster_3_doesnt_reconnect_after_suspend_have_to/">https://www.reddit.com/r/Fedora/comments/qair1l/mxmaster_3_doesnt_reconnect_after_suspend_have_to/</a></li>
</ul>
I have finally configured DMARC today2021-10-14T00:00:00+00:002021-10-14T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/i-have-finally-configured-dmarc/<p>A while ago I was
<a href="/blog/white-hat-hacker-contacted-me/">contacted by a white-hat hacker</a>
with an inquiry about the security of one of my emails - specifically, that
my DMARC record was not set up.</p>
<p>I immediately contacted my email provider, since I did not know if the
whole thing was even legit, but I did not get a very satisfactory answer.
Poking around I found that my DKIM and SPF records were set, presumably
properly. I did not have too much information about the whole problem
domain back then.</p>
<p>Since I had no idea what to do exactly, I did a simple risk analysis. With
the SPF and DKIM records set properly, but without a proper DMARC record, an
attacker could possibly impersonate me via email, meaning they could send
an email that would appear to originate from me. Or so I still believe.</p>
<p>This was not that much of a threat, as I am no business and many businesses
are being run daily with possibly worse security problems, so I let it be.
My plan at that time was to switch a mail provider soon anyway, and I
thought the switch would change a thing or two, having the security related
changes in mind as well.</p>
<h2 id="using-txt-record-for-dmarc">Using TXT record for DMARC</h2>
<p>I was doing some cleaning (of files, obviously) and stumbled upon the files
left over from the inquiry. Since I had not changed the email provider yet,
the issue of the missing DMARC record was still there. I thought I would try
to fix it. In the end, it was not that difficult, but it took me some trial
and error. The simplest solution is to configure DNS to include a TXT
record for <code>_dmarc.domain.com</code> like this:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>v=DMARC1; p=quarantine; pct=100; rua=mailto:dmarc@domain.com
</span></code></pre>
<p>I won't go into details about what the above means today. Two other notable
settings I came across are:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>v=DMARC1; p=reject; sp=none; pct=100; ri=86400; rua=mailto:dmarc@domain.com
</span></code></pre>
<p>And with forensics enabled:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>v=DMARC1; p=reject; fo=1; rua=mailto:dmarc@domain.com; ruf=mailto:dmarc@domain.com
</span></code></pre>
<p>There are a few links down below that could definitely help explain what
is going on, in case you stumble upon this.</p>
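<p>A record like the ones above can be given a quick local sanity check before publishing it. A minimal sketch, using the example record string from above:</p>

```shell
record='v=DMARC1; p=quarantine; pct=100; rua=mailto:dmarc@domain.com'
# The record must start with the version tag:
echo "$record" | grep -q '^v=DMARC1' || echo "missing version tag"
# And it must contain one of the valid policies:
echo "$record" | grep -Eq 'p=(none|quarantine|reject)' && echo "policy OK"
```

<p>Once the record is live, it can be queried with <code>dig +short TXT _dmarc.domain.com</code> to check what the world actually sees.</p>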
<h2 id="closing-words">Closing words</h2>
<p>I might need to tweak the DMARC a little bit in the future, because in
technology everything seems to be constantly evolving, but for now my mails
are coming through and the DMARC seems to be set up correctly. There is
however a multitude of options available, and if misconfigured, they could
backfire and result in all emails being rejected. I wish I knew of some
tool that would work like unit tests but for email, simply to make
sure everything required is still working after some changes. This feeling
from programming is very addictive.</p>
<p>As for what the white-hat hacker did, hopefully they did not take revenge
for not being paid for the disclosure. I explained that I am not a
business, so I have no revenue stream. In any case, I have not found any
suspicious activity whatsoever regarding the issue, but that could change
now that I have reporting (a feature of DMARC) set up.</p>
<p>But I must admit that what they did was quite inspiring, so to say. I
believe they scraped the web for email addresses, and instead of just
sending plain spam to the harvested addresses, they ran automated checks on
them, and the addresses that did not pass were sent a personalized email
with the security disclosure. Nice.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://sendgrid.com/blog/what-is-dmarc/">https://sendgrid.com/blog/what-is-dmarc/</a></li>
<li><a href="https://easydmarc.com/tools/dmarc-record-generator/">https://easydmarc.com/tools/dmarc-record-generator/</a></li>
<li><a href="https://mxtoolbox.com/SuperTool.aspx">https://mxtoolbox.com/SuperTool.aspx</a></li>
</ul>
Solutions to caffeine starting at random2021-10-13T00:00:00+00:002021-10-13T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/solution-to-caffeine-starting-at-random/<p>I have noticed that my
<a href="https://github.com/caffeine-ng/caffeine-ng">caffeine-ng</a> in the tray is
seemingly enabling/activating at random, even though I had virtually
nothing running and I had no apps listed explicitly in the Preferences
dialog:</p>
<p><img src="https://peterbabic.dev/blog/solution-to-caffeine-starting-at-random/caffeine-preferences-empty.png" alt="The caffeine Preferences dialog without any explicit apps" /></p>
<p>After running <code>caffeine</code> from the command line, I could see the following:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>INFO:caffeine.core:Audio playback detected (sd_dummy). Inhibiting.
</span><span>INFO:caffeine.core:GnomeInhibitor is applicable, state: True
</span><span>INFO:caffeine.core:XorgInhibitor is applicable, state: True
</span><span>INFO:caffeine.core:XdgScreenSaverInhibitor is applicable, state: True
</span><span>server does not have extension for -dpms option
</span><span>INFO:caffeine.core:DpmsInhibitor is applicable, state: True
</span><span>INFO:caffeine.core:
</span></code></pre>
<p>The first line might be the culprit of the problem. By searching I've found
that <code>sd_dummy</code> is related to the
<a href="https://archlinux.org/packages/extra/x86_64/speech-dispatcher/">speech-dispatcher</a>
package. On my system it was installed as a dependency of
<a href="https://archlinux.org/packages/extra/any/orca/">orca</a>, which I do not use.</p>
<h2 id="solution-1-user-only-autospawn-disable">Solution 1: User-only autospawn disable</h2>
<p>Create a local configuration directory, in case it is not created yet:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">mkdir -p ~</span><span>/.config/speech-dispatcher
</span></code></pre>
<p>Add the directive there:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#96b5b4;">echo </span><span>"</span><span style="color:#a3be8c;">DisableAutoSpawn</span><span>" >> </span><span style="color:#bf616a;">~</span><span>/.config/speech-dispatcher/speechd.conf
</span></code></pre>
<p>This might be the least invasive solution.</p>
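<p>To double-check that the directive landed where speech-dispatcher will
read it, the two steps above can be replayed against a scratch directory,
as in this sketch (substitute <code>~/.config/speech-dispatcher</code> for real
use):</p>

```shell
# Replay of the steps above against a throwaway directory, safe to try.
cfg="$(mktemp -d)/speech-dispatcher"
mkdir -p "$cfg"
echo "DisableAutoSpawn" >> "$cfg/speechd.conf"
# The directive should now be present exactly once:
grep -c '^DisableAutoSpawn' "$cfg/speechd.conf"
```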
<h2 id="solution-2-system-wide-autospawn-disable">Solution 2: System-wide autospawn disable</h2>
<p>This is a variation of the above: uncomment the <code>DisableAutoSpawn</code>
directive in <code>/etc/speech-dispatcher/speechd.conf</code>, or add it there if it
is not present.</p>
<h2 id="solution-3-uninstall-speech-dispatcher">Solution 3: Uninstall speech-dispatcher</h2>
<p>The most straightforward solution is to just remove <code>speech-dispatcher</code>
package from the system:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -Rnc</span><span> speech-dispatcher
</span></code></pre>
<p>The above also took <code>orca</code> with it, but your mileage may vary. Hope it
helps!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://github.com/caffeine-ng/caffeine-ng/issues/39#issuecomment-797306554">https://github.com/caffeine-ng/caffeine-ng/issues/39#issuecomment-797306554</a></li>
<li><a href="https://github.com/caffeine-ng/caffeine-ng/issues/59">https://github.com/caffeine-ng/caffeine-ng/issues/59</a></li>
<li><a href="https://bbs.archlinux.org/viewtopic.php?pid=1689487#p1689487">https://bbs.archlinux.org/viewtopic.php?pid=1689487#p1689487</a></li>
<li><a href="https://aur.archlinux.org/packages/caffeine-ng/Source#comment-796034">https://aur.archlinux.org/packages/caffeine-ng/Source#comment-796034</a></li>
</ul>
Using electronic ID on Arch in Slovakia2021-10-12T00:00:00+00:002021-10-12T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/using-electronic-id-on-arch-in-slovakia/<p>Some months ago I wrote an
<a href="/blog/gnupg-security-token-arrived/#t470-smartcard-interface">article about the smartcard</a>
and showed a possible way to initialize communication with it on Arch Linux
on a notebook equipped with a smartcard reader, in my case a trusty T470.</p>
<p>The basis for making the smartcard reader work is to install the required
packages:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> pcsclite ccid
</span></code></pre>
<p>And then enable the <code>pcscd</code> service:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> systemctl enable pcscd.service</span><span style="color:#bf616a;"> --now
</span></code></pre>
<p>So far so good.</p>
<h2 id="id-card-as-a-smartcard">ID card as a smartcard</h2>
<p>Now, in Slovakia, digitization has finally picked up some speed, and it
turns out many bureaucratic obstacles can be removed by using the
electronic ID, or eID as it is called. It is an ID card with a built-in
smartcard for cryptographic purposes, for instance to produce an official
electronic signature accepted in administrative tasks in Slovakia.</p>
<p>Tasks involving an electronic signature should also save a considerable
amount of time by removing the need to visit the bureau building
physically, and as a bonus, most administrative tasks done electronically
carry no processing fee at all! There seems to be no reason not to use it,
at least on paper.</p>
<h2 id="will-it-work-on-linux">Will it work on Linux?</h2>
<p>The sad reality might yet reveal itself and blur my romantic image of
never needing to run from one clerk to another to get a stamp once I
actually start using the service. But what surprised me the most is the
fact that there is an actual official software package for Linux for this
feature! I was expecting Windows-only, as is the norm.</p>
<p>The officially supported Linux distributions are, however, Debian, Ubuntu
and Mint. My first take was to use
<a href="https://github.com/helixarch/debtap">debtap</a> (available in
<a href="https://aur.archlinux.org/packages/debtap/">AUR</a>). After some meddling
with it, another surprise found its way to delight me.</p>
<h2 id="what-about-arch">What about Arch?</h2>
<p>The application I needed is also already available in AUR as
<a href="https://aur.archlinux.org/packages/eidklient/">eidklient</a>. I could not
find an email address of the author, Fedor Piecka, otherwise I would send
him an acknowledgment. The package worked like a charm.</p>
<p>The sad part is that my ID card has an older cryptographic chip inside
that is not compatible with the new standards, so I had to go to the bureau
and ask for a newer one anyway. But there is no point in keeping devices
that rely on already broken cryptography around, so this is a good thing in
the long run. I will post an update once I get the new card. Stay tuned!</p>
Upgrading Gitea to 1.152021-10-10T00:00:00+00:002021-10-10T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/upgrading-gitea-to-1-15/<p>The <a href="/blog/cleaning-mastodon-media-attachments/">story from yesterday</a>
actually started with trying to upgrade Gitea from the 1.14 branch I was
running, precisely 1.14.7 at the time of writing, to the 1.15 branch, or
more precisely again, to version 1.15.4.</p>
<p>I postponed the upgrade to 1.15 because upgrading is never a totally safe
operation, even though Gitea seems to follow semantic versioning, or
<a href="https://semver.org/">semver</a> for short, in which a change in the middle
number is a MINOR upgrade, defined as providing new functionality in a
backward-compatible manner. It is not entirely safe even when just the
third part of a semver version changes, which should contain only bug
fixes, though the risk is lower there.</p>
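<p>The distinction can be made mechanical. A rough sketch of classifying an
upgrade by the first semver component that differs (the version strings are
the ones from this post):</p>

```shell
# Classify an upgrade by the first differing semver component.
old=1.14.7
new=1.15.4
for i in 1 2 3; do
  o=$(echo "$old" | cut -d. -f$i)
  n=$(echo "$new" | cut -d. -f$i)
  if [ "$o" != "$n" ]; then
    case $i in
      1) echo "MAJOR upgrade" ;;
      2) echo "MINOR upgrade" ;;
      3) echo "PATCH upgrade" ;;
    esac
    break
  fi
done
```

<p>For 1.14.7 to 1.15.4 this reports a MINOR upgrade, the case where extra
caution (and a backup) pays off.</p>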
<p>The risk might be higher here however, since the community behind Gitea
provides backports of fixes to both branches mentioned above - the 1.14 and
1.15, meaning both branches differ in at least one major way that could
potentially break the setup.</p>
<p>That's why I was reluctant to do the upgrade sooner, as I was not in
pressing need of the new features implemented in the 1.15 branch, although
there are definitely a few of them I want to try out. On the other hand, I
needed a stable place to push my commits to during the development of the
previous two months.</p>
<h2 id="upgrading-anything-needs-time-and-backups">Upgrading anything needs time (and backups!)</h2>
<p>Although my VPS provider offers snapshots, they are not automatic, which
is sad. But even when snapshots are automatic, they may be taken only once
a week. That is enough to get back to business after some really serious
hiccup, but it is not enough to ensure that the most business-critical data
is preserved.</p>
<p>I prefer 12 or 24 hour incremental backups, for which I use either good
old <code>rsync</code> or <a href="https://restic.net/">restic</a> most of the time. I decided to do
a tagged <code>restic</code> backup and, at the same time, a <code>gitea dump</code>, just to be
safe. I am running Gitea as a Docker container, so the dump starts with a
shell inside it:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">docker</span><span> exec</span><span style="color:#bf616a;"> -it --user</span><span> git gitea-server-1 /bin/bash
</span></code></pre>
<p>Now inside the container navigate to the <code>gitea</code> executable:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#96b5b4;">cd</span><span> /app/gitea
</span></code></pre>
<p>Take care, the following operation might consume a lot of storage:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">gitea</span><span> dump</span><span style="color:#bf616a;"> --file</span><span> gitea-dump.zip
</span><span style="color:#96b5b4;">exit
</span></code></pre>
<p>Copy the backup outside of the container and remove it from inside:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">docker</span><span> cp gitea-server-1:/app/gitea/gitea-dump.zip .
</span><span style="color:#bf616a;">docker</span><span> exec</span><span style="color:#bf616a;"> -it</span><span> gitea-server-1 rm /app/gitea/gitea-dump.zip
</span></code></pre>
<p>Now the data should be recoverable if the damage occurs during (or right
after) the upgrade.</p>
<h2 id="upgrading">Upgrading</h2>
<p>Upgrading the Gitea is very easy, in my case it is just changing the
version inside <code>docker-compose.yml</code> and running the following:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> docker-compose up</span><span style="color:#bf616a;"> -d --build
</span></code></pre>
<p>The command ran without any problems, and the upgraded Gitea showed no
issues after a few hours of usage. Do not forget to delete the dumps before
the <code>restic</code> cron job picks them up!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://docs.gitea.io/en-us/backup-and-restore/#using-docker-dump">https://docs.gitea.io/en-us/backup-and-restore/#using-docker-dump</a></li>
<li><a href="https://pages.charlesreid1.com/d-gitea/">https://pages.charlesreid1.com/d-gitea/</a></li>
</ul>
Cleaning mastodon media attachments2021-10-09T00:00:00+00:002021-10-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/cleaning-mastodon-media-attachments/<p>Being too occupied with work for the previous two months left me with
absolutely no time for server maintenance, especially the kind of
maintenance that needs some thought or research before being done
properly.</p>
<p>One of the things that got neglected this way is the VPS free-space
monitoring. If some application eats up too much space, it can lead to
problems. I mean, with most VPS providers, additional space is usually just
a few clicks away, but unless it is a business-critical decision, it is
better avoided, especially when the extra storage is needed for data that
is potentially worthless, i.e. cache data.</p>
<h2 id="checking-the-server">Checking the server</h2>
<p>With some spare time today, I checked the server where my
<a href="https://mastodon.peterbabic.dev/">Mastodon instance</a> resides and found
out that the mastodon folder was taking up a whopping 92 GB of storage! I
almost fell off my chair. I had set it up just 4 months earlier. How could
it grow so large? I have not even been using it that much since. When I was
running Pleroma before, it managed to grow to around 16 GB over eight
months, so clearly something strange was happening.</p>
<p>Tracking down the bulk of the size led me to the folder
<code>public/system/cache/attachments</code>, weighing 65 GB. Reading further, I was
led to believe that the server caches all the federated media that the
users on the instance (in this case just me) follow. It appears the server
keeps these media attachments cached indefinitely, which causes this
problem.</p>
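<p>For anyone hunting the same kind of disk hog, <code>du</code> pointed at the
instance directory does the job. A sketch, demonstrated on a throwaway
tree standing in for <code>public/system</code> so it is safe to run anywhere; on a
server, point <code>du</code> at the real path instead:</p>

```shell
# Build a throwaway stand-in for mastodon's public/system tree...
root=$(mktemp -d)
mkdir -p "$root/cache/attachments" "$root/accounts"
dd if=/dev/zero of="$root/cache/attachments/blob" bs=1024 count=64 2>/dev/null
# ...and let du rank the first-level subdirectories by size.
du -sk "$root"/* | sort -n | tail -n 1 | cut -f2
```

<p>The last line printed is the heaviest subdirectory; here it is the
<code>cache</code> folder, just as it was on my server.</p>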
<h2 id="tootctl-to-the-rescue">Tootctl to the rescue!</h2>
<p>The simplest solution I have found is to simply remove the attachments via
the inbuilt <code>tootctl</code> command:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">tootctl</span><span> media remove
</span></code></pre>
<p>To avoid re-downloading every media attachment, which keeps the user
experience snappier and saves some network bandwidth, we can prune only
media attachments older than, let's say, a week:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">tootctl</span><span> media remove</span><span style="color:#bf616a;"> --days</span><span>=7
</span></code></pre>
<p>I am running my Mastodon instance as a Docker container, so the command
needs a little more tweaking:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">docker</span><span> exec</span><span style="color:#bf616a;"> -it</span><span> mastodon_web_1 tootctl media remove</span><span style="color:#bf616a;"> --days</span><span>=7
</span></code></pre>
<p>On your setup, the <code>mastodon_web_1</code> could be something different, consult
<code>docker ps</code> for instance. The command above freed up over 60 GB on my
machine without any obvious errors.</p>
<h2 id="bonus-space">Bonus space</h2>
<p>There is more to prune potentially safely with <code>tootctl</code>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">docker</span><span> exec</span><span style="color:#bf616a;"> -it</span><span> mastodon_web_1 tootctl preview_cards remove</span><span style="color:#bf616a;"> --days</span><span>=7
</span></code></pre>
<p>The savings here were pretty tiny though, just over 1 GB. Note that both
of these tasks could be set up as cron jobs; one potential example is in
the links below.</p>
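<p>One hedged sketch of such a cron set-up (the container name and times are
from my setup and purely illustrative; note the <code>-it</code> flags are dropped,
since cron has no TTY):</p>

```shell
# Hypothetical crontab entries: prune remote media daily at 04:00
# and preview cards at 04:30, both keeping the last 7 days.
0 4 * * *  docker exec mastodon_web_1 tootctl media remove --days=7
30 4 * * * docker exec mastodon_web_1 tootctl preview_cards remove --days=7
```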
<h2 id="links">Links</h2>
<ul>
<li><a href="https://gist.github.com/ThomasLeister/aa1c500eedeff19551f3bb8238533854">https://gist.github.com/ThomasLeister/aa1c500eedeff19551f3bb8238533854</a></li>
<li><a href="https://github.com/mastodon/mastodon/issues/9567">https://github.com/mastodon/mastodon/issues/9567</a></li>
<li><a href="https://docs.joinmastodon.org/admin/tootctl/#media">https://docs.joinmastodon.org/admin/tootctl/#media</a></li>
</ul>
How to do polling in Svelte and InertiaJS2021-10-04T00:00:00+00:002021-10-04T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/how-to-do-polling-in-svelte/<p>Just a quick snippet about how to do polling the Svelte way.</p>
<pre data-lang="js" style="background-color:#2b303b;color:#c0c5ce;" class="language-js "><code class="language-js" data-lang="js"><span style="color:#b48ead;">import </span><span>{ </span><span style="color:#bf616a;">onDestroy </span><span>} </span><span style="color:#b48ead;">from </span><span>"</span><span style="color:#a3be8c;">svelte</span><span>"
</span><span>
</span><span style="color:#b48ead;">let </span><span style="color:#bf616a;">interval
</span><span>
</span><span style="color:#b48ead;">const </span><span style="color:#8fa1b3;">poll </span><span>= () </span><span style="color:#b48ead;">=> </span><span>{
</span><span> </span><span style="color:#96b5b4;">clearTimeout</span><span>(</span><span style="color:#bf616a;">interval</span><span>)
</span><span> </span><span style="color:#bf616a;">Inertia</span><span>.</span><span style="color:#96b5b4;">reload</span><span>()
</span><span> </span><span style="color:#bf616a;">interval </span><span>= </span><span style="color:#96b5b4;">setTimeout</span><span>(</span><span style="color:#bf616a;">poll</span><span>, </span><span style="color:#d08770;">1000</span><span>)
</span><span>}
</span><span>
</span><span style="color:#8fa1b3;">poll</span><span>()
</span><span>
</span><span style="color:#8fa1b3;">onDestroy</span><span>(() </span><span style="color:#b48ead;">=> </span><span style="color:#96b5b4;">clearTimeout</span><span>(</span><span style="color:#bf616a;">interval</span><span>))
</span></code></pre>
<p>I mean, it could be done differently, but the basic idea persists across
the JavaScript ecosystem: make sure the calls to your function do not
compound. This is a sort of trap for young players.</p>
<p>I have been doing polling in Node as well, because the industrial protocol
Modbus works in a polling fashion, and it took me a while to understand how
to do it right.</p>
Test the app with real data quickly2021-10-01T00:00:00+00:002021-10-01T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/test-the-app-with-real-data-quicky/<p>Hopefully I am back to writing again. I could not find time for anything
other than the project I tried too hard to finish. This day hopefully marks
the sweet little feeling described as <em>project is finished</em>, which can be
tasted for a brief while until the next project takes over. The <em>project is
finished</em> feeling is arguably one of the few emotions a programmer is
capable of expressing to the outside world. Just kidding. I can definitely
laugh as well, especially in the most inappropriate situations.</p>
<p>Every project surfaces different unforeseen problems. Some lie in the
chosen tech stack and some are due to errors in the data. With any
data-manipulation application, the data must first be loaded into memory.
To do so, data structures have to be created to accommodate the data. This
works well until it doesn't, which means until there is a discrepancy in
the data, or in other words, when the data does not meet the structure's
expectations.</p>
<p>In my situation, one small excess relation among the entities involved led
to so much redesigning and rewriting that I am really glad it is over. When
the customer confirmed for themselves that this data structure is in fact
part of their database, they were surprised in a way resembling "this
should not even be possible". What's worse, now that they know about the
problem, they are likely already working on changing the data to meet the
expected criteria. Such a change would remove the entity relation that
caused the delays in the first place, effectively rendering all the
features implemented to work around the discrepancy meaningless.</p>
<p>In the end, it is all my fault for not examining the data thoroughly at
the beginning, but what is the right thing to do? Obsessing over the data
and procrastinating on the product? I did a brief data examination and went
on to coding. The project was meant as a proof-of-concept (PoC); I could
have found that the project's main goal was not achievable the imagined
way, rendering the error in the data irrelevant long before it would be
found. I counted on my ability to mold the code to import the provided data
at a certain point in the development cycle, and exactly at that point I
found the problem, which was after the PoC was definitely confirmed, but
not before some code had to be thrown away.</p>
<p>We are now spoiled with rich features for fake-data generation that allow
us to pre-fill an application with diverse data, to test as many edge cases
as possible before going live. Yet apparently there is a flip side: because
the generated data is very static, there are generally no unexpected
relations among its entities. The final takeaway from this story is not to
rely too much on data generated via Faker or a similar tool, and to try to
use the real data as soon as possible. It might prevent some headaches.</p>
Testing svelte-dnd-action with Cypress2021-08-17T00:00:00+00:002021-08-17T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/testing-svelte-dnd-action-with-cypress/<p>Using Drag & Drop is probably easier than ever, thanks to the virtually
unlimited supply of new front-end libraries appearing every day. Making
Drag & Drop work in the browser, be it for sortable trello-like boards or
for file uploads, is thus becoming a matter of importing a module and
writing at most a few lines, maybe sprinkling a configuration option here
and there.</p>
<p>Doing automated testing on this stuff, however, is still not quite there
yet, I would say, as I have spent a day trying to figure out how to write a
Cypress test snippet for the Drag & Drop functionality offered by the
awesome
<a href="https://github.com/isaacHagoel/svelte-dnd-action">svelte-dnd-action</a>
library.</p>
<p>After sifting through dozens of StackOverflow posts and even trying every
solution in
<a href="https://stackoverflow.com/a/55320650/1972509">this lengthy thread</a>, I
finally found a possible way forward in
<a href="https://stackoverflow.com/a/55320650/1972509">this SO answer</a>. The answer
does not provide a full solution, but rather a hint to trigger the
<code>mousemove</code> event twice in a row. Here is a Cypress test snippet that
works for me:</p>
<pre data-lang="javascript" style="background-color:#2b303b;color:#c0c5ce;" class="language-javascript "><code class="language-javascript" data-lang="javascript"><span style="color:#b48ead;">const </span><span style="color:#bf616a;">clientX </span><span>= </span><span style="color:#d08770;">300
</span><span style="color:#b48ead;">const </span><span style="color:#bf616a;">clientY </span><span>= </span><span style="color:#d08770;">500
</span><span style="color:#b48ead;">const </span><span style="color:#bf616a;">force </span><span>= </span><span style="color:#d08770;">true
</span><span>
</span><span style="color:#bf616a;">cy</span><span>.</span><span style="color:#96b5b4;">get</span><span>("</span><span style="color:#a3be8c;">[data-cy=draggable]</span><span>")
</span><span> .</span><span style="color:#8fa1b3;">trigger</span><span>("</span><span style="color:#a3be8c;">mousedown</span><span>")
</span><span> .</span><span style="color:#8fa1b3;">trigger</span><span>("</span><span style="color:#a3be8c;">mousemove</span><span>", { </span><span style="color:#bf616a;">clientX</span><span>, </span><span style="color:#bf616a;">clientY</span><span>, </span><span style="color:#bf616a;">force </span><span>})
</span><span> .</span><span style="color:#8fa1b3;">trigger</span><span>("</span><span style="color:#a3be8c;">mousemove</span><span>", { </span><span style="color:#bf616a;">clientX</span><span>, </span><span style="color:#bf616a;">clientY</span><span>, </span><span style="color:#bf616a;">force </span><span>})
</span><span> .</span><span style="color:#8fa1b3;">wait</span><span>(</span><span style="color:#d08770;">1</span><span>)
</span><span> .</span><span style="color:#8fa1b3;">trigger</span><span>("</span><span style="color:#a3be8c;">mouseup</span><span>", { </span><span style="color:#bf616a;">force </span><span>})
</span></code></pre>
<p>It also did not work for me when I omitted the <code>wait(1)</code>, for reasons I
definitely do not comprehend right now. Also, make sure not to use Svelte
version <code>3.38.3</code> for this, due to a
<a href="https://github.com/isaacHagoel/svelte-dnd-action/issues/304#issuecomment-881090814">bug</a>.</p>
Test preserveScroll in InertiaJS with Cypress2021-08-16T00:00:00+00:002021-08-16T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/test-preservescroll-in-inertiajs-with-cypress/<p>A short snippet that makes it possible to test whether the
<a href="https://inertiajs.com/scroll-management#scroll-preservation">preserveScroll feature</a>
is enabled in InertiaJS. It can be used as part of a Test-Driven
Development (TDD) process. The snippet can probably be adapted for other
scroll-related tests, but it is especially geared towards the InertiaJS
feature called
<a href="https://inertiajs.com/scroll-management#scroll-regions">Scroll Regions</a>,
used like this:</p>
<pre data-lang="html" style="background-color:#2b303b;color:#c0c5ce;" class="language-html "><code class="language-html" data-lang="html"><span><</span><span style="color:#bf616a;">div </span><span style="color:#d08770;">class</span><span>="</span><span style="color:#a3be8c;">overflow-y-auto</span><span>" </span><span style="color:#d08770;">scroll-region</span><span>>
</span><span> </span><span style="color:#65737e;"><!-- Your page content -->
</span><span> </span><span style="color:#65737e;"><!-- ... -->
</span><span> <</span><span style="color:#bf616a;">div </span><span style="color:#d08770;">data-cy</span><span>="</span><span style="color:#a3be8c;">an-element-below</span><span>" />
</span><span></</span><span style="color:#bf616a;">div</span><span>>
</span></code></pre>
<p>Now create a scroll-preserving HTML link in InertiaJS, or create an
InertiaJS request with the same property in JavaScript:</p>
<pre data-lang="javascript" style="background-color:#2b303b;color:#c0c5ce;" class="language-javascript "><code class="language-javascript" data-lang="javascript"><span style="color:#bf616a;">Inertia</span><span>.</span><span style="color:#96b5b4;">get</span><span>(</span><span style="color:#8fa1b3;">route</span><span>("</span><span style="color:#a3be8c;">post.show</span><span>"), { </span><span style="color:#bf616a;">post </span><span>}, { preserveScroll: </span><span style="color:#d08770;">true </span><span>})
</span></code></pre>
<p>Defining a div with a <code>scroll-region</code> attribute on it makes the common
method of testing <code>window.scrollY</code> unusable, as it will always report 0.
Instead, <code>scrollTop</code> on the element itself should be observed, like
this:</p>
<pre data-lang="javascript" style="background-color:#2b303b;color:#c0c5ce;" class="language-javascript "><code class="language-javascript" data-lang="javascript"><span style="color:#bf616a;">cy</span><span>.</span><span style="color:#96b5b4;">get</span><span>("</span><span style="color:#a3be8c;">[data-cy=an-element-below]</span><span>").</span><span style="color:#96b5b4;">scrollIntoView</span><span>().</span><span style="color:#8fa1b3;">should</span><span>("</span><span style="color:#a3be8c;">be.visible</span><span>")
</span><span style="color:#bf616a;">cy</span><span>.</span><span style="color:#96b5b4;">get</span><span>("</span><span style="color:#a3be8c;">[scroll-region]</span><span>").</span><span style="color:#8fa1b3;">invoke</span><span>("</span><span style="color:#a3be8c;">scrollTop</span><span>").</span><span style="color:#8fa1b3;">should</span><span>("</span><span style="color:#a3be8c;">not.eq</span><span>", </span><span style="color:#d08770;">0</span><span>)
</span></code></pre>
<p>A test like this makes sure that if you forget or accidentally remove the
<code>preserveScroll: true</code> from your templates, Cypress will let you know.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://github.com/inertiajs/inertia/blob/dd5902978fa457fd4f065812c747c9c743fcafc8/packages/inertia-vue/tests/cypress/integration/manual-visits.test.js#L835">https://github.com/inertiajs/inertia/blob/dd5902978fa457fd4f065812c747c9c743fcafc8/packages/inertia-vue/tests/cypress/integration/manual-visits.test.js#L835</a></li>
<li><a href="https://github.com/inertiajs/inertia/blob/dd5902978fa457fd4f065812c747c9c743fcafc8/packages/inertia-vue/tests/app/Layouts/WithScrollRegion.vue#L29">https://github.com/inertiajs/inertia/blob/dd5902978fa457fd4f065812c747c9c743fcafc8/packages/inertia-vue/tests/app/Layouts/WithScrollRegion.vue#L29</a></li>
</ul>
Test if a command was scheduled in Laravel 82021-08-14T00:00:00+00:002021-08-14T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/test-if-command-was-scheduled-in-laravel-8/<p>Testing whether a command was actually scheduled might be a controversial
question. Should we test that the scheduler itself is working and running
tasks on time? Obviously not, as it is provided by the framework, and we can even see that
<a href="https://github.com/laravel/framework/blob/277c2fbd0cebd2cb194807654d870f4040e288c0/tests/Console/ConsoleEventSchedulerTest.php">the tests are present</a>
and
<a href="https://github.com/laravel/framework/blob/277c2fbd0cebd2cb194807654d870f4040e288c0/tests/Integration/Console/ConsoleApplicationTest.php">integrate together</a>.</p>
<p>But there is a part of the app that can be tested: whether the command we
expect is actually placed into the scheduler. I found a
<a href="https://stackoverflow.com/a/45813748/1972509">nice solution</a> and modified
it for Laravel 8:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span style="color:#b48ead;">use </span><span>Illuminate\Console\Scheduling\</span><span style="color:#ebcb8b;">Schedule</span><span>;
</span><span style="color:#b48ead;">use </span><span>Illuminate\Console\Scheduling\</span><span style="color:#ebcb8b;">Event</span><span>;
</span><span>
</span><span style="color:#b48ead;">public function </span><span style="color:#8fa1b3;">testIsAvailableInTheScheduler</span><span>()
</span><span>{
</span><span> $</span><span style="color:#bf616a;">schedule </span><span>= </span><span style="color:#bf616a;">app</span><span>()-></span><span style="color:#bf616a;">make</span><span>(</span><span style="color:#ebcb8b;">Schedule</span><span>::</span><span style="color:#d08770;">class</span><span>);
</span><span>
</span><span> $</span><span style="color:#bf616a;">events </span><span>= </span><span style="color:#bf616a;">collect</span><span>($</span><span style="color:#bf616a;">schedule</span><span>-></span><span style="color:#bf616a;">events</span><span>())-></span><span style="color:#bf616a;">filter</span><span>(</span><span style="color:#b48ead;">function </span><span>(</span><span style="color:#ebcb8b;">Event </span><span>$</span><span style="color:#bf616a;">event</span><span>) {
</span><span> </span><span style="color:#b48ead;">return </span><span style="color:#96b5b4;">stripos</span><span>($</span><span style="color:#bf616a;">event</span><span>-></span><span style="color:#bf616a;">description</span><span>, '</span><span style="color:#a3be8c;">PruneLogs</span><span>');
</span><span> });
</span><span>
</span><span> </span><span style="color:#b48ead;">if </span><span>($</span><span style="color:#bf616a;">events</span><span>-></span><span style="color:#bf616a;">count</span><span>() == </span><span style="color:#d08770;">0</span><span>) {
</span><span> $</span><span style="color:#bf616a;">this</span><span>-></span><span style="color:#bf616a;">fail</span><span>('</span><span style="color:#a3be8c;">No events found</span><span>');
</span><span> }
</span><span>
</span><span> $</span><span style="color:#bf616a;">events</span><span>-></span><span style="color:#bf616a;">each</span><span>(</span><span style="color:#b48ead;">function </span><span>(</span><span style="color:#ebcb8b;">Event </span><span>$</span><span style="color:#bf616a;">event</span><span>) {
</span><span> </span><span style="color:#65737e;">// Every minute
</span><span> $</span><span style="color:#bf616a;">this</span><span>-></span><span style="color:#bf616a;">assertEquals</span><span>('</span><span style="color:#a3be8c;">* * * * *</span><span>', $</span><span style="color:#bf616a;">event</span><span>-></span><span style="color:#bf616a;">expression</span><span>);
</span><span> });
</span><span>}
</span></code></pre>
<p>We also need to actually
<a href="https://laravel.com/docs/8.x/scheduling#defining-schedules">define</a> the
scheduled command in the <code>App\Console\Kernel</code> class:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span style="color:#b48ead;">protected function </span><span style="color:#8fa1b3;">schedule</span><span>(</span><span style="color:#ebcb8b;">Schedule </span><span>$</span><span style="color:#bf616a;">schedule</span><span>) {
</span><span> $</span><span style="color:#bf616a;">schedule</span><span>-></span><span style="color:#bf616a;">call</span><span>(</span><span style="color:#b48ead;">new </span><span style="color:#ebcb8b;">PruneLogs</span><span>)-></span><span style="color:#bf616a;">everyMinute</span><span>();
</span><span>}
</span></code></pre>
<p>For instance, this will call the
<a href="https://secure.php.net/manual/en/language.oop5.magic.php#object.invoke">invokable object</a>
named <code>PruneLogs</code> every minute, making the above test pass. Jobs can be
used the same way, simply by replacing <code>call</code> with <code>job</code>.</p>
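<p>As a quick sketch of that job variant, assuming a queueable
<code>App\Jobs\PruneLogs</code> job class exists, the schedule definition would
become:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><?php
// In App\Console\Kernel -- dispatches the queued job on the same schedule
protected function schedule(Schedule $schedule) {
    $schedule->job(new PruneLogs)->everyMinute();
}
</code></pre>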
<p>I found the solution via Tinker: the command property was <code>null</code>, but
the description was what I was after:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>>>> app()->make(\Illuminate\Console\Scheduling\Schedule::class)->events();
</span><span>=> [
</span><span> Illuminate\Console\Scheduling\CallbackEvent {#3496
</span><span> +command: null,
</span><span> +expression: "* * * * *",
</span><span> +timezone: "UTC",
</span><span> +user: null,
</span><span> +environments: [],
</span><span> +evenInMaintenanceMode: false,
</span><span> +withoutOverlapping: false,
</span><span> +onOneServer: false,
</span><span> +expiresAt: 1440,
</span><span> +runInBackground: false,
</span><span> +output: "/dev/null",
</span><span> +shouldAppendOutput: false,
</span><span> +description: "App\Jobs\GenerateSuggestion",
</span><span> +mutex: Illuminate\Console\Scheduling\CacheEventMutex {#3498
</span><span> +cache: Illuminate\Cache\CacheManager {#282},
</span><span> +store: null,
</span><span> },
</span><span> +exitCode: null,
</span><span> },
</span><span> ]
</span></code></pre>
<p>Hope that helps!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://stackoverflow.com/a/68741194/1972509">https://stackoverflow.com/a/68741194/1972509</a></li>
</ul>
Dispatching jobs via commands in Laravel 82021-08-11T00:00:00+00:002021-08-11T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/dispatching-jobs-via-command-in-laravel-8/<p>What I often find myself needing when working with queues in Laravel is
a way to run jobs manually. Especially jobs that take no arguments,
usually maintenance tasks like pruning excess logs or generating
suggestions for users. These kinds of jobs are great candidates to be
<a href="https://laravel.com/docs/8.x/scheduling#defining-schedules">scheduled</a> as
they are more abstract and generally do not immediately affect the user
experience. This is in contrast to jobs that do processing: for instance,
when a user uploads a photo, they would like it to appear in the application
as soon as possible, ideally without any delay. If the photo appears in the
app 10 minutes later, the user might be long gone, never to return.</p>
<h2 id="make-a-job">Make a job</h2>
<p>Job boilerplate can be
<a href="https://laravel.com/docs/8.x/queues#generating-job-classes">generated</a>
using artisan:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">php</span><span> artisan make:job RefillFridge
</span></code></pre>
<p>Populate the <code>RefillFridge</code> job class with required commands as needed. For
a very quick and dirty testing I sometimes just create a route closure like
this:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span style="color:#b48ead;">use </span><span>App\Jobs\</span><span style="color:#ebcb8b;">RefillFridge</span><span>;
</span><span>
</span><span style="color:#ebcb8b;">Route</span><span>::</span><span style="color:#bf616a;">get</span><span>('</span><span style="color:#a3be8c;">refill</span><span>', </span><span style="color:#b48ead;">function </span><span>() {
</span><span> </span><span style="color:#bf616a;">dispatch</span><span>(</span><span style="color:#b48ead;">new </span><span style="color:#ebcb8b;">RefillFridge</span><span>());
</span><span>});
</span></code></pre>
<p>This is tempting, especially since it is super easy to just press F5 to
refresh the GET route in the browser and get the job dispatched. But routes
obviously are not meant to be abused in this way.</p>
<h2 id="use-tinker">Use Tinker</h2>
<p>Another way to get the Job done without any additional code is to use
Tinker.</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>php artisan tinker --ansi
</span><span>>>> dispatch(new App\Jobs\RefillFridge)
</span><span>=> Illuminate\Foundation\Bus\PendingDispatch {#3502}
</span></code></pre>
<p>This could be enough for many people, but it still requires some typing.
Tinker has a history that can be reverse-searched via CTRL-R, the same
way as in Bash or zsh for instance, but it gets cleared sometimes, and I
have not yet figured out exactly when, so it is good to keep this
in mind.</p>
<h2 id="making-a-command">Making a command</h2>
<p>Command boilerplate can be
<a href="https://laravel.com/docs/8.x/artisan#generating-commands">generated</a> via
artisan as well:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">php</span><span> artisan make:command RefillFridge
</span></code></pre>
<p>Now dispatch the <code>RefillFridge</code> job from the <code>RefillFridge</code> command. This is a
little tricky. Since both classes share the same name, although under
different namespaces, we either need to use fully qualified names or an alias
via the <code>as</code> keyword. I prefer the alias method as it keeps repetition low and
makes it more obvious at a glance at the top of the file what the class
interacts with. The minimal code is below:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span style="color:#b48ead;">namespace </span><span>App\Console\Commands;
</span><span>
</span><span style="color:#b48ead;">use </span><span>App\Jobs\</span><span style="color:#ebcb8b;">RefillFridge </span><span style="color:#b48ead;">as </span><span style="color:#ebcb8b;">RefillFridgeJob</span><span>;
</span><span style="color:#b48ead;">use </span><span>Illuminate\Console\</span><span style="color:#ebcb8b;">Command</span><span>;
</span><span style="color:#b48ead;">use </span><span>Illuminate\Contracts\Bus\</span><span style="color:#ebcb8b;">Dispatcher</span><span>;
</span><span>
</span><span style="color:#b48ead;">class </span><span style="color:#ebcb8b;">RefillFridge </span><span style="color:#b48ead;">extends </span><span style="color:#a3be8c;">Command </span><span style="color:#eff1f5;">{
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">protected </span><span>$</span><span style="color:#bf616a;">signature </span><span>= '</span><span style="color:#a3be8c;">refill:fridge</span><span>'</span><span style="color:#eff1f5;">;
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">protected </span><span>$</span><span style="color:#bf616a;">description </span><span>= '</span><span style="color:#a3be8c;">Puts fresh food into the fridge</span><span>'</span><span style="color:#eff1f5;">;
</span><span style="color:#eff1f5;">
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">public function </span><span style="color:#8fa1b3;">handle</span><span style="color:#eff1f5;">() {
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">return </span><span style="color:#bf616a;">app</span><span style="color:#eff1f5;">(</span><span style="color:#ebcb8b;">Dispatcher</span><span style="color:#eff1f5;">::</span><span style="color:#d08770;">class</span><span style="color:#eff1f5;">)-></span><span style="color:#bf616a;">dispatch</span><span style="color:#eff1f5;">(</span><span style="color:#b48ead;">new </span><span style="color:#ebcb8b;">RefillFridgeJob</span><span style="color:#eff1f5;">());
</span><span style="color:#eff1f5;"> }
</span><span style="color:#eff1f5;">}
</span></code></pre>
<p>As opposed to the previous methods, we also need access to the app's
<code>Dispatcher</code> in the command. This will use the default queue and the
default connection. Run the command as usual:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">php</span><span> artisan refill:fridge
</span></code></pre>
<p>The queue worker should pick this up in a moment.</p>
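<p>As a side note, if the job uses the standard <code>Dispatchable</code> trait
that <code>make:job</code> generates, the handle method can arguably be written
more tersely, without resolving the <code>Dispatcher</code> from the container.
A minimal sketch:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><?php
public function handle() {
    // The Dispatchable trait provides the static dispatch() helper
    RefillFridgeJob::dispatch();

    return 0;
}
</code></pre>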
<h2 id="links">Links</h2>
<ul>
<li><a href="https://github.com/laravel/tinker/issues/30#issuecomment-692332900">https://github.com/laravel/tinker/issues/30#issuecomment-692332900</a></li>
</ul>
Using keys with reduce in Laravel2021-08-08T00:00:00+00:002021-08-08T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/using-keys-with-reduce-in-laravel/<p>There are some quite important functions, commonly used
to transform data across languages, that modern approaches to solving
problems greatly prefer. Many things could in theory fit such a definition,
but right now I am talking about the map, reduce and filter functions, all
of which are increasingly preferred to plain while, for and foreach
loops, wherever applicable. Of course, Laravel offers its own flavor of these
functions that work on data in Collections. I will not detail how to use
them, as the official documentation for
<a href="https://laravel.com/docs/8.x/collections#method-filter">filter</a>,
<a href="https://laravel.com/docs/8.x/collections#method-map">map</a>, and
<a href="https://laravel.com/docs/8.x/collections#method-reduce">reduce</a>
is detailed enough. Instead, I want to focus here on a small
bit that is omitted in the docs: using keys with reduce.</p>
<h2 id="reduce">Reduce</h2>
<p>From the official docs mentioned above, the example for <code>reduce</code> looks like
this:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span>$</span><span style="color:#bf616a;">collection </span><span>= </span><span style="color:#bf616a;">collect</span><span>([</span><span style="color:#d08770;">1</span><span>, </span><span style="color:#d08770;">2</span><span>, </span><span style="color:#d08770;">3</span><span>]);
</span><span>
</span><span>$</span><span style="color:#bf616a;">total </span><span>= $</span><span style="color:#bf616a;">collection</span><span>-></span><span style="color:#bf616a;">reduce</span><span>(</span><span style="color:#b48ead;">function </span><span>($</span><span style="color:#bf616a;">carry</span><span>, $</span><span style="color:#bf616a;">item</span><span>) {
</span><span> </span><span style="color:#b48ead;">return </span><span>$</span><span style="color:#bf616a;">carry </span><span>+ $</span><span style="color:#bf616a;">item</span><span>;
</span><span>});
</span><span>
</span><span style="color:#65737e;">// 6
</span></code></pre>
<p>And in fact, summing values is one of the most common examples of
reduce usage there is. Basically the textbook example. You will find
similar examples for other languages too.</p>
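<p>One detail worth noting here: <code>reduce</code> also accepts an initial
value for <code>$carry</code> as its second argument, which otherwise defaults
to <code>null</code>:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><?php
$total = collect([1, 2, 3])->reduce(function ($carry, $item) {
    return $carry + $item;
}, 100); // start carrying from 100

// 106
</code></pre>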
<h2 id="reduce-with-an-arrow-function">Reduce with an arrow function</h2>
<p>For the sake of improvement, let's rewrite the above using an arrow function,
a feature added to PHP 7.4 as a more concise form of anonymous functions.
They are available in JavaScript as well and I love using them there, so
let's try:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span>$</span><span style="color:#bf616a;">collection </span><span>= </span><span style="color:#bf616a;">collect</span><span>([</span><span style="color:#d08770;">1</span><span>, </span><span style="color:#d08770;">2</span><span>, </span><span style="color:#d08770;">3</span><span>]);
</span><span>
</span><span>$</span><span style="color:#bf616a;">total </span><span>= $</span><span style="color:#bf616a;">collection</span><span>-></span><span style="color:#bf616a;">reduce</span><span>(</span><span style="color:#b48ead;">fn </span><span>($</span><span style="color:#bf616a;">carry</span><span>, $</span><span style="color:#bf616a;">item</span><span>) =>
</span><span>    $</span><span style="color:#bf616a;">carry </span><span>+ $</span><span style="color:#bf616a;">item</span><span>
</span><span>);
</span><span>
</span><span style="color:#65737e;">// 6
</span></code></pre>
<p>Saves a few keystrokes, too. By now, it should be clear even to young
Padawans that the <code>reduce</code> function accepts as its first argument a
callback function, which has two parameters, a <code>$carry</code> and the actual
<code>$item</code> being iterated, often referred to as a value. If we really
just want to sum values, this is all we need. What about situations
where it is not enough?</p>
<h2 id="reduce-with-keys">Reduce with keys</h2>
<p>Imagine we have a Collection of cities and we want to calculate the
total distance between them using <code>reduce</code> alone. This is a
little bit tricky, because a distance is a relation between two cities, so
we have to have a way to access both within the callback. Not being able to
reliably find documentation for this, I decided to quickly write it down, so
here it is:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span>$</span><span style="color:#bf616a;">cities </span><span>= </span><span style="color:#ebcb8b;">City</span><span>::</span><span style="color:#bf616a;">all</span><span>();
</span><span>
</span><span>$</span><span style="color:#bf616a;">total </span><span>= $</span><span style="color:#bf616a;">cities</span><span>-></span><span style="color:#bf616a;">reduce</span><span>(</span><span style="color:#b48ead;">function </span><span>($</span><span style="color:#bf616a;">carry</span><span>, $</span><span style="color:#bf616a;">city</span><span>, $</span><span style="color:#bf616a;">key</span><span>) </span><span style="color:#b48ead;">use </span><span>($</span><span style="color:#bf616a;">cities</span><span>) {
</span><span> $</span><span style="color:#bf616a;">next </span><span>= $</span><span style="color:#bf616a;">key </span><span>+ </span><span style="color:#d08770;">1</span><span>;
</span><span>
</span><span>    </span><span style="color:#b48ead;">if </span><span>(</span><span style="color:#96b5b4;">isset</span><span>($</span><span style="color:#bf616a;">cities</span><span>[$</span><span style="color:#bf616a;">next</span><span>])) {
</span><span> $</span><span style="color:#bf616a;">carry </span><span>+= $</span><span style="color:#bf616a;">city</span><span>-></span><span style="color:#bf616a;">distanceTo</span><span>($</span><span style="color:#bf616a;">cities</span><span>[$</span><span style="color:#bf616a;">next</span><span>]);
</span><span> }
</span><span>
</span><span> </span><span style="color:#b48ead;">return </span><span>$</span><span style="color:#bf616a;">carry</span><span>;
</span><span>});
</span></code></pre>
<p>The most important bit here is that the callback can actually take more
than two parameters, the third one being the <code>$key</code>. We also have to
make <code>$cities</code> available in the callback's scope via the <code>use</code> keyword and need to
check that the end of the collection has not been reached yet.</p>
<h2 id="show-me-them-arrows">Show me them arrows</h2>
<p>Arrow functions in JavaScript can have many statements. In PHP, only a
single expression per arrow function is permitted. Rewriting the above with
an arrow function is trickier, but possible.</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span>$</span><span style="color:#bf616a;">cities </span><span>= </span><span style="color:#ebcb8b;">City</span><span>::</span><span style="color:#bf616a;">all</span><span>();
</span><span>
</span><span>$</span><span style="color:#bf616a;">total </span><span>= $</span><span style="color:#bf616a;">cities</span><span>-></span><span style="color:#bf616a;">reduce</span><span>(</span><span style="color:#b48ead;">fn </span><span>($</span><span style="color:#bf616a;">carry</span><span>, $</span><span style="color:#bf616a;">city</span><span>, $</span><span style="color:#bf616a;">key</span><span>) =>
</span><span> </span><span style="color:#96b5b4;">isset</span><span>($</span><span style="color:#bf616a;">cities</span><span>[$</span><span style="color:#bf616a;">key </span><span>+ </span><span style="color:#d08770;">1</span><span>])
</span><span> ? $</span><span style="color:#bf616a;">carry </span><span>+= $</span><span style="color:#bf616a;">city</span><span>-></span><span style="color:#bf616a;">distanceTo</span><span>($</span><span style="color:#bf616a;">cities</span><span>[$</span><span style="color:#bf616a;">key </span><span>+ </span><span style="color:#d08770;">1</span><span>])
</span><span> : $</span><span style="color:#bf616a;">carry
</span><span>);
</span></code></pre>
<p>A ternary operator is used here. It is up to the reader to judge whether this is
an improvement or a hit to readability. Also, there seem to be many
ways people prefer to see the above code formatted, so it might even
look scary or ugly to some. With arrow functions, however, outside variables
are captured into the local scope automatically. Thus <code>$cities</code> is available without the
need for the <code>use</code> keyword.</p>
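<p>One caveat worth remembering: PHP arrow functions capture outer variables
by value, so assignments inside the function body never leak back out. A tiny
illustration:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><?php
$count = 0;

// $count is captured by value, not by reference
$increment = fn () => $count + 1;

echo $increment(); // 1
echo $count;       // still 0
</code></pre>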
<h2 id="better-to-split-up">Better to split up</h2>
<p>Using keys with the <code>reduce</code> function, a part of Laravel Collections, can be
useful in some situations. The documentation does not explicitly mention the
third parameter of the callback function and since, as demonstrated above,
the code that makes use of it is not that elegant, maybe it is omitted for
a good reason.</p>
<p>Is there another way? Well, as with anything programming related, the
answer is yes. The <code>$key</code> is actually just the second argument to the
<code>map</code> callback, and it is mentioned in the docs, go check it. In many situations,
using <code>map</code> with the keys in a similar fashion as above is better, as it
enables us to run <code>reduce</code> on the mapped values, for example the
distances. It requires two functions instead of a single concise one, but the
resulting code might be more explicit. See below:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span>$</span><span style="color:#bf616a;">cities </span><span>= </span><span style="color:#ebcb8b;">City</span><span>::</span><span style="color:#bf616a;">all</span><span>();
</span><span>
</span><span>$</span><span style="color:#bf616a;">distances </span><span>= $</span><span style="color:#bf616a;">cities</span><span>-></span><span style="color:#bf616a;">map</span><span>(</span><span style="color:#b48ead;">fn </span><span>($</span><span style="color:#bf616a;">city</span><span>, $</span><span style="color:#bf616a;">key</span><span>) =>
</span><span> </span><span style="color:#96b5b4;">isset</span><span>($</span><span style="color:#bf616a;">cities</span><span>[$</span><span style="color:#bf616a;">key </span><span>+ </span><span style="color:#d08770;">1</span><span>])
</span><span> ? $</span><span style="color:#bf616a;">city</span><span>-></span><span style="color:#bf616a;">distanceTo</span><span>($</span><span style="color:#bf616a;">cities</span><span>[$</span><span style="color:#bf616a;">key </span><span>+ </span><span style="color:#d08770;">1</span><span>])
</span><span> : </span><span style="color:#d08770;">0
</span><span>);
</span><span>
</span><span>$</span><span style="color:#bf616a;">total </span><span>= $</span><span style="color:#bf616a;">distances</span><span>-></span><span style="color:#bf616a;">reduce</span><span>(</span><span style="color:#b48ead;">fn </span><span>($</span><span style="color:#bf616a;">carry</span><span>, $</span><span style="color:#bf616a;">distance</span><span>) =>
</span><span> $</span><span style="color:#bf616a;">carry </span><span>+= $</span><span style="color:#bf616a;">distance
</span><span>);
</span></code></pre>
<p>This is how I like to write it with my current style. What would be your
preferred way?</p>
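<p>As a final aside, once the distances are mapped out, the closing
<code>reduce</code> could arguably be replaced by the Collection's own
<code>sum</code> helper, making the intent even more explicit:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><?php
$total = $distances->sum();
</code></pre>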
A short summer writing pause2021-08-06T00:00:00+00:002021-08-06T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/a-short-summer-writing-pause/<p>This past week I had a few days when I focused solely on work and family and
did not reserve any time for the blog. There were multiple reasons for
this, but the main one was that it is summer and there are almost no
restrictions whatsoever. I did some camping by a lake I had driven
past many times but never had the opportunity to enjoy. There was a
river delta forming a place resembling a beach. The water flowing in
comes from the mountains, so despite the high air temperature, the water was
absolutely refreshing, a stark contrast to the standing-water lakes
where I currently live.</p>
<p>Another reason for no post was that what I am doing right now (Laravel) is
usually documented reliably somewhere, so I did not bother to state the
obvious. I am also on a tight schedule with this project, so definitely no
time to lose. Wish me luck!</p>
Prevent push when skipping Cypress tests pt.22021-07-28T00:00:00+00:002021-07-28T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/prevent-push-when-skipping-cypress-tests-pt-2/<p>Just a quick update to
<a href="/blog/prevent-push-when-skipping-cypress-tests/">the article I wrote some time ago</a>
that could be considered part one on this topic. The problem outlined in
that article is basic. When developing Cypress tests, it is helpful to use
<code>.only()</code>,
<a href="https://docs.cypress.io/guides/core-concepts/writing-and-organizing-tests#Excluding-and-Including-Tests">a Cypress modifier to exclude other tests</a>,
to see results only for the single test being developed, allowing quick
iterations. But accidentally pushing it to the repository
<a href="https://github.com/cypress-io/cypress/issues/6536#issue-569342230">creates many unwanted problems</a>
for anyone involved.</p>
<p>The solution from that article, which I had been using for some time, is very
basic, yet probably not too portable. It was working for me, but sadly, I
recently did not have much luck trying to make it work with the newly released
Husky 7.0. I have since switched to the npm package called
<a href="https://www.npmjs.com/package/stop-only">stop-only</a> and cannot
complain.</p>
<h2 id="using-stop-only-with-husky-7-0">Using stop-only with Husky 7.0</h2>
<p>Setup Husky automatically:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">npx</span><span> husky-init && </span><span style="color:#bf616a;">npm</span><span> install
</span></code></pre>
<p>The above does multiple changes to your git repository:</p>
<ul>
<li>Installs Husky into dev dependencies.</li>
<li>Enables git hooks.</li>
<li>Adds a
<a href="https://docs.npmjs.com/cli/v7/using-npm/scripts#life-cycle-scripts">prepare script</a>
into <code>package.json</code>.</li>
<li>Bootstraps the <code>.husky/</code> folder where hooks reside.</li>
</ul>
<p>Remove the bootstrapped example <code>pre-commit</code> file:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">rm</span><span> .husky/pre-commit
</span></code></pre>
<p>Generate the <code>pre-push</code> Husky hook via command:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">npx</span><span> husky add .husky/pre-push "</span><span style="color:#a3be8c;">npx stop-only --folder cypress/integration</span><span>"
</span></code></pre>
<p>This will create the <code>.husky/pre-push</code> hook file with the following
contents:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#65737e;">#!/bin/sh
</span><span style="color:#96b5b4;">. </span><span>"$</span><span style="color:#a3be8c;">(</span><span style="color:#bf616a;">dirname </span><span>"$</span><span style="color:#bf616a;">0</span><span>"</span><span style="color:#a3be8c;">)/_/husky.sh</span><span>"
</span><span>
</span><span style="color:#bf616a;">npx</span><span> stop-only</span><span style="color:#bf616a;"> --folder</span><span> cypress/integration
</span></code></pre>
<p>Don't forget to start tracking the file:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> add .husky/pre-push
</span></code></pre>
<p>Next, install stop-only and Cypress itself, both as dev dependencies:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">npm</span><span> install</span><span style="color:#bf616a;"> --save-dev</span><span> stop-only cypress
</span></code></pre>
<p>And create an example test containing <code>.only()</code>, for instance in the base
Cypress test path of <code>cypress/integration/spec.js</code>:</p>
<pre data-lang="typescript" style="background-color:#2b303b;color:#c0c5ce;" class="language-typescript "><code class="language-typescript" data-lang="typescript"><span style="color:#65737e;">/// <</span><span style="color:#bf616a;">reference </span><span style="color:#d08770;">types</span><span>="</span><span style="color:#a3be8c;">cypress</span><span>" </span><span style="color:#65737e;">/>
</span><span style="color:#8fa1b3;">describe</span><span>("</span><span style="color:#a3be8c;">Simplest test should</span><span>", () </span><span style="color:#b48ead;">=> </span><span>{
</span><span> </span><span style="color:#bf616a;">it</span><span>.</span><span style="color:#8fa1b3;">only</span><span>("</span><span style="color:#a3be8c;">visit base URL</span><span>", () </span><span style="color:#b48ead;">=> </span><span>{
</span><span> </span><span style="color:#bf616a;">cy</span><span>.</span><span style="color:#8fa1b3;">visit</span><span>("</span><span style="color:#a3be8c;">/</span><span>")
</span><span> })
</span><span>})
</span></code></pre>
<p>Pushing to the remote repository is now prevented in an early and
spectacular fashion:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> git push
</span><span style="color:#bf616a;">Found</span><span> .only here 👎
</span><span style="color:#bf616a;">cypress/integration/spec.js:3:</span><span> it.only("</span><span style="color:#a3be8c;">visit base URL</span><span>", () => {
</span><span>husky - pre-push hook exited with code 1 (error)
</span></code></pre>
<p>The full example is available in the
<a href="https://github.com/peterbabic/sources-peterbabic.dev/tree/master/prevent-push-when-skipping-cypress-tests-pt-2">repository</a>.
Happy testing!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://github.com/bahmutov/stop-only#readme">https://github.com/bahmutov/stop-only#readme</a></li>
<li><a href="https://typicode.github.io/husky/">https://typicode.github.io/husky/</a></li>
</ul>
Convenient relationship factories in Laravel 82021-07-26T00:00:00+00:002021-07-26T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/convenient-relationship-factories-in-laravel-8/<p>One of the changes in Laravel 8 was
<a href="https://laravel.com/docs/8.x/upgrade#model-factories">the overhaul of the model factories</a>
which led to
<a href="https://laravel.com/docs/8.x/upgrade#seeder-factory-namespaces">factories being namespaced</a>.
Seeders are also affected in the same way, but this is a different topic
for now.</p>
<p>Now, I did not pay enough attention to grasp why such a change was introduced
or was even necessary, but it is at the very top of the list of the
<a href="https://laravel.com/docs/8.x/upgrade#high-impact-changes">high impact changes</a>,
so I decided to play along.</p>
<p>Just a few days into using it, I was searching for a way to efficiently
generate multiple records with a one-to-one relationship. Maybe I was
searching for the wrong keywords, or maybe the planets were simply not
aligned, but all the solutions I could find looked too complicated. That
was true until I stumbled upon
<a href="https://stackoverflow.com/a/66371100/1972509">this humble StackOverflow answer</a>,
where exactly the right solution was presented. Let's look at it.</p>
<h2 id="a-lair">A Lair</h2>
<p>For the lair, most of the files are absolutely bare bones and could be used
straight as Artisan generates them. Keep in mind that this is intended to
be a minimal (hopefully) working example. We could add a <code>hasOne</code> relation
here later, but it is not required for the actual example to work.</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span style="color:#b48ead;">namespace </span><span>App\Models;
</span><span>
</span><span style="color:#b48ead;">use </span><span>Illuminate\Database\Eloquent\</span><span style="color:#ebcb8b;">Model</span><span>;
</span><span>
</span><span style="color:#b48ead;">class </span><span style="color:#ebcb8b;">Lair </span><span style="color:#b48ead;">extends </span><span style="color:#a3be8c;">Model </span><span style="color:#eff1f5;">{
</span><span style="color:#eff1f5;">
</span><span style="color:#eff1f5;">}
</span></code></pre>
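<p>For illustration, the optional <code>hasOne</code> relation mentioned above could be sketched inside the Lair model like this (not required for the example to work, and the method name <code>dragon</code> is my own choice):</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>public function dragon() {
    // a Lair may house at most one Dragon
    return $this->hasOne(Dragon::class);
}
</span></code></pre>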
<p>A migration is also bare bones, but for a real-life lair, we would probably
add some columns, like the name of the mountain where it is located.</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span style="color:#b48ead;">use </span><span>Illuminate\Database\Migrations\</span><span style="color:#ebcb8b;">Migration</span><span>;
</span><span style="color:#b48ead;">use </span><span>Illuminate\Database\Schema\</span><span style="color:#ebcb8b;">Blueprint</span><span>;
</span><span style="color:#b48ead;">use </span><span>Illuminate\Support\Facades\</span><span style="color:#ebcb8b;">Schema</span><span>;
</span><span>
</span><span style="color:#b48ead;">class </span><span style="color:#ebcb8b;">CreateLairsTable </span><span style="color:#b48ead;">extends </span><span style="color:#a3be8c;">Migration </span><span style="color:#eff1f5;">{
</span><span style="color:#eff1f5;">
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">public function </span><span style="color:#8fa1b3;">up</span><span style="color:#eff1f5;">() {
</span><span style="color:#eff1f5;"> </span><span style="color:#ebcb8b;">Schema</span><span style="color:#eff1f5;">::</span><span style="color:#bf616a;">create</span><span style="color:#eff1f5;">(</span><span>'</span><span style="color:#a3be8c;">lairs</span><span>'</span><span style="color:#eff1f5;">, </span><span style="color:#b48ead;">function </span><span style="color:#eff1f5;">(</span><span style="color:#ebcb8b;">Blueprint </span><span>$</span><span style="color:#bf616a;">table</span><span style="color:#eff1f5;">) {
</span><span style="color:#eff1f5;"> </span><span>$</span><span style="color:#bf616a;">table</span><span style="color:#eff1f5;">-></span><span style="color:#bf616a;">id</span><span style="color:#eff1f5;">();
</span><span style="color:#eff1f5;"> </span><span>$</span><span style="color:#bf616a;">table</span><span style="color:#eff1f5;">-></span><span style="color:#bf616a;">timestamps</span><span style="color:#eff1f5;">();
</span><span style="color:#eff1f5;"> });
</span><span style="color:#eff1f5;"> }
</span><span style="color:#eff1f5;">
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">public function </span><span style="color:#8fa1b3;">down</span><span style="color:#eff1f5;">() {
</span><span style="color:#eff1f5;"> </span><span style="color:#ebcb8b;">Schema</span><span style="color:#eff1f5;">::</span><span style="color:#bf616a;">dropIfExists</span><span style="color:#eff1f5;">(</span><span>'</span><span style="color:#a3be8c;">lairs</span><span>'</span><span style="color:#eff1f5;">);
</span><span style="color:#eff1f5;"> }
</span><span style="color:#eff1f5;">}
</span></code></pre>
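<p>Should we want that real-life column, a single extra line inside the <code>up</code> method would do (a sketch; <code>mountain_name</code> is a hypothetical column name):</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>// inside Schema::create('lairs', ...)
$table->string('mountain_name');
</span></code></pre>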
<p>The same holds true for the factory, straight from the generator.
Since we did not add any specific columns, we do not need to fake any
values here.</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span style="color:#b48ead;">namespace </span><span>Database\Factories;
</span><span>
</span><span style="color:#b48ead;">use </span><span>Illuminate\Database\Eloquent\Factories\</span><span style="color:#ebcb8b;">Factory</span><span>;
</span><span style="color:#b48ead;">use </span><span>App\Models\</span><span style="color:#ebcb8b;">Lair</span><span>;
</span><span>
</span><span style="color:#b48ead;">class </span><span style="color:#ebcb8b;">LairFactory </span><span style="color:#b48ead;">extends </span><span style="color:#a3be8c;">Factory </span><span style="color:#eff1f5;">{
</span><span style="color:#eff1f5;">
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">protected </span><span>$</span><span style="color:#bf616a;">model </span><span>= </span><span style="color:#ebcb8b;">Lair</span><span style="color:#eff1f5;">::</span><span style="color:#d08770;">class</span><span style="color:#eff1f5;">;
</span><span style="color:#eff1f5;">
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">public function </span><span style="color:#8fa1b3;">definition</span><span style="color:#eff1f5;">() {
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">return </span><span style="color:#eff1f5;">[
</span><span style="color:#eff1f5;"> </span><span style="color:#65737e;">//
</span><span style="color:#eff1f5;"> ];
</span><span style="color:#eff1f5;"> }
</span><span style="color:#eff1f5;">}
</span></code></pre>
<h2 id="the-dragon">The Dragon</h2>
<p>Here comes the first important bit, a <code>lair_id</code> column marking a foreign
key.</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span style="color:#b48ead;">use </span><span>Illuminate\Database\Migrations\</span><span style="color:#ebcb8b;">Migration</span><span>;
</span><span style="color:#b48ead;">use </span><span>Illuminate\Database\Schema\</span><span style="color:#ebcb8b;">Blueprint</span><span>;
</span><span style="color:#b48ead;">use </span><span>Illuminate\Support\Facades\</span><span style="color:#ebcb8b;">Schema</span><span>;
</span><span>
</span><span style="color:#b48ead;">class </span><span style="color:#ebcb8b;">CreateDragonsTable </span><span style="color:#b48ead;">extends </span><span style="color:#a3be8c;">Migration </span><span style="color:#eff1f5;">{
</span><span style="color:#eff1f5;">
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">public function </span><span style="color:#8fa1b3;">up</span><span style="color:#eff1f5;">() {
</span><span style="color:#eff1f5;"> </span><span style="color:#ebcb8b;">Schema</span><span style="color:#eff1f5;">::</span><span style="color:#bf616a;">create</span><span style="color:#eff1f5;">(</span><span>'</span><span style="color:#a3be8c;">dragons</span><span>'</span><span style="color:#eff1f5;">, </span><span style="color:#b48ead;">function </span><span style="color:#eff1f5;">(</span><span style="color:#ebcb8b;">Blueprint </span><span>$</span><span style="color:#bf616a;">table</span><span style="color:#eff1f5;">) {
</span><span style="color:#eff1f5;"> </span><span>$</span><span style="color:#bf616a;">table</span><span style="color:#eff1f5;">-></span><span style="color:#bf616a;">id</span><span style="color:#eff1f5;">();
</span><span style="color:#eff1f5;"> </span><span>$</span><span style="color:#bf616a;">table</span><span style="color:#eff1f5;">-></span><span style="color:#bf616a;">foreignId</span><span style="color:#eff1f5;">(</span><span>'</span><span style="color:#a3be8c;">lair_id</span><span>'</span><span style="color:#eff1f5;">)-></span><span style="color:#bf616a;">constrained</span><span style="color:#eff1f5;">();
</span><span style="color:#eff1f5;"> </span><span>$</span><span style="color:#bf616a;">table</span><span style="color:#eff1f5;">-></span><span style="color:#bf616a;">timestamps</span><span style="color:#eff1f5;">();
</span><span style="color:#eff1f5;"> });
</span><span style="color:#eff1f5;"> }
</span><span style="color:#eff1f5;">
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">public function </span><span style="color:#8fa1b3;">down</span><span style="color:#eff1f5;">() {
</span><span style="color:#eff1f5;"> </span><span style="color:#ebcb8b;">Schema</span><span style="color:#eff1f5;">::</span><span style="color:#bf616a;">dropIfExists</span><span style="color:#eff1f5;">(</span><span>'</span><span style="color:#a3be8c;">dragons</span><span>'</span><span style="color:#eff1f5;">);
</span><span style="color:#eff1f5;"> }
</span><span style="color:#eff1f5;">}
</span></code></pre>
<p>A model for the Dragon contains a <code>belongsTo</code> relationship. Although the
example would work without this method, it is here to signify that there
could be many Lairs throughout the land, but should there exist any Dragon,
he has to be living in one of them.</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span style="color:#b48ead;">namespace </span><span>App\Models;
</span><span>
</span><span style="color:#b48ead;">use </span><span>Illuminate\Database\Eloquent\Factories\</span><span style="color:#ebcb8b;">HasFactory</span><span>;
</span><span style="color:#b48ead;">use </span><span>Illuminate\Database\Eloquent\</span><span style="color:#ebcb8b;">Model</span><span>;
</span><span style="color:#b48ead;">use </span><span>App\Models\</span><span style="color:#ebcb8b;">Lair</span><span>;
</span><span>
</span><span style="color:#b48ead;">class </span><span style="color:#ebcb8b;">Dragon </span><span style="color:#b48ead;">extends </span><span style="color:#a3be8c;">Model </span><span style="color:#eff1f5;">{
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">use </span><span style="color:#a3be8c;">HasFactory</span><span style="color:#eff1f5;">;
</span><span style="color:#eff1f5;">
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">public function </span><span style="color:#8fa1b3;">lair</span><span style="color:#eff1f5;">() {
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">return </span><span>$</span><span style="color:#bf616a;">this</span><span style="color:#eff1f5;">-></span><span style="color:#bf616a;">belongsTo</span><span style="color:#eff1f5;">(</span><span style="color:#ebcb8b;">Lair</span><span style="color:#eff1f5;">::</span><span style="color:#d08770;">class</span><span style="color:#eff1f5;">);
</span><span style="color:#eff1f5;"> }
</span><span style="color:#eff1f5;">}
</span></code></pre>
<p>And finally, the promised factory. Again, almost bare bones, with one extra
line:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span style="color:#b48ead;">namespace </span><span>Database\Factories;
</span><span>
</span><span style="color:#b48ead;">use </span><span>Illuminate\Database\Eloquent\Factories\</span><span style="color:#ebcb8b;">Factory</span><span>;
</span><span style="color:#b48ead;">use </span><span>App\Models\</span><span style="color:#ebcb8b;">Lair</span><span>;
</span><span style="color:#b48ead;">use </span><span>App\Models\</span><span style="color:#ebcb8b;">Dragon</span><span>;
</span><span>
</span><span style="color:#b48ead;">class </span><span style="color:#ebcb8b;">DragonFactory </span><span style="color:#b48ead;">extends </span><span style="color:#a3be8c;">Factory </span><span style="color:#eff1f5;">{
</span><span style="color:#eff1f5;">
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">protected </span><span>$</span><span style="color:#bf616a;">model </span><span>= </span><span style="color:#ebcb8b;">Dragon</span><span style="color:#eff1f5;">::</span><span style="color:#d08770;">class</span><span style="color:#eff1f5;">;
</span><span style="color:#eff1f5;">
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">public function </span><span style="color:#8fa1b3;">definition</span><span style="color:#eff1f5;">() {
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">return </span><span style="color:#eff1f5;">[
</span><span style="color:#eff1f5;"> </span><span>'</span><span style="color:#a3be8c;">lair_id</span><span>' => </span><span style="color:#ebcb8b;">Lair</span><span style="color:#eff1f5;">::</span><span style="color:#bf616a;">factory</span><span style="color:#eff1f5;">()
</span><span style="color:#eff1f5;"> ];
</span><span style="color:#eff1f5;"> }
</span><span style="color:#eff1f5;">}
</span></code></pre>
<p>The specific line is this one:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>'lair_id' => Lair::factory()
</span></code></pre>
<p>The result of the above is that now we can generate many Dragons at once
via <code>DragonFactory</code> like so:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span style="color:#b48ead;">use </span><span>App\Models\</span><span style="color:#ebcb8b;">Dragon</span><span>;
</span><span>
</span><span style="color:#65737e;">//
</span><span>
</span><span style="color:#ebcb8b;">Dragon</span><span>::</span><span style="color:#bf616a;">factory</span><span>(</span><span style="color:#d08770;">200</span><span>);
</span></code></pre>
<p>Here, every Dragon would have its own Lair generated with him. Note that a
similar approach could be used in Laravel 7 and below as well, but here I
wanted to demonstrate the namespacing changes Laravel 8 brought in. Convenient.</p>
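<p>As a side note, the Laravel 8 factories also offer a fluent way to express the relationship at the call site; a sketch using the documented <code>for</code> and <code>count</code> methods (note that, unlike the factory definition above, all three Dragons here would share one single Lair):</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>// three Dragons crammed into the same freshly created Lair
Dragon::factory()->count(3)->for(Lair::factory())->create();
</span></code></pre>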
<h2 id="links">Links</h2>
<ul>
<li><a href="https://laravel.com/docs/8.x/database-testing#defining-relationships-within-factories">https://laravel.com/docs/8.x/database-testing#defining-relationships-within-factories</a></li>
</ul>
A basic InertiaJS test macro2021-07-24T00:00:00+00:002021-07-24T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/basic-inertiajs-test-macro/<p>I've made a macro for the <code>Illuminate\Testing\TestResponse</code> class that I
put into the <code>TestCase.php</code> file which is
<a href="https://github.com/inertiajs/pingcrm-svelte/blob/6c1bedcd530c704082b425bdf3d5d3c9916d8c36/tests/TestCase.php">a part of pingcrm-svelte</a>.
I currently use this short macro in basically all HTTP tests for
Inertia-related endpoints in Laravel, so unless I am doing something wrong, it can
be considered quite helpful. Take a look:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span style="color:#b48ead;">use </span><span>Illuminate\Testing\</span><span style="color:#ebcb8b;">TestResponse</span><span>;
</span><span style="color:#b48ead;">use </span><span>Illuminate\Foundation\Testing\</span><span style="color:#ebcb8b;">TestCase </span><span style="color:#b48ead;">as </span><span style="color:#ebcb8b;">BaseTestCase</span><span>;
</span><span style="color:#b48ead;">use </span><span>Inertia\Testing\</span><span style="color:#ebcb8b;">Assert </span><span style="color:#b48ead;">as </span><span style="color:#ebcb8b;">InertiaAssert</span><span>;
</span><span style="color:#65737e;">// use PHPUnit\Framework\Assert;
</span><span>
</span><span style="color:#b48ead;">abstract class </span><span style="color:#ebcb8b;">TestCase </span><span style="color:#b48ead;">extends </span><span style="color:#a3be8c;">BaseTestCase </span><span style="color:#eff1f5;">{
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">protected function </span><span style="color:#8fa1b3;">setUp</span><span style="color:#eff1f5;">(): </span><span style="color:#ebcb8b;">void </span><span style="color:#eff1f5;">{
</span><span style="color:#eff1f5;"> </span><span style="color:#bf616a;">parent</span><span style="color:#eff1f5;">::</span><span style="color:#bf616a;">setUp</span><span style="color:#eff1f5;">();
</span><span style="color:#eff1f5;">
</span><span style="color:#eff1f5;"> </span><span style="color:#ebcb8b;">TestResponse</span><span style="color:#eff1f5;">::</span><span style="color:#bf616a;">macro</span><span style="color:#eff1f5;">(</span><span>'</span><span style="color:#a3be8c;">assertInertiaComponent</span><span>'</span><span style="color:#eff1f5;">, </span><span style="color:#b48ead;">function </span><span style="color:#eff1f5;">(</span><span>$</span><span style="color:#bf616a;">component</span><span style="color:#eff1f5;">) {
</span><span style="color:#eff1f5;"> </span><span style="color:#b48ead;">return </span><span>$</span><span style="color:#bf616a;">this</span><span style="color:#eff1f5;">-></span><span style="color:#bf616a;">assertStatus</span><span style="color:#eff1f5;">(</span><span style="color:#d08770;">200</span><span style="color:#eff1f5;">)-></span><span style="color:#bf616a;">assertInertia</span><span style="color:#eff1f5;">(</span><span style="color:#b48ead;">function </span><span style="color:#eff1f5;">(
</span><span style="color:#eff1f5;"> </span><span style="color:#ebcb8b;">InertiaAssert </span><span>$</span><span style="color:#bf616a;">page</span><span style="color:#eff1f5;">,
</span><span style="color:#eff1f5;"> ) </span><span style="color:#b48ead;">use </span><span style="color:#eff1f5;">(</span><span>$</span><span style="color:#bf616a;">component</span><span style="color:#eff1f5;">) {
</span><span style="color:#eff1f5;"> </span><span>$</span><span style="color:#bf616a;">page</span><span style="color:#eff1f5;">-></span><span style="color:#bf616a;">component</span><span style="color:#eff1f5;">(</span><span>$</span><span style="color:#bf616a;">component</span><span style="color:#eff1f5;">);
</span><span style="color:#eff1f5;"> });
</span><span style="color:#eff1f5;"> });
</span><span style="color:#eff1f5;"> }
</span><span style="color:#eff1f5;">}
</span></code></pre>
<p>Inside an HTTP test with PHPUnit, the macro can now be employed like
this:</p>
<pre data-lang="php" style="background-color:#2b303b;color:#c0c5ce;" class="language-php "><code class="language-php" data-lang="php"><span style="color:#ab7967;"><?php
</span><span style="color:#b48ead;">public function </span><span style="color:#8fa1b3;">test_user_can_see_items</span><span>() {
</span><span> $</span><span style="color:#bf616a;">this</span><span>-></span><span style="color:#bf616a;">actingAs</span><span>($</span><span style="color:#bf616a;">this</span><span>-></span><span style="color:#bf616a;">user</span><span>)
</span><span> -></span><span style="color:#bf616a;">get</span><span>('</span><span style="color:#a3be8c;">/item</span><span>')
</span><span> -></span><span style="color:#bf616a;">assertInertiaComponent</span><span>('</span><span style="color:#a3be8c;">Item/Index</span><span>');
</span><span>}
</span></code></pre>
<p>It could definitely be done differently or better, but hey, it's a good
start for me.</p>
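<p>If needed, the same fluent assertion object can also check page props; a sketch (assuming the page receives an <code>items</code> prop, and reusing the <code>InertiaAssert</code> alias from the macro above):</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>$this->get('/item')->assertInertia(function (InertiaAssert $page) {
    // assert both the rendered component and the presence of a prop
    $page->component('Item/Index')->has('items');
});
</span></code></pre>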
CORS problems with InertiaJS and Browsersync2021-07-23T00:00:00+00:002021-07-23T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/cors-problems-with-inertiajs-and-browsersync/<p>InertiaJS is a really impressive approach to building fullstack web
applications. I've probably first heard about it in the Javascript Jabber
from devchat.tv in
<a href="https://devchat.tv/js-jabber/jsj-443-all-about-inertiajs-with-jonathan-reinink/">episode 443</a>.
Adopting it was really straightforward, as I had previous experience with
Laravel, TailwindCSS and Svelte (which is still my choice for front-end).</p>
<h2 id="the-problem">The problem</h2>
<p>The only problem I keep seeing is this error message:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost/item?page=2. (Reason: CORS request did not succeed)
</span></code></pre>
<p>CORS, or Cross-Origin Resource Sharing, is a security feature, and it is
nothing new. The reason the message shows up in my situation is the
<a href="https://laravel.com/docs/8.x/mix#browsersync-reloading">Browsersync reloading in Laravel Mix</a>.
Browsersync works by proxying the host URL, in my case
<code>http://localhost</code>, to another one fully under its control,
the default being <code>http://localhost:3000/</code>. I am not
entirely sure what happens under Browsersync's hood at this point, so
feel free to let me know if there is a better way to explain it. Surely
there is.</p>
<h2 id="a-solution">A solution</h2>
<p>Anyway, there seems to be an accepted solution for dealing with this
problem. I've included the most relevant links down below, but in general,
two steps are required:</p>
<h3 id="step-1-configure-browsersync-options">Step 1. Configure Browsersync options</h3>
<p>If using Browsersync via Laravel Mix, insert the following in the
<code>webpack.mix.js</code>:</p>
<pre data-lang="js" style="background-color:#2b303b;color:#c0c5ce;" class="language-js "><code class="language-js" data-lang="js"><span>.</span><span style="color:#8fa1b3;">browserSync</span><span>({
</span><span> proxy: '</span><span style="color:#a3be8c;">localhost</span><span>',
</span><span> host: '</span><span style="color:#a3be8c;">localhost:3000</span><span>'
</span><span>})
</span></code></pre>
<h3 id="step-2-inform-the-front-end">Step 2. Inform the front-end</h3>
<p>Insert the following lines into <code>resources/views/app.blade.php</code>, almost at
the very bottom of the page:</p>
<pre data-lang="html" style="background-color:#2b303b;color:#c0c5ce;" class="language-html "><code class="language-html" data-lang="html"><span>@if (app()->isLocal())
</span><span><</span><span style="color:#bf616a;">script </span><span style="color:#d08770;">src</span><span>="</span><span style="color:#a3be8c;">http://localhost:3000/browser-sync/browser-sync-client.js</span><span>"></</span><span style="color:#bf616a;">script</span><span>>
</span><span>@endif
</span><span style="color:#65737e;"><!-- here is the end of the page
</span><span style="color:#65737e;"> </body>
</span><span style="color:#65737e;"></html>
</span><span style="color:#65737e;">-->
</span></code></pre>
<h3 id="step-3-watch">Step 3. Watch</h3>
<p>Now start watching the app in the browser tab at <code>http://localhost:3000</code>
that gets opened and then reloaded automatically when resources change:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">npm</span><span> run watch
</span></code></pre>
<p>I have two problems with this solution. The first is that the script part
gets propagated into the production bundle, but points at a
non-existent file. Not that much of a problem, and it can be solved, although
the solution should be readily offered.</p>
<p>The second, worse one is that on things like pagination, the URL only gets
proxied after a full page refresh. After navigating in an
InertiaJS app, the proxy stops working, which is quite distracting during
development. I will try to open an issue when I learn more about the
actual behavior.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://warrickbayman.medium.com/browsersync-and-inertia-8e3ed647669a">https://warrickbayman.medium.com/browsersync-and-inertia-8e3ed647669a</a></li>
<li><a href="https://angle.software/configuring-mix-livereload-browsersync-with-inertiajs/">https://angle.software/configuring-mix-livereload-browsersync-with-inertiajs/</a></li>
<li><a href="https://forum.laravel-livewire.com/t/getting-cors-error-on-file-upload-probably-related-to-browsersync/1565/2">https://forum.laravel-livewire.com/t/getting-cors-error-on-file-upload-probably-related-to-browsersync/1565/2</a></li>
<li><a href="https://laracasts.com/discuss/channels/code-review/laravel-redirect-failing-cors?page=1&replyId=634427">https://laracasts.com/discuss/channels/code-review/laravel-redirect-failing-cors?page=1&replyId=634427</a></li>
<li><a href="https://github.com/inertiajs/inertia-laravel/issues/57#issuecomment-570581851">https://github.com/inertiajs/inertia-laravel/issues/57#issuecomment-570581851</a></li>
</ul>
Prettier PHP plugin in vim2021-07-22T00:00:00+00:002021-07-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/prettier-php-plugin-in-vim/<p>After spending a few hours trying to make chained methods in PHP arrange
themselves below one another in a tidy manner, I have finally found a
solution. In other words, on file save I wanted to go from this:</p>
<pre data-lang="js" style="background-color:#2b303b;color:#c0c5ce;" class="language-js "><code class="language-js" data-lang="js"><span style="color:#bf616a;">$this</span><span>-></span><span style="color:#bf616a;">user</span><span>-></span><span style="color:#bf616a;">account</span><span>-></span><span style="color:#8fa1b3;">organizations</span><span>()-></span><span style="color:#8fa1b3;">saveMany</span><span>(</span><span style="color:#bf616a;">Organization</span><span>::</span><span style="color:#8fa1b3;">factory</span><span>(</span><span style="color:#d08770;">5</span><span>)
</span><span>-></span><span style="color:#8fa1b3;">make</span><span>())-></span><span style="color:#8fa1b3;">first</span><span>()-></span><span style="color:#8fa1b3;">update</span><span>(['</span><span style="color:#a3be8c;">name</span><span>' </span><span style="color:#b48ead;">=> </span><span>'</span><span style="color:#a3be8c;">A Big Brand Name</span><span>']);
</span></code></pre>
<p>To something resembling this:</p>
<pre data-lang="js" style="background-color:#2b303b;color:#c0c5ce;" class="language-js "><code class="language-js" data-lang="js"><span style="color:#bf616a;">$this</span><span>-></span><span style="color:#bf616a;">user</span><span>-></span><span style="color:#bf616a;">account
</span><span> -></span><span style="color:#8fa1b3;">organizations</span><span>()
</span><span> -></span><span style="color:#8fa1b3;">saveMany</span><span>(</span><span style="color:#bf616a;">Organization</span><span>::</span><span style="color:#8fa1b3;">factory</span><span>(</span><span style="color:#d08770;">5</span><span>)-></span><span style="color:#8fa1b3;">make</span><span>())
</span><span> -></span><span style="color:#8fa1b3;">first</span><span>()
</span><span> -></span><span style="color:#8fa1b3;">update</span><span>(['</span><span style="color:#a3be8c;">name</span><span>' </span><span style="color:#b48ead;">=> </span><span>'</span><span style="color:#a3be8c;">A Big Brand Name</span><span>']);
</span></code></pre>
<p>The above is clearly easier to read and thus it takes less time to
understand what the code does.</p>
<h2 id="what-did-not-work-for-me">What did not work for me</h2>
<p>Here are a few possibly unrelated methods to deal with the problem
that did not work, in no particular order.</p>
<h3 id="coc-prettier">coc-prettier</h3>
<p>I use <a href="https://github.com/neoclide/coc-prettier">coc-prettier</a> in my
current neovim setup, especially for its ease of use on Markdown
(prose-wrap, anyone?) and Javascript. PHP is, however, not supported by
prettier out of the box, and is rather supplied as a community-maintained
plugin under <a href="https://github.com/prettier/plugin-php">prettier/plugin-php</a>.</p>
<p>Currently it looks like
<a href="https://github.com/neoclide/coc-prettier/issues/79#issuecomment-855403473">these two do not play along</a>.
The response is fairly recent and there is definitely potential for
prettier plugins under <code>coc-prettier</code>; sadly, I could not find anything more
on the topic.</p>
<h3 id="intelephense-in-coc-phpls-wit-coc-prettier">Intelephense in coc-phpls with coc-prettier</h3>
<p>In conjunction with the <code>coc-prettier</code> above,
<a href="https://github.com/marlonfan/coc-phpls">coc-phpls</a> can do PHP formatting
on save as well with these two relevant settings in <code>:CocConfig</code> below:</p>
<pre data-lang="json" style="background-color:#2b303b;color:#c0c5ce;" class="language-json "><code class="language-json" data-lang="json"><span>"</span><span style="color:#a3be8c;">intelephense.format.enable</span><span>": </span><span style="color:#d08770;">true</span><span>,
</span><span>"</span><span style="color:#a3be8c;">coc.preferences.formatOnSaveFiletypes</span><span>": ["</span><span style="color:#a3be8c;">php</span><span>"]
</span></code></pre>
<p>Sadly, at the time of writing, the only formatter configuration option is
<code>intelephense.format.braces</code>. This setting has no effect on aligning
chained PHP methods. It also somehow conflicts with prettier's <code>tabWidth</code>
if the hard-coded Intelephense value differs. I ended up
removing/turning off both of the above options.</p>
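<p>With those options off, prettier is left as the single source of truth for widths; a minimal <code>.prettierrc</code> sketch could pin the value explicitly (the value 4 is an assumption, matching the usual PSR-12-style indentation):</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>{
  "tabWidth": 4
}
</span></code></pre>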
<h3 id="vim-phpfmt">vim-phpfmt</h3>
<p>I've had absolutely no luck with the
<a href="https://github.com/beanworks/vim-phpfmt">vim-phpfmt</a> plugin whatsoever in
regards to aligning chained methods in PHP code. That could be expected, as
the plugin has not been updated for more than 5 years.</p>
<p>It utilizes <code>phpcbf</code>, called the PHP Code Beautifier and Fixer, from the
<a href="https://github.com/squizlabs/PHP_CodeSniffer">PHP CodeSniffer</a> package,
which is under active development. I believe this approach could work, but
I am not sure how long it would take to get it working.</p>
<h3 id="inotifywait-script">inotifywait script</h3>
<p>At one point, I tried to utilize the
<a href="/blog/dead-simple-laravel-test-watcher/">test watcher script</a> by running
a globally installed prettier on the changed file, since I already had it up
and running:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#65737e;">#!/bin/bash
</span><span style="color:#b48ead;">while </span><span style="color:#bf616a;">true</span><span>; </span><span style="color:#b48ead;">do
</span><span> </span><span style="color:#bf616a;">FILE</span><span>=$</span><span style="color:#a3be8c;">(</span><span style="color:#bf616a;">inotifywait --recursive </span><span style="color:#a3be8c;">\
</span><span style="color:#bf616a;"> --exclude</span><span>="</span><span style="color:#a3be8c;">.*.*sw*</span><span>"</span><span style="color:#bf616a;"> --exclude</span><span>="</span><span style="color:#a3be8c;">4913</span><span>" </span><span style="color:#a3be8c;">\
</span><span style="color:#a3be8c;"> ./watch_this_folder</span><span style="color:#bf616a;"> --format </span><span>"</span><span style="color:#a3be8c;">%w%f</span><span>"</span><span style="color:#bf616a;"> -e</span><span style="color:#a3be8c;"> close_write)
</span><span> && </span><span style="color:#bf616a;">clear
</span><span> && </span><span style="color:#bf616a;">prettier --parser</span><span>=php</span><span style="color:#bf616a;"> -w </span><span>"$</span><span style="color:#bf616a;">FILE</span><span>"
</span><span style="color:#b48ead;">done
</span></code></pre>
<p>But I had numerous issues with this approach, ranging from delayed tests,
through vim not re-rendering the reformatted file, to random files or even
entire folders being reformatted on short notice, so I did not continue down
this path.</p>
<h2 id="what-works">What works</h2>
<p>There are two solutions I found that work reasonably well for aligning
chained methods in PHP, both relying on prettier.</p>
<h3 id="prettier-plugin-php-vimscript">prettier/plugin-php vimscript</h3>
<p>The
<a href="/blog/vim-filter-contents-replaced-wtih-error/">modified vimscript I wrote about yesterday</a>
worked, and I thought I would stick with it. Go take a look over there for
more details about the approach.</p>
<h3 id="vim-prettier">vim-prettier</h3>
<p>I had no luck with the elaborate solution outlined in
<a href="https://github.com/prettier/vim-prettier/issues/119#issuecomment-371766861">#119</a>
for vim-prettier, which is currently linked as part of the
<a href="https://github.com/prettier/plugin-php/blob/d57e587893aa4ee6bae4edf21b8ec8312f2b3fd0/README.md#vim">documentation</a>.
I also found it weird to have both vim-prettier and coc-prettier installed
as vim plugins.</p>
<p>However, while I was documenting it all, by a stroke of luck I found a
gem in <a href="https://github.com/prettier/vim-prettier/issues/263">#263</a>.
As far as I can tell, the solution requires just a few steps. Install
plugin-php as a project dependency:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">npm</span><span> install</span><span style="color:#bf616a;"> -D</span><span> @prettier/plugin-php
</span></code></pre>
<p>Then edit your <code>vimrc</code> file:</p>
<pre data-lang="vim" style="background-color:#2b303b;color:#c0c5ce;" class="language-vim "><code class="language-vim" data-lang="vim"><span>call </span><span style="color:#8fa1b3;">plug#begin</span><span>(</span><span style="color:#a3be8c;">'~/.vim/plugged'</span><span>)
</span><span> Plug </span><span style="color:#a3be8c;">'prettier/vim-prettier'</span><span>, { </span><span style="color:#a3be8c;">'do'</span><span>: </span><span style="color:#a3be8c;">'npm install'</span><span>, </span><span style="color:#a3be8c;">'for'</span><span>: [</span><span style="color:#a3be8c;">'php'</span><span>] }
</span><span>call </span><span style="color:#8fa1b3;">plug#end</span><span>()
</span><span>
</span><span style="color:#96b5b4;">autocmd</span><span> BufWritePre </span><span style="color:#b48ead;">*.</span><span>php PrettierAsync
</span></code></pre>
<p>Run <code>:PlugInstall</code> and you are ready to go. As you can see,
<code>vim-prettier</code> is only enabled for PHP files, as the others are handled by
<code>coc-prettier</code> in my setup. Seeing these two actually work together side by
side without issues made me more comfortable with this setup and my
reluctance diminished.</p>
<p>What I really like about this setup is its simplicity, and also the fact
that it respects the project-wide <code>.prettierrc</code> file, exactly to my
taste, for example:</p>
<pre data-lang="json" style="background-color:#2b303b;color:#c0c5ce;" class="language-json "><code class="language-json" data-lang="json"><span>{
</span><span> "</span><span style="color:#a3be8c;">tabWidth</span><span>": </span><span style="color:#d08770;">4</span><span>,
</span><span> "</span><span style="color:#a3be8c;">semi</span><span>": </span><span style="color:#d08770;">false</span><span>,
</span><span> "</span><span style="color:#a3be8c;">singleQuote</span><span>": </span><span style="color:#d08770;">true</span><span>,
</span><span> "</span><span style="color:#a3be8c;">trailingComma</span><span>": "</span><span style="color:#a3be8c;">es5</span><span>",
</span><span> "</span><span style="color:#a3be8c;">trailingCommaPHP</span><span>": </span><span style="color:#d08770;">true</span><span>,
</span><span> "</span><span style="color:#a3be8c;">proseWrap</span><span>": "</span><span style="color:#a3be8c;">always</span><span>",
</span><span> "</span><span style="color:#a3be8c;">arrowParens</span><span>": "</span><span style="color:#a3be8c;">avoid</span><span>",
</span><span> "</span><span style="color:#a3be8c;">bracketSpacing</span><span>": </span><span style="color:#d08770;">true</span><span>,
</span><span> "</span><span style="color:#a3be8c;">phpVersion</span><span>": "</span><span style="color:#a3be8c;">8.0</span><span>",
</span><span> "</span><span style="color:#a3be8c;">braceStyle</span><span>": "</span><span style="color:#a3be8c;">1tbs</span><span>"
</span><span>}
</span></code></pre>
<p>Both the standard prettier options and the ones from the
<a href="https://github.com/prettier/plugin-php#configuration">plugin-php configuration</a>
sit neatly in one place. The disadvantage is that if you want a different
<code>tabWidth</code> for different file types across a project, it cannot be done
exactly this way. But then, that would go slightly against prettier's
philosophy of being opinionated and consistent.</p>
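<p>That said, if per-file-type settings are ever needed, prettier does
offer an <code>overrides</code> field that matches files by glob. A minimal sketch,
not taken from this project's actual config (the patterns and values here
are made up):</p>

```json
{
  "tabWidth": 2,
  "overrides": [
    {
      "files": "*.php",
      "options": { "tabWidth": 4 }
    }
  ]
}
```

<p>With a config like this, non-PHP files would get a <code>tabWidth</code> of 2 while
PHP files get 4, at the cost of a slightly less uniform setup.</p>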
<p>I wish I had found this solution right at the beginning, but hey, better
late than never. Happy writing!</p>
Vim filter contents replaced with an error2021-07-21T00:00:00+00:002021-07-21T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/vim-filter-contents-replaced-wtih-error/<p>One of the features of vim is its ability to filter the contents of a
file through a command and return the output back to the buffer. This can
be used, for example, to fix the indentation of a file or to format the
document in general.</p>
<p>The official
<a href="https://github.com/prettier/plugin-php/blob/d57e587893aa4ee6bae4edf21b8ec8312f2b3fd0/README.md#custom">documentation</a>
for prettier/plugin-php also includes this vimscript:</p>
<pre data-lang="vim" style="background-color:#2b303b;color:#c0c5ce;" class="language-vim "><code class="language-vim" data-lang="vim"><span style="color:#65737e;">" Prettier for PHP
</span><span style="color:#b48ead;">function </span><span style="color:#8fa1b3;">PrettierPhpCursor</span><span>()
</span><span> </span><span style="color:#96b5b4;">let</span><span> save_pos = </span><span style="color:#8fa1b3;">getpos</span><span>(</span><span style="color:#a3be8c;">"."</span><span>)
</span><span> </span><span style="color:#b48ead;">%</span><span>! prettier --parser=php
</span><span> call </span><span style="color:#8fa1b3;">setpos</span><span>(</span><span style="color:#a3be8c;">'.'</span><span>, save_pos)
</span><span style="color:#b48ead;">endfunction
</span><span style="color:#65737e;">" define custom command
</span><span style="color:#96b5b4;">command</span><span> PrettierPhp call </span><span style="color:#8fa1b3;">PrettierPhpCursor</span><span>()
</span><span style="color:#65737e;">" format on save
</span><span style="color:#96b5b4;">autocmd</span><span> BufwritePre </span><span style="color:#b48ead;">*.</span><span>php PrettierPhp
</span></code></pre>
<p>It uses a <code>prettier</code> command available in the PATH, installed for instance via:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">npm</span><span> install</span><span style="color:#bf616a;"> -g</span><span> prettier @prettier/plugin-php
</span></code></pre>
<p>The script works as advertised; the PHP file is formatted on save, and
depending on the prettier settings, the result could look similar to this:</p>
<p><img src="https://peterbabic.dev/blog/vim-filter-contents-replaced-wtih-error/prettier-plugin-php-works.gif" alt="A PHP file is properly formated when saved in vim" /></p>
<p>The excessive spaces are removed, missing ones are introduced, and the
quoting is made consistent, among other formatting tasks it handles. Nice.</p>
<h2 id="the-problem">The problem</h2>
<p>But this fairytale situation turns more dire if the PHP code contains a
syntax error or a similar problem:</p>
<p><img src="https://peterbabic.dev/blog/vim-filter-contents-replaced-wtih-error/prettier-plugin-php-error-vim-buffer.gif" alt="Using vim filter file content replaces the current buffer with the error output" /></p>
<p>Not only is one distracted by the <code>shell returned 2</code> message and the
<strong>Press ENTER or type command to continue</strong> prompt, but the contents of
the current buffer are also replaced with the error message.</p>
<p>This can be undone by pressing <code>u</code>, but then the cursor ends up at the
top of the file. Getting the cursor back where it was costs even more
unnecessary keystrokes.</p>
<h2 id="the-solution">The solution</h2>
<p>By tweaking the mentioned vimscript a little, I was able to get a more
pleasurable solution out of the setup:</p>
<pre data-lang="vim" style="background-color:#2b303b;color:#c0c5ce;" class="language-vim "><code class="language-vim" data-lang="vim"><span style="color:#65737e;">" Prettier for PHP
</span><span style="color:#b48ead;">function </span><span style="color:#8fa1b3;">PrettierPhpCursor</span><span>()
</span><span> </span><span style="color:#96b5b4;">let</span><span> save_pos = </span><span style="color:#8fa1b3;">getpos</span><span>(</span><span style="color:#a3be8c;">"."</span><span>)
</span><span> </span><span style="color:#b48ead;">%</span><span>! prettier --parser=php
</span><span style="color:#65737e;"> " undo automatically on error
</span><span> </span><span style="color:#b48ead;">if </span><span style="color:#bf616a;">v:shell_error</span><span> | silent undo | </span><span style="color:#b48ead;">endif
</span><span> call </span><span style="color:#8fa1b3;">setpos</span><span>(</span><span style="color:#a3be8c;">'.'</span><span>, save_pos)
</span><span style="color:#b48ead;">endfunction
</span><span style="color:#65737e;">" define custom command
</span><span style="color:#96b5b4;">command</span><span> PrettierPhp call </span><span style="color:#8fa1b3;">PrettierPhpCursor</span><span>()
</span><span style="color:#65737e;">" format on save silently
</span><span style="color:#96b5b4;">autocmd</span><span> BufwritePre </span><span style="color:#b48ead;">*.</span><span>php silent PrettierPhp
</span></code></pre>
<p>Also, the prettier command has numerous options to set, either via a
<code>.prettierrc</code> file or right in the vimscript, like:</p>
<pre data-lang="vim" style="background-color:#2b303b;color:#c0c5ce;" class="language-vim "><code class="language-vim" data-lang="vim"><span style="color:#b48ead;">%</span><span>! prettier --parser=php --brace-style=</span><span style="color:#d08770;">1</span><span>tbs
</span></code></pre>
<p>More details can be found in the
<a href="https://github.com/prettier/plugin-php#configuration">Configuration</a>
section of the documentation or via:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">npx</span><span> prettier</span><span style="color:#bf616a;"> --parser</span><span>=php</span><span style="color:#bf616a;"> --help
</span></code></pre>
<p>It looks like the modified vimscript is working without much disruption
now. Enjoy!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://github.com/prettier/plugin-php">https://github.com/prettier/plugin-php</a></li>
<li><a href="https://askubuntu.com/a/719000/350681">https://askubuntu.com/a/719000/350681</a></li>
<li><a href="https://vimhelp.org/cmdline.txt.html#%3A%25">https://vimhelp.org/cmdline.txt.html#%3A%25</a></li>
<li><a href="https://vimhelp.org/various.txt.html#%3A%21">https://vimhelp.org/various.txt.html#%3A%21</a></li>
<li><a href="https://stackoverflow.com/a/6074494/1972509">https://stackoverflow.com/a/6074494/1972509</a></li>
<li><a href="https://stackoverflow.com/a/62976064/1972509">https://stackoverflow.com/a/62976064/1972509</a></li>
<li><a href="https://stackoverflow.com/questions/26051680/display-stdout-in-vim-when-external-command-fails">https://stackoverflow.com/questions/26051680/display-stdout-in-vim-when-external-command-fails</a></li>
<li><a href="https://vi.stackexchange.com/questions/5795/shell-returned-2-when-i-try-to-indent">https://vi.stackexchange.com/questions/5795/shell-returned-2-when-i-try-to-indent</a></li>
<li><a href="https://vi.stackexchange.com/questions/7116/how-can-i-filter-a-buffer-to-an-external-command-on-save-without-causing-any-sid/7118">https://vi.stackexchange.com/questions/7116/how-can-i-filter-a-buffer-to-an-external-command-on-save-without-causing-any-sid/7118</a></li>
<li><a href="https://stackoverflow.com/questions/11703174/abort-write-in-bufwritepre-of-vim-script">https://stackoverflow.com/questions/11703174/abort-write-in-bufwritepre-of-vim-script</a></li>
</ul>
A dead-simple Laravel test watcher2021-07-20T00:00:00+00:002021-07-20T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/dead-simple-laravel-test-watcher/<p>I am spoiled by the many test watchers from the javascript world that do
all the file change watching and polling on your behalf, to rerun the tests
the moment you save a particular file. This feature usually comes out of
the box, especially with the more complex tools like Jest or Cypress.</p>
<p>Trying to do the same thing with PHPUnit, the standard PHP testing
framework used by Laravel too, I found that automatic test running on file
change is not included as a first-class citizen. I found multiple packages
that could be installed via composer, but none of them appealed to me.</p>
<h2 id="i-am-a-script-kiddie">I am a script kiddie</h2>
<p>Yes, StackOverflow copy-paste to the rescue. Again. The dead-simple
solution written in Bash could be found in
<a href="https://stackoverflow.com/a/27447130/1972509">this answer</a> and looks like
this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#65737e;">#!/bin/bash
</span><span style="color:#b48ead;">while </span><span style="color:#bf616a;">true</span><span>; </span><span style="color:#b48ead;">do
</span><span> </span><span style="color:#bf616a;">FILE</span><span>=$</span><span style="color:#a3be8c;">(</span><span style="color:#bf616a;">inotifywait --exclude</span><span>="</span><span style="color:#a3be8c;">.*.*sw*</span><span>"</span><span style="color:#bf616a;"> --exclude</span><span>="</span><span style="color:#a3be8c;">4913</span><span>"</span><span style="color:#a3be8c;"> ./</span><span style="color:#bf616a;"> --format </span><span>"</span><span style="color:#a3be8c;">%w%f</span><span>"</span><span style="color:#bf616a;"> -e</span><span style="color:#a3be8c;"> close_write) </span><span>&&
</span><span> </span><span style="color:#bf616a;">clear </span><span>&&
</span><span> </span><span style="color:#bf616a;">phpunit --color </span><span>$</span><span style="color:#bf616a;">FILE
</span><span style="color:#b48ead;">done
</span></code></pre>
<p>Kudos to the user
<a href="https://stackoverflow.com/users/992437/tango-bravo">Tango Bravo</a> for
providing it. The author also claims the script is vim compatible, thanks
to the magic number <code>4913</code>, which I unfortunately do not understand. As a
proper script kiddie, not understanding every part of the script did not
prevent me from putting it into the Laravel project directory, making it
executable, and running it.</p>
<h2 id="not-part-of-the-playground-inotifywait">Not part of the playground: inotifywait</h2>
<p>Yeah, expecting something to run on the first try might be foolish:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>./watch.sh: line 3: inotifywait: command not found
</span></code></pre>
<p>If you have followed my posts for long enough, where long enough means
something around three months' worth of my blogging career, a hilariously
short time, you probably know
<a href="/blog/comprehensive-guide-pkgfile/">how to determine which package provides the file</a>.
I did exactly that:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> pkgfile inotifywait
</span><span style="color:#bf616a;">extra/bash-completion
</span><span style="color:#bf616a;">community/inotify-tools
</span></code></pre>
<p>A tough choice for a <code>zsh</code> user:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> inotify-tools
</span></code></pre>
<p>The script now runs without <code>command not found</code> complaints.</p>
<h2 id="sail-to-the-distant-shores">Sail to the distant shores</h2>
<p>The first optimization I made to the running watch script was to change
the test command. With Laravel 8,
<a href="https://laravel.com/docs/8.x/sail">Sail</a> is available. I had it
running already, so why not use it?</p>
<pre data-lang="diff" style="background-color:#2b303b;color:#c0c5ce;" class="language-diff "><code class="language-diff" data-lang="diff"><span>#!/bin/bash
</span><span>while true; do
</span><span> FILE=$(inotifywait --exclude=".*.*sw*" --exclude="4913" ./ --format "%w%f" -e close_write) &&
</span><span> clear &&
</span><span style="color:#bf616a;">- phpunit --color $FILE
</span><span style="color:#a3be8c;">+ ./vendor/bin/sail artisan test
</span><span>done
</span></code></pre>
<p>Still, editing any project file deeper in the directory structure did not
trigger the test run. We need to go recursive for that:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#65737e;">#!/bin/bash
</span><span style="color:#b48ead;">while </span><span style="color:#bf616a;">true</span><span>; </span><span style="color:#b48ead;">do
</span><span> </span><span style="color:#bf616a;">FILE</span><span>=$</span><span style="color:#a3be8c;">(</span><span style="color:#bf616a;">inotifywait --recursive --exclude</span><span>="</span><span style="color:#a3be8c;">.*.*sw*</span><span>"</span><span style="color:#bf616a;"> --exclude</span><span>="</span><span style="color:#a3be8c;">4913</span><span>"</span><span style="color:#a3be8c;"> ./</span><span style="color:#bf616a;"> --format </span><span>"</span><span style="color:#a3be8c;">%w%f</span><span>"</span><span style="color:#bf616a;"> -e</span><span style="color:#a3be8c;"> close_write) </span><span>&&
</span><span> </span><span style="color:#bf616a;">clear </span><span>&&
</span><span> </span><span style="color:#bf616a;">./vendor/bin/sail</span><span> artisan test
</span><span style="color:#b48ead;">done
</span></code></pre>
<p>This works reasonably well, but it can get a little bit better still.</p>
<h2 id="focus-please">Focus please</h2>
<p>One problem with the above is that a mere <code>git status</code> triggers the test
run, as the command probably writes some file into the <code>.git/</code> directory (I
did not test this, but it is the most likely reason). Luckily, we just need
to monitor a few key directories, namely <code>app/</code>, <code>tests/</code>, <code>routes/</code> and
possibly <code>resources/</code>. They can be specified easily:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#65737e;">#!/bin/bash
</span><span style="color:#b48ead;">while </span><span style="color:#bf616a;">true</span><span>; </span><span style="color:#b48ead;">do
</span><span> </span><span style="color:#bf616a;">FILE</span><span>=$</span><span style="color:#a3be8c;">(</span><span style="color:#bf616a;">inotifywait --recursive --exclude</span><span>="</span><span style="color:#a3be8c;">.*.*sw*</span><span>"</span><span style="color:#bf616a;"> --exclude</span><span>="</span><span style="color:#a3be8c;">4913</span><span>"</span><span style="color:#a3be8c;"> ./app ./tests ./routes ./resources/</span><span style="color:#bf616a;"> --format </span><span>"</span><span style="color:#a3be8c;">%w%f</span><span>"</span><span style="color:#bf616a;"> -e</span><span style="color:#a3be8c;"> close_write) </span><span>&&
</span><span> </span><span style="color:#bf616a;">clear </span><span>&&
</span><span> </span><span style="color:#bf616a;">./vendor/bin/sail</span><span> artisan test
</span><span style="color:#b48ead;">done
</span></code></pre>
<p>I keep this script running on a side monitor. Currently, it works well
enough. Happy testing!</p>
Finally understood git reset2021-07-19T00:00:00+00:002021-07-19T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/finally-understood-git-reset/<p>Git is a rather beefy tool,
<a href="https://stackoverflow.com/questions/11719013/how-many-commands-does-git-have">boasting up to 150 subcommands</a>,
with the exact figure varying depending on the git version. Using the
method from the thread on my machine:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> git help</span><span style="color:#bf616a;"> -a </span><span>| </span><span style="color:#bf616a;">grep </span><span>"</span><span style="color:#a3be8c;">^ </span><span>" | </span><span style="color:#bf616a;">wc -l
</span><span style="color:#bf616a;">144
</span></code></pre>
<p>The current count of 144 subcommands, however skewed the above metric
might be, is still impressive. But I cannot say I like git. Sure, I use it
almost every single day, even for tasks that are probably better handled by
another tool, but still, it is a giant.</p>
<h2 id="subcommands-are-not-the-whole-picture">Subcommands are not the whole picture</h2>
<p>You read that right, and if you have used git for just a little bit, you
are probably aware that basically all of these subcommands have their own
arguments, and that is where all the chaos begins. I am not going to name
them all, but you know I am talking about <code>git add -A</code>, <code>git commit -a</code>,
<code>git commit -m</code>, <code>git branch -r</code> or <code>git branch -m</code>, to name just a few.</p>
<p>So what's up with reset? Git reset is another such subcommand, one that
offers multiple arguments used really often. Specifically, there is
<code>git reset --hard</code>, which I think of as a kind of time machine for
traveling back in time (in a commit-history sort of way), and
<code>git reset --soft</code>, which I usually find myself using after some bad amend.
Your mileage may vary.</p>
<p>Anyway, both these commands stem from their parent, the mighty <code>git reset</code>.
How come I never used this command without arguments before? It is such a
basic command, tied to the very core of git itself. Well, I finally crossed
this line too. I successfully used (and understood) <code>git reset</code> to split
the previous commit into two separate ones! It really feels like ticking
just another box on the programmer's lifelong TODO list.</p>
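<p>For the record, a minimal sketch of that split, run in a throwaway repo
(the repo path, file names and commit messages here are made up for
illustration):</p>

```shell
# Create a throwaway repo with two commits, the second touching two files.
rm -rf /tmp/reset-demo
git init -q /tmp/reset-demo
cd /tmp/reset-demo
git config user.email you@example.com
git config user.name you
echo base > base.txt
git add . && git commit -qm "initial"
echo a > a.txt
echo b > b.txt
git add . && git commit -qm "one commit with both files"

# Plain `git reset` (default --mixed) moves HEAD back one commit
# while keeping the files on disk, just unstaged.
git reset -q HEAD~1

# Now the two files can be committed separately.
git add a.txt && git commit -qm "first half"
git add b.txt && git commit -qm "second half"
git log --oneline   # three commits now instead of the original two
```

<p>The key point is that bare <code>git reset</code> only moves HEAD and the index;
the working tree stays exactly as it was, ready to be re-staged in pieces.</p>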
A recent css-loader in Laravel Breeze problem2021-07-18T00:00:00+00:002021-07-18T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/recent-css-loader-laravel-breeze-problem/<p>Installing Laravel Breeze by following its documentation yesterday took
a sudden turn downwards, which I did not expect to happen in the Laravel
ecosystem. However, the problem is connected to the javascript and npm side
of the coin, where hoping for the best and expecting the worst is the
sanest approach. Let's look at what happened.</p>
<h2 id="laravel-breeze">Laravel Breeze</h2>
<p>From the
<a href="https://laravel.com/docs/8.x/starter-kits#laravel-breeze-installation">documentation</a>:</p>
<blockquote>
<p>Laravel Breeze is a minimal, simple implementation of all of Laravel's
authentication features, including login, registration, password reset,
email verification, and password confirmation. Laravel Breeze's default
view layer is made up of simple Blade templates styled with Tailwind CSS.
Breeze provides a wonderful starting point for beginning a fresh Laravel
application.</p>
</blockquote>
<p>Might come in handy. Steps to reproduce the problem at this point,
assuming the app is running via Sail:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">composer</span><span> require laravel/breeze</span><span style="color:#bf616a;"> --dev
</span><span style="color:#bf616a;">php</span><span> artisan breeze:install
</span><span style="color:#bf616a;">pnpm</span><span> install
</span></code></pre>
<p>Up to this point, no problems appear. The command where the problems start:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pnpm</span><span> run dev
</span></code></pre>
<p>The script asks the following:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>We will use "pnpm" to install the CLI via "pnpm install -D webpack-cli".
</span><span>Do you want to install 'webpack-cli' (yes/no): yes
</span></code></pre>
<p>After answering "yes", however, the command errors out after a second:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>ERROR in /js/app
</span><span>Module not found: Error: Can't resolve 'babel-loader' in '/home/peterbabic/work/laravel-app'
</span><span>
</span><span>ERROR in /js/app
</span><span>Module not found: Error: Can't resolve 'css-loader' in '/home/peterbabic/work/laravel-app'
</span></code></pre>
<p>Install the mentioned dev dependencies and start over:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pnpm</span><span> install</span><span style="color:#bf616a;"> -D</span><span> babel-loader css-loader
</span><span style="color:#bf616a;">pnpm</span><span> run dev
</span></code></pre>
<p>After another "yes", the command this time failed with a different
missing package:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>ERROR in /js/app
</span><span>Module not found: Error: Can't resolve 'postcss-loader' in '/home/peterbabic/work/laravel-app'
</span></code></pre>
<p>I thought that installing this one the same way as the above two would
suffice. Let's look at it:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pnpm</span><span> install</span><span style="color:#bf616a;"> -D</span><span> postcss-loader
</span><span style="color:#bf616a;">pnpm</span><span> run dev
</span></code></pre>
<p>Nope. The error now is cryptic:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>ValidationError: Invalid options object. CSS Loader has been initialized using an options object that does not match the API schema.
</span><span> - options.url should be one of these:
</span><span> boolean | object { filter? }
</span><span> -> Allows to enables/disables `url()`/`image-set()` functions handling.
</span><span> -> Read more at https://github.com/webpack-contrib/css-loader#url
</span><span> Details:
</span><span> * options.url should be a boolean.
</span><span> * options.url should be an object:
</span><span> object { filter? }
</span></code></pre>
<p>What to do now?</p>
<h2 id="solution">Solution</h2>
<p>After some searching, I found that other people
<a href="https://laravelquestions.com/2021/07/17/laravel-validationerror-in-css-loader-using-npm-run-prod-webpack/">recently experienced this problem too</a>.
Without a documented solution, I tried downgrading the <code>css-loader</code>
package. Looking into <code>package.json</code>, it was at the quite freshly bumped
major version <code>^6.0.0</code>, hinting at a possible problem. Edit <code>package.json</code>
either manually:</p>
<pre data-lang="diff" style="background-color:#2b303b;color:#c0c5ce;" class="language-diff "><code class="language-diff" data-lang="diff"><span>"devDependencies": {
</span><span> "@tailwindcss/forms": "^0.2.1",
</span><span> "alpinejs": "^2.7.3",
</span><span> "autoprefixer": "^10.1.0",
</span><span> "axios": "^0.21",
</span><span> "babel-loader": "^8.2.2",
</span><span style="color:#bf616a;">- "css-loader": "^6.0.0",
</span><span style="color:#a3be8c;">+ "css-loader": "^5.0.0",
</span><span> "laravel-mix": "^6.0.6",
</span><span> "lodash": "^4.17.19",
</span><span> "postcss": "^8.2.1",
</span><span> "postcss-import": "^12.0.1",
</span><span> "postcss-loader": "^6.1.1",
</span><span> "tailwindcss": "^2.0.2",
</span><span> "webpack-cli": "^4.7.2"
</span><span>}
</span></code></pre>
<p>Or via the command line (the manual edit above is preferred here), and
start over one more time:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pnpm</span><span> install css-loader@5 </span><span style="color:#65737e;"># <-- skip this when editing manually
</span><span style="color:#bf616a;">pnpm</span><span> run dev
</span></code></pre>
<p>Webpack now compiles Laravel Mix successfully. There are still some
complaints about missing peer dependencies:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span> WARN css-loader@5.2.7 requires a peer of webpack@^4.27.0 || ^5.0.0 but none was installed.
</span><span> WARN babel-loader@8.2.2 requires a peer of @babel/core@^7.0.0 but none was installed.
</span><span> WARN babel-loader@8.2.2 requires a peer of webpack@>=2 but none was installed.
</span><span> WARN laravel-mix > webpack-cli: @webpack-cli/serve@1.5.1 requires a peer of webpack-dev-server@* but version 4.0.0-beta.3 was installed.
</span><span> WARN laravel-mix: webpack-cli@4.7.2 requires a peer of webpack-dev-server@* but version 4.0.0-beta.3 was installed.
</span><span> WARN 4 other warnings
</span></code></pre>
<p>But overall it works without any other symptoms so far. Hopefully I will
now find some proper place to report this issue.</p>
Fighting Docker iptables on Arch2021-07-17T00:00:00+00:002021-07-18T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/fighting-docker-iptables-on-arch/<p>A strange issue, for which I could not find a meaningful explanation
anywhere, regarding running a docker-compose script and the iptables
firewall on Arch Linux. The steps to reproduce assume bare iptables, Docker
and docker-compose are available.</p>
<h3 id="step-1-start-docker">Step 1. Start Docker</h3>
<p>Start the <code>docker.service</code> via <code>systemctl</code>.</p>
<h3 id="step-2-start-iptables">Step 2. Start iptables</h3>
<p>Start the <code>iptables.service</code>, with contents shipped with the package:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#65737e;"># Empty iptables rule file
</span><span style="color:#bf616a;">*filter
</span><span style="color:#bf616a;">:INPUT</span><span> ACCEPT </span><span style="color:#b48ead;">[</span><span>0:0</span><span style="color:#b48ead;">]
</span><span style="color:#bf616a;">:FORWARD</span><span> ACCEPT </span><span style="color:#b48ead;">[</span><span>0:0</span><span style="color:#b48ead;">]
</span><span style="color:#bf616a;">:OUTPUT</span><span> ACCEPT </span><span style="color:#b48ead;">[</span><span>0:0</span><span style="color:#b48ead;">]
</span><span style="color:#bf616a;">COMMIT
</span></code></pre>
<h3 id="step-3-run-a-docker-compose-script">Step 3. Run a docker-compose script</h3>
<p>Now run a docker-compose script. I have tried at least four unrelated
ones and every single one triggered the error. Try for example
<a href="https://github.com/mastodon/mastodon/blob/main/docker-compose.yml">this</a>
one:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> docker-compose up</span><span style="color:#bf616a;"> -d
</span></code></pre>
<p>The error manifests itself in the following manner almost instantly:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>ERROR: Failed to Setup IP tables: Unable to enable DROP INCOMING rule: (iptables failed: iptables --wait -I DOCKER-ISOLATION-STAGE-1 -i br-739fd632de27 ! -d 172.18.0.0/16 -j DROP: iptables: No chain/target/match by that name.
</span><span> (exit status 1))
</span></code></pre>
<p>And the services are not started. For the record, here are the versions:</p>
<ul>
<li>iptables v1.8.7 (legacy)</li>
<li>Docker version 20.10.7, build f0df35096d</li>
<li>docker-compose docker-compose version 1.29.2, build unknown</li>
</ul>
<p>The problem also happens on multiple machines running Arch.</p>
<h2 id="a-solution">A solution</h2>
<p>There are many threads around the Internet for the above error message and
the solution is to stop iptables and restart Docker:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> systemctl stop iptables.service
</span><span style="color:#bf616a;">sudo</span><span> systemctl restart docker.service
</span></code></pre>
<p>Docker flushes iptables rules and re-creates them when restarting. With the
iptables no longer running, the docker-compose script now starts without
problems.</p>
<h2 id="why-is-this-a-problem">Why is this a problem?</h2>
<p>I have yet to find a simple-to-use firewall solution for services run
via docker-compose. Really, this is a long-standing unresolved
problem for Docker, further confirmed by the amount of people asking for a
reliable solution that works with <code>ufw</code> (Uncomplicated FireWall) in
<a href="https://github.com/docker/for-linux/issues/777">#777</a> and
<a href="https://github.com/docker/for-linux/issues/690">#690</a> among others.</p>
<p>Now since I cannot reliably work with ufw, and cannot work with bare
iptables either (no matter how archaic its ruleset system is), how can I
set up the firewall? I honestly cannot wrap my head around this.</p>
<p>Many people say they already gave up the fight against Docker and went over
to Podman for most of their needs in this area, not to mention Podman is
designed to work rootless from the ground up. Hopefully I will be able to
experiment with Podman soon, but for now I definitely cannot afford that.</p>
<h3 id="update-18-july-2021">Update 18-July-2021</h3>
<p>As
<a href="https://nolineage.com/notice/A9NK3eZIHhNx6WIXMu">user MindOfJoe correctly pointed out</a>,
enabling both services and not starting them ad-hoc should provide the
right result. Specifically the <code>docker.service</code> has to be started <em>after</em>
<code>iptables.service</code>. Inspecting the dependency graph confirms that this
problem is not really a problem and systemd takes care of the right order
at boot:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> systemd-analyze critical-chain docker.service
</span><span>
</span><span style="color:#bf616a;">docker.service</span><span> +6.440s
</span><span style="color:#bf616a;">└─network-online.target</span><span> @15.972s
</span><span> </span><span style="color:#bf616a;">└─systemd-networkd-wait-online.service</span><span> @2.156s +13.815s
</span><span> </span><span style="color:#bf616a;">└─systemd-networkd.service</span><span> @2.073s +80ms
</span><span> </span><span style="color:#bf616a;">└─network-pre.target</span><span> @2.029s
</span><mark style="background-color:#65737e30;"><span> </span><span style="color:#bf616a;">└─iptables.service</span><span> @1.359s +669ms
</span></mark><span> </span><span style="color:#bf616a;">└─basic.target</span><span> @1.352s
</span><span> </span><span style="color:#bf616a;">└─sockets.target</span><span> @1.352s
</span><span> </span><span style="color:#bf616a;">└─docker.socket</span><span> @1.348s +4ms
</span><span> </span><span style="color:#bf616a;">└─sysinit.target</span><span> @1.344s
</span><span> </span><span style="color:#bf616a;">└─systemd-update-utmp.service</span><span> @1.331s +13ms
</span><span> </span><span style="color:#bf616a;">└─systemd-tmpfiles-setup.service</span><span> @1.209s +67ms
</span><span> </span><span style="color:#bf616a;">└─local-fs.target</span><span> @1.207s
</span><span> </span><span style="color:#bf616a;">└─run-docker-netns-1d291c7c6a2b.mount</span><span> @20.223s
</span><span> </span><span style="color:#bf616a;">└─local-fs-pre.target</span><span> @583ms
</span><span> </span><span style="color:#bf616a;">└─systemd-tmpfiles-setup-dev.service</span><span> @523ms
</span><span> </span><span style="color:#bf616a;">└─kmod-static-nodes.service</span><span> @480ms +32ms
</span><span> </span><span style="color:#bf616a;">└─systemd-journald.socket</span><span> @469ms
</span><span> </span><span style="color:#bf616a;">└─system.slice</span><span> @411ms
</span><span> </span><span style="color:#bf616a;">└─-.slice</span><span> @411ms
</span></code></pre>
<p>Since the services are correctly positioned in the dependency graph,
there is no risk of a race condition where <code>docker.service</code> would be
started before <code>iptables.service</code>, flushing the Docker rules and
leading to erratic service malfunctions after some reboots. Good to know
that things like these can be verified easily if you know where to look.</p>
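<p>If you prefer the ordering to be explicit rather than implied through
<code>network-pre.target</code>, a small systemd drop-in should pin it down.
This is only a sketch; the file name is arbitrary and it assumes the stock
<code>iptables.service</code> unit name:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span># /etc/systemd/system/docker.service.d/after-iptables.conf
</span><span># created e.g. with: sudo systemctl edit docker.service
</span><span>[Unit]
</span><span>After=iptables.service
</span><span>Wants=iptables.service
</span></code></pre>
<p>After a <code>systemctl daemon-reload</code>, <code>docker.service</code> will
always be queued behind <code>iptables.service</code>.</p>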
Install Nextcloud with OnlyOffice and Postgres2021-07-16T00:00:00+00:002021-07-16T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/install-nextcloud-onlyoffice-postgres/<p>In the previous
<a href="/blog/install-nextcloud-onlyoffice-docker-compose/">short article about installing Nextcloud</a>
I did not provide much details, apart from port configuration (that might
even be needed depending on other factors). The setup I chose just worked.
However, I too found the basic SQLite database performance a little bit
lacking and decided to use a PostgreSQL database instead, as a fresh
install.</p>
<p>For PostgreSQL under docker-compose I've used the same steps described
in my
<a href="/blog/running-mastodon-with-docker-compose/">guide for installing Mastodon with docker-compose</a>.
It works there and I have not run into any drawbacks yet, so why come up
with something wildly different? Here's a condensed list of steps;
explanations are in the above link.</p>
<h3 id="step-1-download-a-repository">Step 1. Download a repository</h3>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#96b5b4;">cd</span><span> /home/user
</span><span style="color:#bf616a;">git</span><span> clone https://github.com/onlyoffice/docker-onlyoffice-nextcloud
</span><span style="color:#bf616a;">mv</span><span> docker-onlyoffice-nextcloud nextcloud
</span><span style="color:#96b5b4;">cd</span><span> nextcloud
</span></code></pre>
<p>The name of the directory will matter later, as it becomes the
docker-compose project name; I prefer just a short <code>nextcloud</code>.</p>
<h3 id="step-2-prepare-a-database-container">Step 2. Prepare a database container</h3>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> docker pull postgres:12.5-alpine
</span><span>
</span><span style="color:#bf616a;">sudo</span><span> docker run</span><span style="color:#bf616a;"> --name</span><span> postgres12 \
</span><span style="color:#bf616a;"> -v</span><span> /YOUR/NEXTCLOUD/LOCATION/postgres:/var/lib/postgresql/data \
</span><span style="color:#bf616a;"> -e</span><span> POSTGRES_PASSWORD=password</span><span style="color:#bf616a;"> --rm -d</span><span> postgres:12.5-alpine
</span><span>
</span><span style="color:#bf616a;">sudo</span><span> docker exec</span><span style="color:#bf616a;"> -it</span><span> postgres12 psql</span><span style="color:#bf616a;"> -U</span><span> postgres
</span><span>> CREATE </span><span style="color:#bf616a;">USER</span><span> nextcloud WITH PASSWORD '</span><span style="color:#a3be8c;">password</span><span>' CREATEDB;
</span><span>> exit
</span><span>
</span><span style="color:#bf616a;">sudo</span><span> docker stop postgres12
</span></code></pre>
<p>Please choose a different password in the two commands above and adjust the
location to the one where <code>docker-compose.yml</code> is located.</p>
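<p>Any method of generating that password works; as a sketch, a random
24-character alphanumeric one can be pulled from <code>/dev/urandom</code>:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span># Print a 24-character alphanumeric password for POSTGRES_PASSWORD
</span><span>head -c 4096 /dev/urandom | tr -dc 'A-Za-z0-9' | head -c 24; echo
</span></code></pre>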
<h3 id="step-3-first-start">Step 3. First start</h3>
<p>Edit the <code>docker-compose.yml</code> and add the database section near the top:</p>
<pre data-lang="yaml" style="background-color:#2b303b;color:#c0c5ce;" class="language-yaml "><code class="language-yaml" data-lang="yaml"><span style="color:#bf616a;">db</span><span>:
</span><span> </span><span style="color:#bf616a;">restart</span><span>: </span><span style="color:#a3be8c;">always
</span><span> </span><span style="color:#bf616a;">image</span><span>: </span><span style="color:#a3be8c;">postgres:12.5-alpine
</span><span> </span><span style="color:#bf616a;">shm_size</span><span>: </span><span style="color:#a3be8c;">256mb
</span><span> </span><span style="color:#bf616a;">healthcheck</span><span>:
</span><span> </span><span style="color:#bf616a;">test</span><span>: ["</span><span style="color:#a3be8c;">CMD</span><span>", "</span><span style="color:#a3be8c;">pg_isready</span><span>", "</span><span style="color:#a3be8c;">-U</span><span>", "</span><span style="color:#a3be8c;">postgres</span><span>"]
</span><span> </span><span style="color:#bf616a;">volumes</span><span>:
</span><span> - </span><span style="color:#a3be8c;">./postgres:/var/lib/postgresql/data
</span></code></pre>
<p>The volume path above has to point to the same location as in the
<code>docker run</code> command above, although a relative path like the one
here works just as well. Also, make the <code>app</code>
service depend on the <code>db</code> service:</p>
<pre data-lang="yaml" style="background-color:#2b303b;color:#c0c5ce;" class="language-yaml "><code class="language-yaml" data-lang="yaml"><span style="color:#bf616a;">app</span><span>:
</span><span>  </span><span style="color:#65737e;"># container_name: app-server
</span><span> </span><span style="color:#65737e;"># ...
</span><span> </span><span style="color:#bf616a;">depends_on</span><span>:
</span><span> - </span><span style="color:#a3be8c;">db
</span></code></pre>
<p>With the <code>docker-compose.yml</code> ready, start the script:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> docker-compose up</span><span style="color:#bf616a;"> -d
</span></code></pre>
<h3 id="step-4-setup-a-reverse-proxy">Step 4. Setup a reverse proxy</h3>
<p>Set up a reverse proxy however you see fit. Again, inspiration can be
found in the posts under the tags <a href="/tags/nginx">Nginx</a> and especially
<a href="/tags/acme">acme.sh</a>. The important bit is the exposed HTTP port 80, on
line 37 in the source code
<a href="https://github.com/peterbabic/sources-peterbabic.dev/blob/bb8183398bee155e9827effa37e4f1a56fb62acc/install-nextcloud-onlyoffice-postgres/docker-compose.yml#L37">example</a>.
This is what I was surprised about in the post
<a href="/blog/reverse-proxy-behind-reverse-proxy/">Reverse proxy behind a reverse proxy</a>.
Here, port 8081 is where Nextcloud is listening.</p>
<h3 id="step-5-choose-a-postgresql-database">Step 5. Choose a PostgreSQL database</h3>
<p>Access the site, fill in the admin username and password. Do not change the
Data folder path. Then choose a PostgreSQL database and fill in the following:</p>
<p><img src="https://peterbabic.dev/blog/install-nextcloud-onlyoffice-postgres/nextcloud-setup-database.png" alt="A Nextcloud first login interface with the option to choose a database to be used." /></p>
<p>Here is the same input data in a table, in case the picture is unreadable:</p>
<table><thead><tr><th>Field</th><th>Value</th></tr></thead><tbody>
<tr><td>User</td><td>nextcloud</td></tr>
<tr><td>Password</td><td>password</td></tr>
<tr><td>Database name</td><td>nextcloud</td></tr>
<tr><td>Host name</td><td>nextcloud_db_1</td></tr>
</tbody></table>
<p>Then click "Finish" at the bottom.</p>
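<p>The <code>nextcloud_db_1</code> host name is where the directory name from
Step 1 comes back into play: docker-compose v1 names containers
<code>&lt;project&gt;_&lt;service&gt;_&lt;index&gt;</code>, with the project
defaulting to the directory name. A tiny illustration of the convention:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>project="nextcloud"   # directory name from Step 1
</span><span>service="db"          # service key added to docker-compose.yml
</span><span>echo "${project}_${service}_1"   # prints: nextcloud_db_1
</span></code></pre>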
<h3 id="step-6-configure-onlyoffice">Step 6. Configure OnlyOffice</h3>
<p>The last step is to set up OnlyOffice, which to me
<a href="/blog/onlyoffice-proved-to-be-useful/">already proved to be a very useful tool</a>
overall. Run the <code>set_configuration.sh</code> script from the repository:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> bash set_configuration.sh
</span></code></pre>
<p>Now access the web interface through the reverse proxy.</p>
<h2 id="upgrading">Upgrading</h2>
<p>I was able to upgrade the stack by simply doing the following:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">docker-compose</span><span> pull
</span></code></pre>
<p>And then restart the composition:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">docker-compose</span><span> down && </span><span style="color:#bf616a;">docker-compose</span><span> up</span><span style="color:#bf616a;"> -d
</span></code></pre>
<p>Enjoy!</p>
Another way to combine local repositories2021-07-15T00:00:00+00:002021-07-15T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/another-way-to-combine-local-repositories/<p>A quick and dirty way I usually combine private repositories is to use the
<code>--rebase</code> option for <code>git pull</code>. I have written about such option already
in a post about
<a href="/blog/keep-git-fork-sync/">keeping git fork in sync with the upstream</a>.
Here's how to do it:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> remote add</span><span style="color:#bf616a;"> --fetch</span><span> other ../other-repository
</span><span style="color:#bf616a;">git</span><span> pull</span><span style="color:#bf616a;"> --rebase</span><span> other main
</span></code></pre>
<p>This is especially good if there are little to no conflicts to be resolved.
Another advantage is that it puts commits from that other repository at the
bottom of the history, so your recent work is in the same place, at least
visually in a git log. The remote can also be safely removed now:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> remote remove other
</span></code></pre>
<p>The obvious disadvantage of combining repositories this way is that it
rewrites git history. Even though the latest commits look unaffected, their
hashes have changed. This means that using this method for anything other
than local private repositories is discouraged. Consider it simply a fast,
unobtrusive shortcut for your local work.</p>
<h2 id="preserving-a-chronological-commit-order">Preserving a chronological commit order</h2>
<p>Combining two unrelated repositories into one while maintaining the commit
history in a chronological order is something I tried to search up multiple
times already, yet the solutions offered are usually these two:</p>
<ol>
<li>Merge on top and then cherry-pick commits</li>
<li>Merge on top and then rebase interactively</li>
</ol>
<p>However, both solutions require a lot of manual work and are error
prone. There are obviously some better or worse
<a href="https://stackoverflow.com/a/34861819/1972509">scripts that do this automatically</a>
and they can be quite effective, here's a summary of the link for the
record:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> init
</span><span style="color:#bf616a;">git</span><span> remote add</span><span style="color:#bf616a;"> --fetch</span><span> repoA ../repoA
</span><span style="color:#bf616a;">git</span><span> remote add</span><span style="color:#bf616a;"> --fetch</span><span> repoB ../repoB
</span><span style="color:#65737e;"># The magic
</span><span style="color:#bf616a;">git</span><span> log</span><span style="color:#bf616a;"> --all --oneline --format</span><span>="</span><span style="color:#a3be8c;">%at %H</span><span>" | \
</span><span> </span><span style="color:#bf616a;">sort </span><span>| </span><span style="color:#bf616a;">cut -c12- </span><span>| </span><span style="color:#bf616a;">xargs -I </span><span>{} sh</span><span style="color:#bf616a;"> -c </span><span>\
</span><span> '</span><span style="color:#a3be8c;">git format-patch -1 {} --stdout | git am --committer-date-is-author-date</span><span>'
</span></code></pre>
<p>This worked for me as well. Use with care, as it too rewrites git
history!</p>
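<p>The key to the magic line is the <code>%at %H</code> format: each commit is
printed as its author timestamp (seconds since epoch, currently 10 digits)
followed by its hash, so a plain <code>sort</code> orders the commits
chronologically and <code>cut -c12-</code> strips the timestamp prefix plus the
space again. A toy illustration with fake data:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span># Two fake "timestamp hash" lines, out of order
</span><span>printf '%s\n' "1626300000 bbb" "1626200000 aaa" | sort | cut -c12-
</span><span># prints: aaa
</span><span>#         bbb
</span></code></pre>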
Enable query stats in Mastodon with postgres2021-07-14T00:00:00+00:002021-07-14T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/enable-query-stats-mastodon-postgres/<p>Clicking pghero under the Mastodon Administration menu shows the
<code>Query stats must be enabled for slow queries</code> orange warning in an
interface that looks like this:</p>
<p><img src="https://peterbabic.dev/blog/enable-query-stats-mastodon-postgres/query-stats-postgres-button.png" alt="A screenshot of the pghero interface notifying the user that Query stats must be enabled for slow queries and that Query stats are available but not enabled with the Enable button beneath the message." /></p>
<p>After clicking the blue Enable button, instead of a success, the error
<code> The database user does not have permission to enable query stats</code> is
shown in the top row of the interface as a red ribbon:</p>
<p><img src="https://peterbabic.dev/blog/enable-query-stats-mastodon-postgres/query-stats-no-permissions.png" alt="The database user does not have permission to enable query stats" /></p>
<p>A small hint toward resolving this error could be found in the
<a href="https://github.com/ankane/pghero/issues/7#issuecomment-51527690">pghero#7</a>
which mentions running a SQL statement under user <code>postgres</code> like this:</p>
<pre data-lang="sql" style="background-color:#2b303b;color:#c0c5ce;" class="language-sql "><code class="language-sql" data-lang="sql"><span>CREATE extension pg_stat_statements;
</span></code></pre>
<p>To run such a command, establish access to the postgres database first.
If running
<a href="/blog/running-mastodon-with-docker-compose/">Mastodon under docker-compose</a>
navigate to the directory where <code>docker-compose.yml</code> is located and run the
following:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> docker exec</span><span style="color:#bf616a;"> -it</span><span> mastodon_db_1 psql</span><span style="color:#bf616a;"> -h</span><span> localhost</span><span style="color:#bf616a;"> -U</span><span> postgres
</span></code></pre>
<p>When in <code>psql</code>, run these commands:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>postgres=# \c mastodon;
</span><span>You are now connected to database "mastodon" as user "postgres".
</span><span>mastodon=# CREATE extension pg_stat_statements;
</span><span>CREATE EXTENSION
</span><span>mastodon=# exit
</span></code></pre>
<p>Refreshing the pghero interface now outputs a verbose message,
<code>Make Query Stats available by adding the following lines to postgresql.conf</code>,
after which the server must be restarted for the changes to take effect:</p>
<p><img src="https://peterbabic.dev/blog/enable-query-stats-mastodon-postgres/query-stats-postgres-conf.png" alt="A screenshot of the pghero interface displaying a hint about configuring postgres to enable query statistics." /></p>
<p>Still assuming the above
<a href="/blog/running-mastodon-with-docker-compose/">guide</a>, the <code>postgres/</code>
folder is in the same directory as <code>docker-compose.yml</code>. Edit the file
<code>postgres/postgresql.conf</code> there and add/uncomment the exact same lines
from the screenshot above:</p>
<pre data-lang="conf" style="background-color:#2b303b;color:#c0c5ce;" class="language-conf "><code class="language-conf" data-lang="conf"><span style="color:#bf616a;">shared_preload_libraries </span><span>= </span><span style="color:#a3be8c;">'pg_stat_statements'
</span><span style="color:#bf616a;">pg_stat_statements</span><span>.track = all
</span></code></pre>
<p>I have put these under the sections CLIENT CONNECTION DEFAULTS and
STATISTICS respectively. The <code>shared_preload_libraries</code> line has a
comment there stating <code># (change requires restart)</code>, so do the following:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> docker-compose down
</span><span style="color:#bf616a;">sudo</span><span> docker-compose up</span><span style="color:#bf616a;"> -d
</span></code></pre>
<p>Refreshing the pghero interface now shows a green success line stating
<code>No slow queries</code> instead:</p>
<p><img src="https://peterbabic.dev/blog/enable-query-stats-mastodon-postgres/query-stats-ok.png" alt="No slow queries text in green background" /></p>
<p>Although I am not sure at this point what this configuration is actually
good for, I have found out how to get rid of the warning, and here's the
guide. Enjoy!</p>
Running Mastodon with docker-compose2021-07-13T00:00:00+00:002021-07-13T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/running-mastodon-with-docker-compose/<p>A minimal set of commands I had to do to successfully run Mastodon via
docker-compose on the VPS. Many OS specific configurations are omitted, as
I decided to use Arch on this VPS as well, which is not what most people
choose for their server environment, at least not when
<a href="https://www.reddit.com/r/archlinux/comments/hezb1c/arch_on_the_server/fvufefz">caveats</a>
of such choice play a major role.</p>
<h2 id="getting-started">Getting started</h2>
<p>Clone the Mastodon repository. It contains a <code>docker-compose.yml</code> file as
well as other files directly or indirectly referenced by it (for example
<code>package.json</code> or <code>yarn.lock</code>):</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> clone https://github.com/mastodon/mastodon.git
</span><span style="color:#96b5b4;">cd</span><span> mastodon
</span></code></pre>
<p>Now here's what I occasionally do to help me keep track of the changes to
the configuration files easily. Make a branch on a given tag, which at the
time of writing was <code>v3.4.1</code>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> checkout v3.4.1</span><span style="color:#bf616a;"> -b</span><span> v3.4.1-branch
</span></code></pre>
<p>Without creating a branch, the HEAD would be in a detached state
(pointing at a tagged commit, not a branch). It would still track changes,
but they would not be easily accessible after another checkout.</p>
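<p>A quick throwaway demonstration of the difference, using a temporary
repository so nothing touches the Mastodon clone:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>cd "$(mktemp -d)"
</span><span>git init -q .
</span><span>git -c user.email=me@example.com -c user.name=me \
</span><span>    commit -q --allow-empty -m "init"
</span><span>git tag v1.0
</span><span>git checkout -q v1.0 -b v1.0-branch
</span><span>git symbolic-ref --short HEAD   # prints: v1.0-branch (not detached)
</span></code></pre>
<p>Without the <code>-b v1.0-branch</code> part, the last command would fail,
because a detached HEAD is not a symbolic reference to any branch.</p>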
<p><strong>Tip:</strong> to
<a href="https://devconnected.com/how-to-checkout-git-tags/">get the latest available tag easily</a>,
you can use <code>git rev-list</code> as follows:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> fetch</span><span style="color:#bf616a;"> --all --tags
</span><span style="color:#bf616a;">git</span><span> describe</span><span style="color:#bf616a;"> --tags </span><span>`</span><span style="color:#bf616a;">git</span><span> rev-list</span><span style="color:#bf616a;"> --tags --max-count</span><span>=1`
</span></code></pre>
<p>Also consider changing the mastodon image to some tagged version. In the
section <code>web</code> replace <code>mastodon:latest</code> image with the tagged one:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>image: tootsuite/mastodon:v3.4.1
</span></code></pre>
<p>It is useful for referencing and searching for issues, should some arise,
at the very least. Even more important to me is that it requires a manual
intervention to bump a version number, so things won't suddenly change when
the docker-compose script gets restarted without you understanding why. It
is overall a good practice to avoid unnecessary surprises.</p>
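<p>The pin itself is a one-line change; as a sketch, assuming
<code>docker-compose.yml</code> sits in the current directory and the tag
matches the one checked out above:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span># Pin the tootsuite/mastodon image reference to the checked-out tag
</span><span>sed -i 's|image: tootsuite/mastodon.*|image: tootsuite/mastodon:v3.4.1|' \
</span><span>    docker-compose.yml
</span></code></pre>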
<h2 id="postgres-database">Postgres database</h2>
<p>The referenced version of postgres in the docker-compose file is
<code>9.6-alpine</code>. This might work, but I tested with <code>12.5-alpine</code> instead and
found no problems so far, so I changed to this version under the <code>db</code>
section:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>image: postgres:12.5-alpine
</span></code></pre>
<p>Start the container to set up the user, assuming the path to the
docker-compose file is <code>/home/mastodon/mastodon/docker-compose.yml</code>. If
not, modify the path so the <code>postgres</code> volume folder matches it. Consider
setting a custom password:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> docker run</span><span style="color:#bf616a;"> --name</span><span> postgres12</span><span style="color:#bf616a;"> -v</span><span> /home/mastodon/mastodon/postgres:/var/lib/postgresql/data</span><span style="color:#bf616a;"> -e</span><span> POSTGRES_PASSWORD=password</span><span style="color:#bf616a;"> --rm -d</span><span> postgres:12.5-alpine
</span></code></pre>
<p>Create a mastodon database user, use the password from above:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> docker exec</span><span style="color:#bf616a;"> -it</span><span> postgres12 psql</span><span style="color:#bf616a;"> -U</span><span> postgres
</span><span>> CREATE </span><span style="color:#bf616a;">USER</span><span> mastodon WITH PASSWORD '</span><span style="color:#a3be8c;">password</span><span>' CREATEDB;
</span><span>> exit
</span><span style="color:#bf616a;">sudo</span><span> docker stop postgres12
</span></code></pre>
<p>This makes database setup complete.</p>
<h2 id="set-up-mastodon">Set up Mastodon</h2>
<p>This part is a little bit tricky, as it took me the most time to get
right:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> docker-compose run</span><span style="color:#bf616a;"> --rm</span><span> web bundle exec rake mastodon:setup
</span></code></pre>
<p>Fill in the domain name you intend to run the instance on. This one is
probably hard to change once the instance is running. Answer the next
questions according to the table below:</p>
<table><thead><tr><th>Question</th><th>Type in</th></tr></thead><tbody>
<tr><td>Do you want to enable single user mode?</td><td><strong>No</strong></td></tr>
<tr><td>Are you using Docker to run Mastodon?</td><td><strong>Yes</strong></td></tr>
<tr><td>PostgreSQL host:</td><td><strong>mastodon_db_1</strong></td></tr>
<tr><td>PostgreSQL port:</td><td><strong>5432</strong></td></tr>
<tr><td>Name of PostgreSQL database:</td><td><strong>mastodon</strong></td></tr>
<tr><td>Name of PostgreSQL user:</td><td><strong>mastodon</strong></td></tr>
<tr><td>Password of PostgreSQL user:</td><td><strong>password</strong></td></tr>
</tbody></table>
<p>The above part should look like this in the terminal:</p>
<p><img src="https://peterbabic.dev/blog/running-mastodon-with-docker-compose/mastodon_setup_1.png" alt="Single user mode? N, Using Docker to run Mastodon? Y, PostgreSQL host: mastodon_db_1, PostgreSQL port: 5432, Name of PostgreSQL database: mastodon, Name of PostgreSQL user: mastodon, Password of PostgreSQL user: password" /></p>
<p>The setup then continues with email capabilities configuration questions. I
am omitting details for this part, as my email
<a href="/blog/setting-up-smtp-in-mastodon/">provider required different SMTP settings</a>,
some of which were not offered via this setup wizard. I have not found a
reliable way to send a test email from the UI or the console later, so it
might be worth trying here to get the emails sent out. Setting up cloud
storage or email capabilities can also be safely skipped now and configured
later; if you wish to do so, use these options:</p>
<table><thead><tr><th>Question</th><th>Type in</th></tr></thead><tbody>
<tr><td>Do you want to store uploaded files on the cloud?</td><td><strong>No</strong></td></tr>
<tr><td>Do you want to send e-mails from localhost?</td><td><strong>Yes</strong></td></tr>
<tr><td>E-mail address to send e-mails "from":</td><td>Enter</td></tr>
<tr><td>Send a test e-mail with this configuration right now?</td><td><strong>No</strong></td></tr>
<tr><td>Save configuration?</td><td><strong>Yes</strong></td></tr>
</tbody></table>
<p>Your terminal should resemble this:</p>
<p><img src="https://peterbabic.dev/blog/running-mastodon-with-docker-compose/mastodon_setup_2.png" alt="Store uploaded files in cloud: No, Send e-mails from localhost: Yes, Send e-mail 'from': Enter, Send a test email now: No, Save configuration: Yes" /></p>
<p>The terminal then outputs the configuration, including secret keys. Copy
and paste it into the <code>.env.production</code> file in the cloned repository already
containing <code>postgres/</code> directory and <code>docker-compose.yml</code> file, among
others.</p>
<p>The last part is to migrate the database and create an admin account.
Answer <strong>Yes</strong> to both and proceed. The Mastodon instance admin user
password will be generated and displayed, make sure to not lose it! If you
lose it before logging in successfully, one way to obtain a new one is to
delete the <code>postgres/</code> folder and start over from the
<a href="https://peterbabic.dev/blog/running-mastodon-with-docker-compose/#postgres-database">Postgres database</a> step above.</p>
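<p>Alternatively, instead of wiping the database, Mastodon's
<code>tootctl</code> should be able to reset the password in place. A sketch,
assuming the admin account is named <code>admin</code> (substitute your own
username):</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>sudo docker-compose run --rm web \
</span><span>    bin/tootctl accounts modify admin --reset-password
</span></code></pre>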
<h2 id="full-text-search">Full-text search</h2>
<p>This step is optional, although it is a nice addition to have full-text
search, provided via ElasticSearch, available. Edit the <code>docker-compose.yml</code>
and uncomment the two <code>es</code>-related blocks:</p>
<pre data-lang="yaml" style="background-color:#2b303b;color:#c0c5ce;" class="language-yaml "><code class="language-yaml" data-lang="yaml"><span style="color:#65737e;"># es:
</span><span style="color:#65737e;"># restart: always
</span><span style="color:#65737e;"># image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.10
</span><span style="color:#65737e;"># environment:
</span><span style="color:#65737e;"># - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
</span><span style="color:#65737e;"># - "cluster.name=es-mastodon"
</span><span style="color:#65737e;"># - "discovery.type=single-node"
</span><span style="color:#65737e;"># - "bootstrap.memory_lock=true"
</span><span style="color:#65737e;"># networks:
</span><span style="color:#65737e;"># - internal_network
</span><span style="color:#65737e;"># healthcheck:
</span><span style="color:#65737e;"># test:
</span><span style="color:#65737e;"># [
</span><span style="color:#65737e;"># "CMD-SHELL",
</span><span style="color:#65737e;"># "curl --silent --fail localhost:9200/_cluster/health || exit 1",
</span><span style="color:#65737e;"># ]
</span><span style="color:#65737e;"># volumes:
</span><span style="color:#65737e;"># - ./elasticsearch:/usr/share/elasticsearch/data
</span><span style="color:#65737e;"># ulimits:
</span><span style="color:#65737e;"># memlock:
</span><span style="color:#65737e;"># soft: -1
</span><span style="color:#65737e;"># hard: -1
</span><span>
</span><span> </span><span style="color:#bf616a;">web</span><span>:
</span><span> </span><span style="color:#bf616a;">build</span><span>: </span><span style="color:#d08770;">.
</span><span> </span><span style="color:#bf616a;">image</span><span>: </span><span style="color:#a3be8c;">tootsuite/mastodon
</span><span> </span><span style="color:#bf616a;">restart</span><span>: </span><span style="color:#a3be8c;">always
</span><span> </span><span style="color:#bf616a;">env_file</span><span>: </span><span style="color:#a3be8c;">.env.production
</span><span> </span><span style="color:#bf616a;">command</span><span>: </span><span style="color:#a3be8c;">bash -c "rm -f /mastodon/tmp/pids/server.pid; bundle exec rails s -p 3000"
</span><span> </span><span style="color:#bf616a;">networks</span><span>:
</span><span> - </span><span style="color:#a3be8c;">external_network
</span><span> - </span><span style="color:#a3be8c;">internal_network
</span><span> </span><span style="color:#bf616a;">healthcheck</span><span>:
</span><span> </span><span style="color:#bf616a;">test</span><span>: ["</span><span style="color:#a3be8c;">CMD-SHELL</span><span>", "</span><span style="color:#a3be8c;">wget -q --spider --proxy=off localhost:3000/health || exit 1</span><span>"]
</span><span> </span><span style="color:#bf616a;">ports</span><span>:
</span><mark style="background-color:#65737e30;"><span> - "</span><span style="color:#a3be8c;">127.0.0.1:3000:3000</span><span>"
</span></mark><span> </span><span style="color:#bf616a;">depends_on</span><span>:
</span><span> - </span><span style="color:#a3be8c;">db
</span><span> - </span><span style="color:#a3be8c;">redis
</span><span style="color:#65737e;"># - es
</span><span> </span><span style="color:#bf616a;">volumes</span><span>:
</span><span> - </span><span style="color:#a3be8c;">./public/system:/mastodon/public/system
</span></code></pre>
<p>Edit the <code>.env.production</code> file and append the following:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">ES_ENABLED</span><span>=</span><span style="color:#a3be8c;">true
</span><span style="color:#bf616a;">ES_HOST</span><span>=</span><span style="color:#a3be8c;">mastodon_es_1
</span><span style="color:#bf616a;">ES_PORT</span><span>=</span><span style="color:#a3be8c;">9200
</span></code></pre>
<p>The instance should now be ready to start.</p>
<h2 id="first-run">First run</h2>
<p>Start the whole stack; this can take a while:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> docker-compose up</span><span style="color:#bf616a;"> -d
</span><span style="color:#bf616a;">sudo</span><span> docker-compose down
</span></code></pre>
<p>This generates more files and folders. Consider setting the permissions
on them before starting the instance again:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> chown</span><span style="color:#bf616a;"> -R</span><span> 70:70 ./postgres
</span><span style="color:#bf616a;">sudo</span><span> chown</span><span style="color:#bf616a;"> -R</span><span> 991:991 ./public
</span><span style="color:#bf616a;">sudo</span><span> chown</span><span style="color:#bf616a;"> -R</span><span> 1000:1000 ./elasticsearch
</span><span style="color:#bf616a;">sudo</span><span> docker-compose up</span><span style="color:#bf616a;"> -d
</span></code></pre>
<p>Now, without any modifications to <code>docker-compose.yml</code>, the instance should
be available on port <code>3000</code>. Configure the reverse proxy of your
choice to terminate the SSL/TLS and to proxy the domain name inserted into
the wizard earlier to this port. You can also find some inspiration about
how to do so in my previous articles under tags <a href="/tags/nginx">Nginx</a> and
especially <a href="/tags/acme">acme.sh</a>, should you choose to use these two to
manage this task and the certificates for you.</p>
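<p>A minimal sketch of such a vhost, assuming Nginx; the domain and the
certificate paths are placeholders, and the streaming API on its separate
port is left out for brevity:</p>

```nginx
# Sketch: terminate TLS and proxy to the Mastodon web container on port 3000.
# mastodon.example.com and the certificate paths are placeholders.
server {
    listen 443 ssl http2;
    server_name mastodon.example.com;

    ssl_certificate     /etc/ssl/mastodon/fullchain.pem;
    ssl_certificate_key /etc/ssl/mastodon/key.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        # Allow WebSocket upgrades
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```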
<p>To access the web user interface, insert the admin user name and the
password generated earlier, and you are ready to have fun in the fediverse!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://www.linode.com/docs/guides/install-mastodon-on-ubuntu-2004/">https://www.linode.com/docs/guides/install-mastodon-on-ubuntu-2004/</a></li>
<li><a href="https://vdna.be/site/index.php/2020/11/hosting-your-own-mastodon-instance-via-docker-compose/">https://vdna.be/site/index.php/2020/11/hosting-your-own-mastodon-instance-via-docker-compose/</a></li>
<li><a href="https://blog.lumia.pw/2021/04/26/Install%20mastodon%20with%20Docker">https://blog.lumia.pw/2021/04/26/Install%20mastodon%20with%20Docker</a></li>
</ul>
Setting up SMTP in Mastodon2021-07-12T00:00:00+00:002021-07-12T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/setting-up-smtp-in-mastodon/<p>Setting up a Mastodon instance was not so hard. I mean, it took some tries
to finally get it right, but I could generally understand what was happening
and why there was a problem at any given point during the process.</p>
<p>However, configuring that instance to send emails was a frustrating
experience for me, mainly due to the fact that I could not find a reliable
source of error messages. In Mastodon, <code>v3.4.1</code> at the time of writing, the
emails are pushed into <code>sidekiq</code> queues. Its interface is quite polished,
but the error messages did not show up for me, no matter how hard I clicked
around. Having no previous experience with <code>sidekiq</code>, I decided to look
elsewhere.</p>
<h2 id="looking-for-a-solution">Looking for a solution</h2>
<p>About 30 pages of search results showed different SMTP configurations, but
not a single one would enable sending emails from the Mastodon instance to my
inbox. Looking into <code>sidekiq</code>, the failure to send emails manifested
in precisely two scenarios, neither particularly helpful:</p>
<ol>
<li>The <strong>Processed</strong> jobs counter incremented, as if the email had been sent</li>
<li>The job oscillated between <strong>Busy</strong> and <strong>Retries</strong>, showing there
was a problem sending the email</li>
</ol>
<p>Again, no emails were sent whatsoever. By the time I was becoming truly
desperate, I came up with this combination of
<a href="https://docs.joinmastodon.org/admin/config/">configuration options</a>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">SMTP_SERVER</span><span>=</span><span style="color:#a3be8c;">smtp.example.com
</span><span style="color:#bf616a;">SMTP_PORT</span><span>=</span><span style="color:#a3be8c;">465
</span><span style="color:#bf616a;">SMTP_LOGIN</span><span>=</span><span style="color:#a3be8c;">mastodon@peterbabic.dev
</span><span style="color:#bf616a;">SMTP_PASSWORD</span><span>=</span><span style="color:#a3be8c;">very-strong-passphrase-here
</span><span style="color:#bf616a;">SMTP_FROM_ADDRESS</span><span>=</span><span style="color:#a3be8c;">mastodon@peterbabic.dev
</span><span style="color:#bf616a;">SMTP_SSL</span><span>=</span><span style="color:#a3be8c;">true
</span><span style="color:#bf616a;">SMTP_ENABLE_STARTTLS_AUTO</span><span>=</span><span style="color:#a3be8c;">false
</span><span style="color:#bf616a;">SMTP_AUTH_METHOD</span><span>=</span><span style="color:#a3be8c;">plain
</span><span style="color:#bf616a;">SMTP_OPENSSL_VERIFY_MODE</span><span>=</span><span style="color:#a3be8c;">none
</span><span style="color:#bf616a;">SMTP_DELIVERY_METHOD</span><span>=</span><span style="color:#a3be8c;">smtp
</span></code></pre>
<p>This particular configuration should work with other mail servers that
send emails over port 465 using SSL. Most mail servers I am used to work
exactly like this, so I am not entirely sure why I could not find this
configuration anywhere else; maybe it is a localized problem.</p>
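<p>For completeness, if a provider only offers the submission port 587 with
STARTTLS instead of implicit SSL on port 465, the equivalent settings
would presumably look like this (untested sketch, same server placeholder):</p>

```bash
SMTP_SERVER=smtp.example.com
SMTP_PORT=587
SMTP_SSL=false
SMTP_ENABLE_STARTTLS_AUTO=true
```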
Install PHP7 with composer on Arch2021-07-11T00:00:00+00:002021-07-11T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/install-php7-with-composer-on-arch/<p>After adding support for PHP 8.0, Arch news reported back in January that they are
<a href="https://archlinux.org/news/php-80-and-php-7-legacy-packages-are-available/">keeping legacy PHP7 packages</a>
available. I had been quite out of the PHP scene for a while, but
recently, I had to test something.</p>
<p>The test obviously required the
<a href="https://archlinux.org/packages/extra/x86_64/php/">php</a> core package, now
at version 8.0, and the second hard requirement was
<a href="https://getcomposer.org/">composer</a>, PHP's package manager.</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> php composer
</span></code></pre>
<p>Trying to run <code>composer install</code> in the software's root folder showed the
following error:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Your lock file does not contain a compatible set of packages. Please run composer update.
</span><span>
</span><span> Problem 1
</span><span> - Root composer.json requires php ^7.3.0 but your php version (8.0.7) does not satisfy that requirement.
</span></code></pre>
<p>The error means that what I was testing was written for PHP7, but I
already had an incompatible PHP8 installed.</p>
<h2 id="php7-legacy-package">PHP7 legacy package</h2>
<p>I thought, no problem. Just install the legacy
<a href="https://archlinux.org/packages/extra/x86_64/php7/">php7</a> core package
mentioned in the news post and everything will be alright.</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> php7
</span></code></pre>
<p>I did not read the news post thoroughly, so I was expecting that pacman
would offer to remove the <code>php</code> package when installing the <code>php7</code>
package, assuming they would be in conflict. This did not happen. The
<code>php7</code> package installed gracefully, living happily alongside the base
<code>php</code> package. Still not reading the post, I started poking around:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pkgfile --list --binaries</span><span> php7
</span></code></pre>
<p>Yeah, it is obvious now why there were no conflicting files, specifically
the <code>/usr/bin/php</code> binary:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>extra/php7 /usr/bin/phar7
</span><span>extra/php7 /usr/bin/phar7.phar
</span><span>extra/php7 /usr/bin/php-config7
</span><span>extra/php7 /usr/bin/php7
</span><span>extra/php7 /usr/bin/phpize7
</span></code></pre>
<p>The packages are built in such a way, they do not collide with each other.
The post even states that specifically:</p>
<blockquote>
<p>PHP 7 binaries and configuration have the "7" suffix:</p>
<ul>
<li>/usr/bin/php -> /usr/bin/php7</li>
<li>/etc/php -> /etc/php7</li>
<li>...</li>
</ul>
</blockquote>
<p>What to do now?</p>
<h2 id="solving-file-conflicts">Solving file conflicts</h2>
<p>The first obvious option was to introduce a symlink, as I was not sure if I
could force all the software I was trying, and knew nothing about, to use the
<code>php7</code> binary instead of the plain <code>php</code> it surely expected.</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> ln</span><span style="color:#bf616a;"> -s</span><span> /usr/bin/php7 /usr/bin/php
</span></code></pre>
<p>Not so fast, or in a terminal's own words:</p>
<p><code>ln: failed to create symbolic link '/usr/bin/php': File exists</code></p>
<p>So there were a few solutions that came to mind at that moment:</p>
<ol>
<li>Keep all packages, remove just <code>/usr/bin/php</code> and create the symlink</li>
<li>Remove the <code>php</code> package and then create the symlink</li>
<li>Do some <code>PATH</code>
<a href="https://askubuntu.com/a/406281/350681">environment variable magic</a></li>
</ol>
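<p>For completeness, the third option could be sketched like this, shadowing
<code>php</code> for the current shell session only; the shim directory name is
arbitrary:</p>

```shell
# Put a php -> php7 symlink into a private directory and prepend that
# directory to PATH, so `php` resolves to PHP7 without touching /usr/bin.
mkdir -p "$HOME/.local/php7-shim"
ln -sf /usr/bin/php7 "$HOME/.local/php7-shim/php"
export PATH="$HOME/.local/php7-shim:$PATH"
```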
<p>I did not consider the third option too viable for a short test that might
even involve different system users. Making this work by changing <code>PATH</code>
could also lead to some hard-to-explain errors, and I had to make it run
first and experiment later, so I considered symlinks.</p>
<p>Even though I
<a href="/blog/how-not-create-node-executable-arm/#running-a-pre-compiled-nodejs-arm-x64-executable">do not like symlinking around the system files</a>,
I still did not find an absolutely best practice that would work out of the
box in every possible scenario, so the second option seemed like a lesser
of the two evils.</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -Rnc</span><span> php
</span></code></pre>
<p>No, this takes composer down with it as well.</p>
<h2 id="problematic-composer-dependency">Problematic composer dependency</h2>
<p>Trying to install <code>composer</code> back brings <code>php</code> in as a dependency, so
these two are unlikely to part ways. There are options, however:</p>
<ul>
<li>Removing the dependency from the composer's PKGBUILD</li>
<li>Install composer without dependencies</li>
</ul>
<p>Let's explore both.</p>
<h3 id="removing-the-dependency-from-the-pkgbuild">Removing the dependency from the PKGBUILD</h3>
<p>The <code>php</code> dependency is mentioned with keyword <code>depends=</code> in the
<a href="https://github.com/archlinux/svntogit-packages/blob/9d00df108ad949b40d4a9e247d0a379b5a46e48a/trunk/PKGBUILD#L10">PKGBUILD</a>.
We have to remove it from there, rebuild the package and install
everything. The full command could look like this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> php7
</span><span style="color:#bf616a;">sudo</span><span> ln</span><span style="color:#bf616a;"> -s</span><span> /usr/bin/php7 /usr/bin/php
</span><span style="color:#bf616a;">yay -G</span><span> composer </span><span style="color:#65737e;"># download the PKGBUILD file only
</span><span style="color:#96b5b4;">cd</span><span> composer
</span><span style="color:#bf616a;">sed </span><span>"</span><span style="color:#a3be8c;">10s/'php'//</span><span>"</span><span style="color:#bf616a;"> -i</span><span> PKGBUILD
</span><span style="color:#bf616a;">makepkg -sri
</span><span style="color:#bf616a;">composer --version
</span></code></pre>
<p>Not entirely sure what happens during the upgrade, though. Looks very
messy, but works.</p>
<h3 id="installing-composer-without-dependencies">Installing composer without dependencies</h3>
<p>Pacman itself offers another option, but its use is strongly discouraged, so
use it only if you accept the risk that your system might break:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -Sdd</span><span> package_name
</span></code></pre>
<p>Using the <code>-dd</code> parameter while installing a package skips installing its
dependencies. The full set of commands would then look like this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> php7
</span><span style="color:#bf616a;">sudo</span><span> ln</span><span style="color:#bf616a;"> -s</span><span> /usr/bin/php7 /usr/bin/php
</span><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -Sdd</span><span> composer
</span><span style="color:#bf616a;">composer --version
</span></code></pre>
<p>I have yet to fully understand the risks involved in solving the
dependency problem like this, so use with caution.</p>
<h2 id="conclusion">Conclusion</h2>
<p>After installing the <code>php7</code> package, symlinking the binary, and then
installing <code>composer</code> without the <code>php</code> dependency (which would have
automatically pulled in version 8.0), <code>composer install</code> in the software
relying on PHP7 now works without complaints. I have yet to understand the
long-term consequences of these actions, but so far, so good. I would
definitely love to see how different people solve this.</p>
Transfer files between servers using rrsync2021-07-10T00:00:00+00:002021-07-10T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/transfer-files-between-servers-using-rrsync/<p>You might be thinking there is a typo in the <code>rrsync</code> but it is actually a
legitimate command name, short for "restricted rsync". It
is usually distributed alongside <code>rsync</code> via the package manager. Let's find
out where it is located on Arch Linux:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> pacman</span><span style="color:#bf616a;"> -Fy </span><span>&& </span><span style="color:#bf616a;">pacman -F</span><span> rrsync
</span><span style="color:#bf616a;">extra/rsync</span><span> 3.2.3-3 </span><span style="color:#b48ead;">[</span><span>installed</span><span style="color:#b48ead;">]
</span><span> </span><span style="color:#bf616a;">usr/lib/rsync/rrsync
</span></code></pre>
<p>It is clear that Arch ships <code>rrsync</code> as part of the <code>rsync</code> package,
although its location is a bit of a bummer, as <code>/usr/lib/rsync/rrsync</code> is
not a place where one would usually look for executable files and this
location is not usually in a user's <code>$PATH</code> variable, meaning that to run
it one must provide the full path. It is already correctly marked as
executable:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> ls</span><span style="color:#bf616a;"> -l</span><span> /usr/lib/rsync/rrsync
</span><span style="color:#bf616a;">-rwxr-xr-x</span><span> 1 root root 7467 Dec 30 2020 /usr/lib/rsync/rrsync
</span></code></pre>
<p>Yes, there is <code>x</code> at the end of the first column. Alternatively, <code>stat</code> can
be used as well:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> stat</span><span style="color:#bf616a;"> -c </span><span>'</span><span style="color:#a3be8c;">%A</span><span>' /usr/lib/rsync/rrsync
</span><span style="color:#bf616a;">-rwxr-xr-x
</span></code></pre>
<p>Its location is not a problem at all, however, as you won't be running it
manually. Trying to do so ends with an error:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> /usr/lib/rsync/rrsync
</span><span style="color:#bf616a;">/usr/lib/rsync/rrsync:</span><span> No subdirectory specified
</span><span style="color:#bf616a;">Use </span><span>'</span><span style="color:#a3be8c;">command="/usr/lib/rsync/rrsync [-ro|-wo] SUBDIR"</span><span>'
</span><span> </span><span style="color:#bf616a;">in</span><span> front of lines in /home/user/.ssh/authorized_keys
</span></code></pre>
<p>The error is actually pretty helpful as it hints to exactly what has to be
done to make it work.</p>
<h2 id="generate-a-public-key">Generate a public key</h2>
<p>On the receiving machine, generate a SSH key pair:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">ssh-keygen -f ~</span><span>/.ssh/rrsync_transfer</span><span style="color:#bf616a;"> -C </span><span>"</span><span style="color:#a3be8c;">Transfer files between servers using rrsync</span><span>"
</span></code></pre>
<p>When prompted for a passphrase, do not insert one. This is crucial for an
automated setup. Now get the public key file located at
<code>~/.ssh/rrsync_transfer.pub</code> to the sourcing machine. This might be a
little tricky, as there most likely isn't a direct connection between these
two servers/machines at this point, but there is usually an intermediate
local computer (the one you are working on right now) that can remotely
connect to both. Transferring the public key could look like this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">scp</span><span> user@receiving:</span><span style="color:#bf616a;">~</span><span>/.ssh/rrsync_transfer.pub .
</span><span style="color:#bf616a;">scp</span><span> rrsync_transfer.pub user@sourcing:</span><span style="color:#bf616a;">~</span><span>/
</span></code></pre>
<p>There are many other ways to do this; even a simple copy-paste from editor to
editor could be sufficient. In the end, the contents of
<code>rrsync_transfer.pub</code> should be present on the sourcing machine.</p>
<h2 id="authorized-keys">Authorized keys</h2>
<p>The next step is to add the <code>rrsync</code> reference from above to the
<code>authorized_keys</code> file on the sourcing machine:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#96b5b4;">echo </span><span style="color:#bf616a;">-n </span><span>'</span><span style="color:#a3be8c;">command="/usr/lib/rsync/rrsync -ro ~/",no-agent-forwarding,no-port-forwarding,no-pty,no-user-rc,no-X11-forwarding </span><span>' >> /home/user/.ssh/authorized_keys
</span></code></pre>
<p>If the file does not exist and the terminal complains about it, create it
first:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">touch</span><span> /home/user/.ssh/authorized_keys
</span></code></pre>
<p>The trailing whitespace is required there. Note that an absolute path is used
here just to denote the <code>user</code>. If already logged in as one, a relative path
can of course be used as well. Proceed by appending the public key just
after this command:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span>< rrsync_transfer.pub >> /home/user/.ssh/authorized_keys
</span></code></pre>
<p>For users unfamiliar with the above syntax, the redirection operator is
used to avoid the so-called <em>useless use of cat</em>. Anyway, in the end, the
<code>authorized_keys</code> file should contain an entry on a single line
that looks something like this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">command</span><span>="</span><span style="color:#a3be8c;">/usr/lib/rsync/rrsync -ro ~/</span><span>"</span><span style="color:#a3be8c;">,no-agent-forwarding,no-port-forwarding,no-pty,no-user-rc,no-X11-forwarding </span><span style="color:#bf616a;">ssh-rsa</span><span> AAA...Vc= Transfer files between servers using rrsync
</span></code></pre>
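<p>One more gotcha worth checking before testing the connection: with
<code>StrictModes</code> enabled (the default), sshd silently ignores an
<code>authorized_keys</code> file whose permissions are too open. A quick sketch to
lock them down:</p>

```shell
# sshd (with StrictModes, the default) silently refuses authorized_keys
# that is group- or world-accessible, so restrict directory and file.
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
```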
<p>It should now be possible to <code>rsync</code> files to the receiving machine from
the sourcing one like this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">rsync -e </span><span>"</span><span style="color:#a3be8c;">ssh -i </span><span>$</span><span style="color:#bf616a;">HOME</span><span style="color:#a3be8c;">/.ssh/rrsync_transfer</span><span>"</span><span style="color:#bf616a;"> -av</span><span> user@sourcing: transferred-files/
</span></code></pre>
<p>Note that this method only works for non-root environments. To get it to
work with root, for instance to do a periodic backup of a whole system (the
way I usually use it), there are a few more steps required.</p>
<h2 id="using-with-root">Using with root</h2>
<p>To be able to access the entire filesystem located at <code>/</code>, first move the above
<code>command="... ssh-rsa AAA...</code> entry from the user's <code>authorized_keys</code> file to the
one belonging to the root user. Please do not try to move the entire file
unless you are absolutely sure it contains only the single entry discussed
above; otherwise, depending on the ssh configuration, you might decrease
the security of your system.</p>
<p>Now modify that line you just moved from the
<code>/home/user/.ssh/authorized_keys</code> to the <code>/root/.ssh/authorized_keys</code> and
change path from relative <code>~/</code> to the absolute <code>/</code> like this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">command</span><span>="</span><span style="color:#a3be8c;">/usr/lib/rsync/rrsync -ro /</span><span>"</span><span style="color:#a3be8c;">,no-agent-forwarding,no-port-forwarding,no-pty,no-user-rc,no-X11-forwarding </span><span style="color:#bf616a;">ssh-rsa</span><span> AAA...Vc= Transfer files between servers using rrsync
</span></code></pre>
<p>The only difference is the missing <code>~</code> there, with <code>/</code> making the whole
filesystem reachable. The final requirement is to modify the
<code>/etc/ssh/sshd_config</code> file. Look for <code>PermitRootLogin</code>, uncomment it and
change its value to:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>PermitRootLogin forced-commands-only
</span></code></pre>
<p>Here's where <code>rrsync</code>, or restricted rsync, shines. Even though it is
accessing the root filesystem, it cannot be used to damage the system this way,
as it can only read the files. The command to back up the entire
filesystem could then look like this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> rsync</span><span style="color:#bf616a;"> -e </span><span>"</span><span style="color:#a3be8c;">ssh -i </span><span>$</span><span style="color:#bf616a;">HOME</span><span style="color:#a3be8c;">/.ssh/rrsync_transfer</span><span>"</span><span style="color:#bf616a;"> -aAXv --exclude</span><span>={"</span><span style="color:#a3be8c;">/dev/*</span><span>","</span><span style="color:#a3be8c;">/proc/*</span><span>","</span><span style="color:#a3be8c;">/sys/*</span><span>","</span><span style="color:#a3be8c;">/tmp/*</span><span>","</span><span style="color:#a3be8c;">/run/*</span><span>","</span><span style="color:#a3be8c;">/mnt/*</span><span>","</span><span style="color:#a3be8c;">/media/*</span><span>","</span><span style="color:#a3be8c;">/lost+found</span><span>"} root@sourcing: filesystem-backup/
</span></code></pre>
<p>Make sure to change <code>user@sourcing:</code> to <code>root@sourcing:</code>. The above command
could be set up as a cron job, too!</p>
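<p>One caveat for the cron job: cron runs commands under <code>/bin/sh</code>, where
the brace expansion in the <code>--exclude={...}</code> list above does not work, so
the excludes are better moved into a file. A sketch of a root crontab entry on
the receiving machine, with all paths as placeholders:</p>

```
# Nightly pull at 03:30; exclude patterns live in /home/user/rsync-excludes.txt
# (one pattern per line: /dev/*, /proc/*, /sys/*, /tmp/*, /run/*, ...)
30 3 * * * rsync -e "ssh -i /home/user/.ssh/rrsync_transfer" -aAX --exclude-from=/home/user/rsync-excludes.txt root@sourcing: /backups/filesystem-backup/
```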
<h2 id="links">Links</h2>
<ul>
<li><a href="https://www.samba.org/ftp/unpacked/rsync/support/rrsync">https://www.samba.org/ftp/unpacked/rsync/support/rrsync</a></li>
<li><a href="https://www.guyrutenberg.com/2014/01/14/restricting-ssh-access-to-rsync/">https://www.guyrutenberg.com/2014/01/14/restricting-ssh-access-to-rsync/</a></li>
<li><a href="http://biplane.com.au/blog/?p=591">http://biplane.com.au/blog/?p=591</a></li>
</ul>
Stripping EXIF metadata from photos2021-07-09T00:00:00+00:002021-07-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/stripping-exif-metadata-from-photos/<p>There is no shortage of privacy-related issues on the Internet. The one
I decided to tackle today is EXIF metadata embedded in the photos I publish
here. I do not publish photos too often currently, but occasionally I do.
Before today, I did not strip any EXIF metadata and this practice is
<a href="https://www.quora.com/Why-is-the-EXIF-metadata-so-dangerous-for-user-privacy">considered to be a potential privacy issue</a>
too.</p>
<h2 id="getting-the-right-tools">Getting the right tools</h2>
<p>The starting point for me was in
<a href="https://github.com/getzola/zola/issues/838#issuecomment-553720905">zola#838</a>
issue, mentioning the <code>exiftran</code> and <code>exiv2</code> tools. Let's pick them up:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> pkgfile exiftran
</span><span style="color:#bf616a;">community/fbida
</span><span>
</span><span style="color:#bf616a;">$</span><span> pkgfile exiv2
</span><span style="color:#bf616a;">extra/exiv2
</span><span>
</span><span style="color:#bf616a;">$</span><span> sudo pacman</span><span style="color:#bf616a;"> -S --needed</span><span> fbida exiv2
</span></code></pre>
<p>This should be sufficient, adapt for a different package manager if needed.</p>
<h2 id="implementing-into-a-publishing-pipeline">Implementing into a publishing pipeline</h2>
<p>What I have come up with is this bash snippet:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">files</span><span>=$</span><span style="color:#a3be8c;">(</span><span style="color:#bf616a;">git</span><span style="color:#a3be8c;"> diff</span><span style="color:#bf616a;"> --cached --name-only </span><span>| </span><span style="color:#bf616a;">egrep -i </span><span>"</span><span style="color:#a3be8c;">\.(jpe?g|png|gif)$</span><span>"</span><span style="color:#a3be8c;">)
</span><span>
</span><span style="color:#96b5b4;">echo </span><span>"$</span><span style="color:#bf616a;">files</span><span>" | </span><span style="color:#bf616a;">xargs -I </span><span>% exiftran</span><span style="color:#bf616a;"> -i -a </span><span>%
</span><span style="color:#96b5b4;">echo </span><span>"$</span><span style="color:#bf616a;">files</span><span>" | </span><span style="color:#bf616a;">xargs -I </span><span>% exiv2 rm %
</span></code></pre>
<p>I know, it uses <code>xargs</code> on file names. This is potentially dangerous, so consider
taking a look at
<a href="https://stackoverflow.com/a/51305211/1972509">possible safe usage of xargs</a>
on files. Still, the danger is greatly mitigated by the fact that
<code>xargs -I</code> is only applied to files that end with common image extensions
and, more importantly, only to such image files that <strong>were just added into
the git index</strong>. Enjoy!</p>
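<p>If any tracked image could have a space or another odd character in its name, a null-delimited variant is safer. The sketch below is my own, not part of the original snippet; <code>printf</code> stands in for <code>git diff --cached --name-only -z</code>, which emits NUL-terminated paths:</p>

```shell
# NUL-delimited filenames survive spaces: grep -z and xargs -0
# operate on NUL-terminated records instead of newline-separated lines.
printf '%s\0' 'img one.jpg' 'notes.md' 'pic.PNG' \
    | grep -zEi '\.(jpe?g|png|gif)$' \
    | xargs -0 -n1 echo
# prints:
# img one.jpg
# pic.PNG
```

<p>In the real pipeline, the same <code>xargs -0 -I %</code> calls to <code>exiftran</code> and <code>exiv2</code> would follow instead of <code>echo</code>.</p>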
Reverse proxy behind a reverse proxy2021-07-08T00:00:00+00:002021-07-08T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/reverse-proxy-behind-reverse-proxy/<p>Learning things the hard way by getting my hands dirty is also how I
learned about the concept of a reverse proxy behind another reverse
proxy. The setup I had was a docker-compose file that configured multiple
services, including Nginx as a reverse proxy. Nothing special here; it works
in many places and worked on my VPS of choice.</p>
<p>The problem of course started when I wanted to add another service to that
server to utilize its resources better. My first idea was to move the Nginx
service from that docker-compose file somewhere else. For simplicity,
let's consider it would be a bare-metal Nginx configured as a reverse
proxy. In theory it could work fine - just convert the Nginx configuration
file shipped with the docker-compose to the vhost file.</p>
<p>Sadly, after a bit of digging I learned that this would not be so
simple. The biggest problem I encountered was that separating the
Nginx service out of the docker-compose meant that it lost access to the
docker-compose network. Any upstream server defined in such an Nginx
configuration would not be accessible outside of said network, certainly
not from the bare-metal Nginx server. At least not without additional
configuration. I found this option to be quite error prone, especially
since the existing docker-compose file was working without any problem.
If it ain't broke, don't fix it, they say. I agree. What is the other option?</p>
<h2 id="reverse-proxyception">Reverse-proxyception</h2>
<p>A containerized Nginx service as a reverse proxy <em>behind</em> a bare-metal
Nginx as a reverse proxy? My mind was not quite ready to accept such a
configuration at the time the idea struck me. I thought such an
arrangement convoluted and needlessly complex, not to mention the
added resource overhead. I started looking around to see if someone else
was doing such a horrendous thing too.</p>
<p>In a world where the possibilities of a single individual expand
exponentially every day, it is all but inevitable that someone is doing
a thing someone else would consider crazy. But it turns out
<a href="https://stackoverflow.com/questions/14148821/nginx-web-server-using-2-level-of-proxies">something similar is discussed here</a>
and the answer, albeit not boasting very large traffic, still confirms that a
technique employing multiple tiers of reverse proxies on the same server
is nothing new.</p>
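<p>For the curious, the outer, bare-metal side of such a two-tier setup can look roughly like this. This is a sketch rather than my actual vhost; the port <code>8080</code> and the domain are made up, assuming the containerized Nginx publishes itself on a local port:</p>

```nginx
# bare-metal Nginx vhost (tier 1), forwarding everything to the
# containerized Nginx (tier 2) published on localhost:8080
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        # preserve the original request information for the inner proxy
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

<p>The inner proxy then keeps its docker-compose network access and routes to the upstream services exactly as before.</p>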
<h2 id="it-works">It works</h2>
<p>Infused with newly found courage, I went on putting the idea to work. I got
surprised I had it running flawlessly in under ten minutes, give or take.
Everything is impossible, until finished. Hopefully I will learn soon
enough that I made a good design decision.</p>
Done spell checking on my blog2021-07-07T00:00:00+00:002021-07-07T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/done-spellchecking-on-my-blog/<p>I put considerable effort today into fixing a lot of small errors on the
blog that accumulated over time, either because the blog was
converted from Sapper to Zola, which naturally processes some slight
details of Markdown differently, or due to me mistyping some words.
Let's break down the most important changes.</p>
<h2 id="dropped-the-footnotes-entirely">Dropped the footnotes entirely</h2>
<p>I was experimenting with the
<a href="https://www.markdownguide.org/extended-syntax/#footnotes">Markdown footnotes</a>.
While they were mostly supported without a problem previously, their
<a href="https://github.com/getzola/zola/issues?q=is%3Aissue+footnote">support in Zola</a>
is not perfect. It is not a problem, however. I was using them mostly
for hyperlinks anyway, and those are completely covered in anything web
related, Markdown included. I have found no other blog I like that uses
footnotes extensively, and their usage greatly reduces portability.</p>
<p>There was the article about
<a href="/blog/make-ssh-prompt-password-keepassxc/">prompting a SSH unlock with KeePassXC</a>
that used the footnotes for actual blocks of text. There, the footnotes
would be perfectly valid if they were part of some ebook. Since Zola did
not render them, I just moved them into parentheses.</p>
<h2 id="fixed-a-lot-of-tags">Fixed a lot of tags</h2>
<p>Hurling out a post a day while still doing other tasks had taken its toll
on the quality and the quantity of the tags used on the posts. This revamp
focused on missing or wrong tags. They are now also available at
<a href="/tags">Tags</a>. If only I could find a way to reverse their order,
and possibly stylize them. At least they are accessible.</p>
<h2 id="fixed-code-block-overflowing">Fixed code block overflowing</h2>
<p>The code was overflowing to the right in code blocks. If the theme
highlighted the text inside with a bright color, it was almost invisible on
the similarly colored background. The fix was in the SCSS file:</p>
<pre data-lang="css" style="background-color:#2b303b;color:#c0c5ce;" class="language-css "><code class="language-css" data-lang="css"><span style="color:#bf616a;">pre code </span><span>{
</span><span> white-space: pre-wrap;
</span><span>}
</span></code></pre>
<p>The above snippet is just the bare minimum, but it was all it took. I am
still not sure if it is compatible across most browsers or if it even is
the right way to do it, but for now it serves the purpose: the code now
fits the bounding code block and it is easy to copy and almost as easy to
read, albeit sometimes it is not entirely obvious what is a newline and
what is just a wrap.</p>
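<p>For reference, the other common approach, which I did not take, keeps the lines unwrapped and lets the block scroll horizontally instead; a minimal sketch:</p>

```css
/* alternative: horizontal scrolling instead of wrapping */
pre {
  overflow-x: auto;
}
pre code {
  white-space: pre; /* keep the original line breaks */
}
```

<p>This preserves the exact line structure of the code at the cost of making long lines harder to read on narrow screens.</p>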
<h2 id="spell-check-every-post">Spell check every post</h2>
<p>This is the bulk of the work that went into a massive commit with 123 files
changed, 1043 insertions(+) and 807 deletions(-), which looks like this:</p>
<p><img src="https://peterbabic.dev/blog/done-spellchecking-on-my-blog/fix-typos-git-stat.png" alt="The git diff --stat output of the typos commit." /></p>
<p>Gitea even refused to render it in its entirety due to the sheer number of
files changed. I was not focused on spell checking at first, but since
I was going file by file, changing a tag here, converting a footnote there,
I decided to combine the tasks.</p>
<h3 id="automated-work-first">Automated work first</h3>
<p>First, I used the <a href="https://github.com/crate-ci/typos">typos</a> tool
to fix the bulk of the typos automatically:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">typos -w
</span></code></pre>
<p>This simplified the manual work I had to do quite a lot, and it is quite
precise. The problem I found with it is that it tries to fix URLs as
well. If a perfectly valid URL endpoint contains a typo, the tool fixes it
and the URL then probably becomes a 404. This is something to keep in
mind before use. I solved it by setting the right entries under
<code>[default.extend-words]</code> in <code>_typos.toml</code>.</p>
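<p>For illustration, such an entry maps a "typo" to its accepted spelling, so mapping a word to itself whitelists it. The word below is a made-up example, not from my actual config:</p>

```toml
# _typos.toml
[default.extend-words]
# keep "wierd" as-is, e.g. because it appears in a URL slug
wierd = "wierd"
```

<p>With this in place, the tool leaves every occurrence of the whitelisted word untouched, URLs included.</p>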
<h3 id="manual-work-second">Manual work second</h3>
<p>Even though typos is quite a powerful tool, it fixed just a fraction of the
typos I had made during my year of blogging. I used
<a href="http://vimdoc.sourceforge.net/htmldoc/spell.html">vim spell</a>, but the
default English dictionary did not have a lot of words, especially the
names, so I had to add them to the local dictionary. I wanted to make the
blog more portable, so I opted in for a project-wide dictionary using the
<a href="https://github.com/dbmrq/vim-dialect"><code>vim-dialect</code></a> plugin. It had not
been updated for over 4 years at the time of writing, but it served the
purpose wonderfully anyway.</p>
<p>The whole manual spell-checking procedure took around 7 hours of typing, but
now all the posts contain far fewer typos, wrong tags, incorrectly rendered
footnotes and overflowing code blocks, so it was well worth the effort. Stay
tuned for more.</p>
Add archive into Zola2021-07-06T00:00:00+00:002021-07-06T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/add-archive-into-zola/<p>When learning new things, it always goes slowly at the beginning, speeding
up as the learning curve progresses. In the end, every additional tool in
the developer's toolset helps in the long run. Today I dug deeper into
another template engine, called <a href="https://tera.netlify.app/">Tera</a>.</p>
<p>Tera is a template engine for Rust based on
<a href="https://jinja.palletsprojects.com/">jinja</a>, which was made for Python
instead. I was writing about jinja a few months ago, mostly under the tag
<a href="/tags/ansible">ansible</a>, as ansible uses jinja as its template engine.
Jinja and Tera are pretty similar, although Tera is not marketed as being
fully feature-complete with its predecessor, jinja. Anyway, for my goal to
get into Rust eventually, learning Tera seems like a worthwhile thing to
do. For now, I still have not found any showstopper with Zola as a go-to SSG
tool for my blog, and it looks like some people are starting to notice my
work, which are all the more compelling reasons to continue.</p>
<h2 id="what-is-an-archive">What is an archive</h2>
<p>An archive is a list of all posts, usually grouped by some time period, for
instance a year or a month. Such a list is pretty important in my opinion, as
it allows me to easily gauge the work I did, not to mention simpler visual
searching. Unfortunately, I did not know how to search for a guide
implementing such a feature, obvious as it might seem.</p>
<p>What I usually do is browse repository issues for keywords. Since I did
not know I was looking for the <code>archive</code> keyword, the best thing I could
find was in
<a href="https://github.com/getzola/zola/issues/435#issuecomment-869210295">#435</a>.
I did not find a solution there. After a few days of silence I actually
stumbled upon the
<a href="https://www.getzola.org/documentation/templates/archive/">archive Zola docs</a>
that got me moving again. The most important part of the code is this:</p>
<pre data-lang="jinja" style="background-color:#2b303b;color:#c0c5ce;" class="language-jinja "><code class="language-jinja" data-lang="jinja"><span>{% </span><span style="color:#b48ead;">for </span><span style="color:#bf616a;">year</span><span>, </span><span style="color:#bf616a;">posts </span><span style="color:#b48ead;">in </span><span style="color:#bf616a;">section</span><span>.</span><span style="color:#bf616a;">pages </span><span>| </span><span style="color:#bf616a;">group_by</span><span>(</span><span style="color:#bf616a;">attribute</span><span>="</span><span style="color:#a3be8c;">year</span><span>") %}
</span><span>...
</span><span>{% </span><span style="color:#b48ead;">endfor </span><span>%}
</span></code></pre>
<p>It wasn't a ready-to-use code snippet, as there were some bits missing, but
at least I knew what to look for.</p>
<h2 id="getting-the-right-pages">Getting the right pages</h2>
<p>Since my main <code>_index.md</code> was already serving paginated results, I had to
find another way to get all the posts to create a list of them. I found
the next clue in
<a href="https://github.com/getzola/zola/issues/628#issuecomment-468286426">#628</a>
in the form of <code>get_section()</code>, specifically:</p>
<pre data-lang="jinja" style="background-color:#2b303b;color:#c0c5ce;" class="language-jinja "><code class="language-jinja" data-lang="jinja"><span>{% </span><span style="color:#b48ead;">set </span><span style="color:#bf616a;">s </span><span>= </span><span style="color:#bf616a;">get_section</span><span>(</span><span style="color:#bf616a;">path</span><span>="</span><span style="color:#a3be8c;">posts/_index.md</span><span>") %}
</span></code></pre>
<p>Finally I had a way to get all the pages and do something with them.</p>
<h2 id="sorting-the-results">Sorting the results</h2>
<p>The pages I got this way were, however, out of order. Setting
<code>sort_by = "date"</code> in <code>archive/_index.md</code> had no effect, which I did
not expect. I was able to sort them using Tera filters, specifically via
<code>sort(attribute="date")</code>. The full working <code>templates/archive.html</code>
template looks like this:</p>
<pre data-lang="jinja" style="background-color:#2b303b;color:#c0c5ce;" class="language-jinja "><code class="language-jinja" data-lang="jinja"><span>{% </span><span style="color:#b48ead;">extends </span><span>"</span><span style="color:#a3be8c;">index.html</span><span>" %}
</span><span>
</span><span>{% </span><span style="color:#b48ead;">block </span><span style="color:#bf616a;">content </span><span>%}
</span><span> {% </span><span style="color:#b48ead;">set </span><span style="color:#bf616a;">section </span><span>= </span><span style="color:#bf616a;">get_section</span><span>(</span><span style="color:#bf616a;">path</span><span>="</span><span style="color:#a3be8c;">blog/_index.md</span><span>") %}
</span><span> {% </span><span style="color:#b48ead;">for </span><span style="color:#bf616a;">year</span><span>, </span><span style="color:#bf616a;">posts </span><span style="color:#b48ead;">in </span><span style="color:#bf616a;">section</span><span>.</span><span style="color:#bf616a;">pages
</span><span> | </span><span style="color:#bf616a;">sort</span><span>(</span><span style="color:#bf616a;">attribute</span><span>="</span><span style="color:#a3be8c;">date</span><span>")
</span><span> | </span><span style="color:#bf616a;">reverse
</span><span> | </span><span style="color:#bf616a;">group_by</span><span>(</span><span style="color:#bf616a;">attribute</span><span>="</span><span style="color:#a3be8c;">year</span><span>") %}
</span><span> <div class="archive">
</span><span> <h2>{{ </span><span style="color:#bf616a;">year </span><span>}}</h2>
</span><span> <ul>
</span><span> {% </span><span style="color:#b48ead;">for </span><span style="color:#bf616a;">post </span><span style="color:#b48ead;">in </span><span style="color:#bf616a;">posts </span><span>%}
</span><span> <li>
</span><span> <time>{{ </span><span style="color:#bf616a;">post</span><span>.</span><span style="color:#bf616a;">date </span><span>| </span><span style="color:#bf616a;">date</span><span>(</span><span style="color:#bf616a;">format</span><span>="</span><span style="color:#a3be8c;">%d-%h</span><span>") }}</time>
</span><span> <a href="{{ </span><span style="color:#bf616a;">post</span><span>.</span><span style="color:#bf616a;">permalink </span><span>}}">{{ </span><span style="color:#bf616a;">post</span><span>.</span><span style="color:#bf616a;">title </span><span>}}</a>
</span><span> </li>
</span><span> {% </span><span style="color:#b48ead;">endfor </span><span>%}
</span><span> </ul>
</span><span> </div>
</span><span> {% </span><span style="color:#b48ead;">endfor </span><span>%}
</span><span>{% </span><span style="color:#b48ead;">endblock </span><span style="color:#bf616a;">content </span><span>%}
</span></code></pre>
<p>To actually see the results, the aforementioned <code>archive/_index.md</code> works
by referencing this template:</p>
<pre data-lang="toml" style="background-color:#2b303b;color:#c0c5ce;" class="language-toml "><code class="language-toml" data-lang="toml"><span style="background-color:#bf616a;color:#2b303b;">+++</span><span>
</span><span style="color:#bf616a;">template </span><span>= "</span><span style="color:#a3be8c;">archive.html</span><span>"
</span><span style="background-color:#bf616a;color:#2b303b;">+++</span><span>
</span></code></pre>
<p>No other code is strictly required, although some
<a href="https://www.getzola.org/documentation/content/sass/">styling</a> definitely
helps. It is possible to check it under <a href="/archive">Archive</a>.</p>
OnlyOffice proved to be useful2021-07-02T00:00:00+00:002021-07-02T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/onlyoffice-proved-to-be-useful/<p>Since I set up
<a href="/blog/install-nextcloud-onlyoffice-docker-compose/">OnlyOffice atop NextCloud a few weeks ago</a>,
I have been testing its features, functionality, stability and overall
performance, and today I can say I am pretty pleased with what I have found.</p>
<p>The collaborative functions of the spreadsheets work pretty well. It serves
as a drop-in replacement for team members who are used to a desktop
spreadsheet application like Excel, but with collaboration. Sending a
file as an attachment whenever anyone makes changes is tedious, and there is
no version history and generally no easy way to resolve branching conflicts.</p>
<p>I have not tested anything related to editing documents yet, because for
text there is markdown on my Gitea server, but I would guess it works
at least as well as the spreadsheets, given that document editing is
the less complex of the two.</p>
<p>What motivated me to write this post today was the use of the slides
application of OnlyOffice, especially its ability to work with <code>.pptx</code>
files made with PowerPoint. Opening a <code>.pptx</code> file renders it pretty
close to, if not exactly like, the PowerPoint application. Saving the
presentation in <code>.pptx</code> causes no problems either - the other side using
PowerPoint repeatedly reported no issues.</p>
<p>What strikes me the most is how unknown this product is among the general
population. Everyone knows the main desktop contenders I have already
mentioned, then some people know the G-Suite products, some others know the
applications shipped with MacOS. Even fewer people know
OpenOffice/LibreOffice (these tend to have more problems with <code>.pptx</code>,
but their performance is still pretty impressive). Almost no one knows WPS
Office, and somewhere at the tail of this list is OnlyOffice. It is a
shame, because it is very well made. Hopefully it gets more traction and
coverage.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://github.com/ONLYOFFICE/docker-onlyoffice-nextcloud">https://github.com/ONLYOFFICE/docker-onlyoffice-nextcloud</a></li>
</ul>
Folderize your post for SSG2021-06-29T00:00:00+00:002021-06-29T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/folderize-your-post-for-ssg/<p>Since I converted my blog to Zola, I have started using assets a little bit
more in my posts. I am still not sure if it is the right thing, but for
now it helps me convey information in addition to text.</p>
<p>To embed a photo into the post, there are at least two main options
available in Zola:
<a href="https://www.getzola.org/documentation/content/overview/#static-assets">static assets</a>
and
<a href="https://www.getzola.org/documentation/content/overview/#asset-colocation">asset colocation</a>.
Using static assets is more suitable for icons and logos, generally for
assets that are shared among multiple posts, so we explore the second
option, the asset colocation.</p>
<p>Colocation roughly means that the page and the assets it requires are
located in the same directory tree. Usually, the post is a <code>.md</code> file
located somewhere near the top of the <code>content/</code> directory. Does it mean we
should place photos there? Wouldn't it be a mess, so many markdown files
and images together? Well, this is the reason to make dedicated
directories. The exact same page can exist in either of two locations (but
not both). Either as a standalone file:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>content/is-emacs-better-than-vim.md
</span></code></pre>
<p>Or in the dedicated directory, as an <code>index.md</code>:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>content/is-emacs-better-than-vim/index.md
</span></code></pre>
<p>Colocating an asset simply means that the image is placed in the post's
dedicated folder alongside it and referenced from the post:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>content/is-emacs-better-than-vim/index.md
</span><span>content/is-emacs-better-than-vim/raging-flamewar-photo.jpg
</span></code></pre>
<p>There is more going on under the hood, but the Zola docs have it covered.
What I want to focus on is the process of getting from the first option to
the second one. This happens to me when I want to add a picture to an
existing older post while updating it, or when I start writing a post and
then realize I need to add an asset to it. Basically, it happens quite
often.</p>
<p>What needs to be done then are these three steps in succession:</p>
<ol>
<li>Get the filename of the post without its <code>.md</code> extension</li>
<li>Create a folder with the name of the file (if
<a href="https://www.getzola.org/documentation/content/page/#output-paths">output paths</a>
are used for slugs)</li>
<li>Move the post inside it as <code>index.md</code></li>
</ol>
<p>I got annoyed the first time I had to do this by hand, so I wrote this
script to do the "folderize" steps for me:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#65737e;">#!/bin/bash
</span><span>
</span><span style="color:#bf616a;">fullname</span><span>=$</span><span style="color:#a3be8c;">(</span><span style="color:#bf616a;">basename</span><span> -- "$</span><span style="color:#bf616a;">1</span><span>"</span><span style="color:#a3be8c;">)
</span><span style="color:#bf616a;">filename</span><span>="$</span><span style="color:#a3be8c;">{</span><span style="color:#bf616a;">fullname</span><span>%</span><span style="color:#a3be8c;">.</span><span>*</span><span style="color:#a3be8c;">}</span><span>"
</span><span>
</span><span style="color:#bf616a;">mkdir </span><span>"$</span><span style="color:#bf616a;">filename</span><span>"
</span><span style="color:#bf616a;">mv </span><span>"$</span><span style="color:#bf616a;">fullname</span><span>" "$</span><span style="color:#bf616a;">filename</span><span style="color:#a3be8c;">/index.md</span><span>"
</span></code></pre>
<p>Put this script into the <code>content/</code> where all your <code>.md</code> files are located
as <code>folderize.sh</code> and make it executable via <code>chmod +x folderize.sh</code>. It
can then be run as follows:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">./folderize.sh</span><span> is-emacs-better-than-vim.md
</span></code></pre>
<p>There is one last thing that has to be done - making sure this script won't
get copied into the <code>public/</code> folder when running <code>zola build</code>. For this
purpose, the <code>ignored_content</code>
<a href="https://www.getzola.org/documentation/getting-started/configuration/">configuration option</a>
is available for <code>config.toml</code>:</p>
<pre data-lang="toml" style="background-color:#2b303b;color:#c0c5ce;" class="language-toml "><code class="language-toml" data-lang="toml"><span style="color:#bf616a;">ignored_content </span><span>= ["</span><span style="color:#a3be8c;">*.sh</span><span>"]
</span></code></pre>
<p>That's it!</p>
ModbusRTU with autoflow on TouchBerry 10 pt.42021-06-28T00:00:00+00:002021-06-28T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/modbusrtu-autoflow-touchberry-10-pt4/<p><strong>This post is a part of a series about the TouchBerry ModbusRTU autoflow
issue and how I resolved it. Other posts of the series can be found
under the <a href="/tags/touchberry">touchberry</a> tag. Note that newer posts might
contain more recent information.</strong></p>
<p>Throughout the series I have been investigating the mysterious UPS shield
that can be ordered in the highest configuration of the TouchBerry 10 from
IndustrialShields (links in the previous posts).</p>
<p>The documentation for this panel sadly does not contain anything about the
RS-485 interface, nor the ModbusRTU protocol that uses it. The first
documentation arrived in a support email, specifically mentioning the pinout
of the 40-pin Raspberry Pi 4 header powering the panel:</p>
<table><thead><tr><th>Fn</th><th>Descr.</th><th>#</th><th>#</th><th>Descr.</th><th>Fn</th></tr></thead><tbody>
<tr><td></td><td>NC</td><td><strong>7</strong></td><td><strong>8</strong></td><td><strong>GPIO14</strong></td><td><strong>TXD</strong></td></tr>
<tr><td></td><td>NC</td><td><strong>9</strong></td><td><strong>10</strong></td><td><strong>GPIO15</strong></td><td><strong>RXD</strong></td></tr>
<tr><td><strong>RE</strong></td><td><strong>GPIO17</strong></td><td><strong>11</strong></td><td><strong>12</strong></td><td>NC</td><td></td></tr>
<tr><td><strong>DE</strong></td><td><strong>GPIO27</strong></td><td><strong>13</strong></td><td><strong>14</strong></td><td>GND</td><td></td></tr>
</tbody></table>
<p>What I have repeatedly found is that this is in fact not the case: instead
of <strong>GPIO17</strong> driving the <strong>RE</strong> pin (pin number 2) of the obscure UTRS485G
part and <strong>GPIO27</strong> driving the <strong>DE</strong> pin (pin number 3), <strong>GPIO17</strong>
does nothing (as far as I can tell) and <strong>GPIO27</strong> drives both <strong>RE</strong> and
<strong>DE</strong>.</p>
<p>When trying to send a command from the TouchBerry to a ModbusRTU device,
for instance switching a coil using <code>mbpoll</code>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">mbpoll -a</span><span> 2</span><span style="color:#bf616a;"> -b</span><span> 9600</span><span style="color:#bf616a;"> -t</span><span> 0</span><span style="color:#bf616a;"> -P</span><span> none /dev/ttyS0 1
</span></code></pre>
<p>The following error can be seen, even though the device does set the coil:</p>
<p><code>Write discrete output (coil) failed: Connection timed out</code></p>
<p>The reason is obvious - the response from the device did not reach the Pi.
Here's a further proof of what's happening with the UTRS485G, having the
<strong>GPIO27</strong> pulled HIGH all the time:</p>
<p><img src="https://peterbabic.dev/blog/modbusrtu-autoflow-touchberry-10-pt4/scope-rx-nok.png" alt="The screenshot from the oscilloscope displaying the correct TX, A, B signals and A-B differential signal while missing the RX signal when hooked on UTRS485G." /></p>
<p>The RX line shows no transmission whatsoever, because the UTRS485G expects
both pins 2 and 3 (RE and DE) to be pulled LOW after the transmission to
reverse the direction. That is of course achievable with the script from
part 1 or similar, but it is a big hassle to deal with. I could later
confirm that pins 2 and 3 on the IC are in fact correctly
interconnected on the PCB when doing the part replacement, as can be seen
in the combined picture below:</p>
<p><img src="https://peterbabic.dev/blog/modbusrtu-autoflow-touchberry-10-pt4/485-ic-replaced.png" alt="A series of images, from the left to right: the original UTRS485G IC, IC removed with the detail to the pins 2 and 3 confirming they are connected on the PCB, the new MAX13487E IC soldered in place" /></p>
<p>The replacement was a great success. The MAX13487E has automatic
direction control, or autoflow for short - the feature I was hoping I could
achieve either in software (via the RTS0 pin, but since it is on <strong>GPIO17</strong>
and not <strong>GPIO27</strong>, that is impossible, not to mention it has reverse
polarity anyway) or by some hardware change. Here's how the scope rendered
the test ModbusRTU communication of switching the coil after the IC was
replaced:</p>
<p><img src="https://peterbabic.dev/blog/modbusrtu-autoflow-touchberry-10-pt4/scope-rx-ok.png" alt="The screenshot from the oscilloscope displaying the correct TX, RX, A, B signals and A-B differential signal when hooked on MAX13487E." /></p>
<p>Now the RX line carries the response from the device. Both scope screenshots
omit pins 2 and 3, but with the MAX13487E they are constantly pulled HIGH.
To achieve that from startup, I could put a pull-up resistor on
<strong>GPIO27</strong>, or I could probably also enable the internal pull-up in the Pi
on that pin (I did not try that yet). I have however found another, maybe a
little more portable way
<a href="https://www.raspberrypi.org/forums/viewtopic.php?p=1117946&sid=9c376dc61518dd96e905a99e652f3c21#p1117946">here</a>.
In short, edit <code>/boot/config.txt</code> and add at the bottom:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>dtoverlay=gpio-poweroff,gpiopin=27,active_low
</span></code></pre>
<p>This pulls <strong>GPIO27</strong> HIGH as soon as the TouchBerry boots, making
ModbusRTU calls in conjunction with the MAX13487E a breeze, meaning the Pi
also gets the response from the device. With the above setting, there is no
need to manually change any pin state or function whatsoever. Since
<code>/boot/config.txt</code> is a standard entry point for edits for many Raspberry Pi
users, it is probably the best place to store this functionality.</p>
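<p>As far as I can tell, newer Raspberry Pi firmware also supports a dedicated <code>gpio</code> directive in <code>/boot/config.txt</code> that could achieve the same without repurposing the poweroff overlay. I have not verified this on the TouchBerry, so treat it as a sketch:</p>

```
# drive GPIO27 as an output, set high, from boot
gpio=27=op,dh
```

<p>Either way, the pin ends up HIGH before any ModbusRTU traffic starts.</p>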
<h2 id="rs-485-termination">RS-485 termination</h2>
<p>Note that there is no 120R termination resistor on the side of the UPS
shield, so when it is hooked to a device that also does not have one, and
there is none present on the cable between the A and B lines either, the
response from <code>mbpoll</code> can look like this:</p>
<p><code>Write discrete output (coil) failed: Response not from requested slave</code></p>
<p>Inserting the aforementioned 120R between A and B solves the problem,
and the response now reports the following every time:</p>
<p><code>Written 1 references.</code></p>
<p>Generally, the person making the RS-485 connection is responsible for
providing the proper bias and termination resistors, so this is in fact not
a problem with the UPS shield on the TouchBerry 10 per se; it is just a
thing to keep in mind.</p>
Setting up an URL prefix in Zola2021-06-27T00:00:00+00:002021-06-27T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/setting-url-prefix-in-zola/<p>By adopting <a href="https://www.getzola.org/">zola</a> as my go-to Static Site
Generator (SSG) tool, I was
<a href="/blog/converted-my-blog-to-zola/">successfully able to leave Sapper behind</a>.
There was, however, quite a serious issue with the new blog setup that went
unnoticed for a few days.</p>
<p>Precisely, the problem was with the URL links. It is very
<a href="/blog/insights-google-search-console/#trailing-slash-inconsistency-with-spa">important to keep URLs the same</a>,
or at least to arrange a proper redirect. Other URL insights I made are
also available in
<a href="/blog/using-uuid-in-atom-feed/#url-vs-urn-which-one-to-choose">this post</a>.
The problem was that the new links looked like this:</p>
<p><code>https://peterbabic.dev/upgrading-wiringpio-raspberry-pi-4/</code></p>
<p>But the original links looked like this:</p>
<p><code>/blog/upgrading-wiringpio-raspberry-pi-4/</code></p>
<p>See? The URL was missing the <code>blog/</code> segment, which I decided to call a
<em>prefix</em> here, for the lack of a better word at hand. Word at hand? Whatever.</p>
<h2 id="the-symptom">The symptom</h2>
<p>I actually found out about the problem by accident, trying to paste some
links on social media, but I was getting a 404 on the links I got from
the browser's address bar. I knew they should be fine, as I must have
visited them before, since the browser history had an entry.</p>
<p>At first I thought that maybe my server was down, but everything else there
was up and running. The blog homepage was also running, and clicking the
links to individual posts was working, yet the address bar links were dead.
And then it clicked.</p>
<h2 id="adding-an-url-prefix">Adding a URL prefix</h2>
<p>Realizing that this was quite serious, I stopped what I was doing and
started figuring out how to fix this in Zola. I needed to add the <code>blog/</code>
segment there, but the right configuration setting seemed to elude me.</p>
<p>Of course there was no such configuration; I had to actually move the files
from the <code>content/</code> directory to <code>content/blog/</code> and then arrange for
the rest. After struggling for a bit, the solution came:</p>
<ul>
<li>Move the original <code>_index.md</code> now residing in <code>content/blog/_index.md</code>
one level up, back into now devoid <code>content/</code>:</li>
</ul>
<pre data-lang="toml" style="background-color:#2b303b;color:#c0c5ce;" class="language-toml "><code class="language-toml" data-lang="toml"><span style="background-color:#bf616a;color:#2b303b;">+++</span><span>
</span><span style="color:#bf616a;">sort_by </span><span>= "</span><span style="color:#a3be8c;">date</span><span>"
</span><span style="color:#bf616a;">paginate_by </span><span>= </span><span style="color:#d08770;">7
</span><span style="background-color:#bf616a;color:#2b303b;">+++</span><span>
</span></code></pre>
<ul>
<li>Create another <code>_index.md</code> in its original place at
<code>content/blog/_index.md</code>:</li>
</ul>
<pre data-lang="toml" style="background-color:#2b303b;color:#c0c5ce;" class="language-toml "><code class="language-toml" data-lang="toml"><span style="background-color:#bf616a;color:#2b303b;">+++</span><span>
</span><span style="color:#bf616a;">transparent </span><span>= </span><span style="color:#d08770;">true
</span><span style="color:#bf616a;">redirect_to </span><span>= "</span><span style="color:#a3be8c;">/</span><span>"
</span><span style="background-color:#bf616a;color:#2b303b;">+++</span><span>
</span></code></pre>
<p>Simple, right? Now all the links are as they were before the conversion
from Sapper to Zola. The <code>redirect_to</code> option is not entirely necessary, but
nice to have. The most important bit is <code>transparent = true</code>. It
basically shifts the responsibility to the <code>_index.md</code> one level up.</p>
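<p>To make the rearrangement clearer, here is a sketch of the resulting
layout (the post file name is illustrative):</p>

```
content/
├── _index.md        # sort_by = "date", paginate_by = 7
└── blog/
    ├── _index.md    # transparent = true, redirect_to = "/"
    └── my-post.md   # now served at /blog/my-post/
```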
<p>There are links below that discuss what the <code>transparent</code> option does, as I
am still not entirely certain myself and for now find the chosen nomenclature
(transparent, huh?) rather confusing, so go read those to gain even better
insight. Happy writing.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://www.getzola.org/documentation/content/section/#front-matter">https://www.getzola.org/documentation/content/section/#front-matter</a></li>
<li><a href="https://www.xypnox.com/blag/posts/migrating-to-zola/">https://www.xypnox.com/blag/posts/migrating-to-zola/</a></li>
<li><a href="https://estada.ch/2021/3/28/blog-bugs-not-migrating-to-zola-for-now/">https://estada.ch/2021/3/28/blog-bugs-not-migrating-to-zola-for-now/</a></li>
<li><a href="https://zola.discourse.group/t/how-to-paginate-a-subdirectory/749">https://zola.discourse.group/t/how-to-paginate-a-subdirectory/749</a></li>
<li><a href="https://github.com/getzola/zola/issues/408">https://github.com/getzola/zola/issues/408</a></li>
<li><a href="https://github.com/getzola/zola/issues/1430">https://github.com/getzola/zola/issues/1430</a></li>
</ul>
RIGOL screenshots from terminal on Arch2021-06-26T00:00:00+00:002021-06-26T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/rigol-screenshots-from-terminal-arch/<p>I used to store screenshots from the RIGOL DS1054Z oscilloscope on a
USB stick, which I then took out of the scope and inserted (on the third
attempt, obviously) into the laptop, then copied the screenshot PNG files
from the stick to the target location. But it can be done much faster!</p>
<h2 id="what-is-lxi">What is LXI?</h2>
<p>From the <a href="https://github.com/lxi-tools/liblxi"><code>liblxi</code></a> repository:</p>
<blockquote>
<p>liblxi is an open source software library which offers a simple API for
communicating with LXI compatible instruments. The API allows
applications to discover instruments on your network, send SCPI commands,
and receive responses.</p>
<p>Currently the library supports VXI-11/TCP and RAW/TCP connections. Future
work include adding support for the newer and more efficient HiSlip
protocol which is used by next generation LXI instruments.</p>
</blockquote>
<p>In short, LXI is an open standard that allows TCP/IP communication with the
scope, and <code>liblxi</code> is the library that can be built into software
that interacts with the scope. One such useful piece of software is its
<a href="https://github.com/lxi-tools/lxi-tools">lxi-tools</a> pack.</p>
<h2 id="lxi-tools-gui-on-arch">LXI tools GUI on Arch</h2>
<p>Yikes, it has a GUI! Nice! Not so fast. Although the
<a href="https://github.com/lxi-tools/lxi-tools/blob/master/images/lxi-gui-beta.png">GUI looks nice</a>,
it is currently very
<a href="https://aur.archlinux.org/packages/lxi-tools-git/#comment-750463">hard to get running on Arch</a>,
with details available for example in
<a href="https://github.com/lxi-tools/lxi-tools/issues/21">#21</a>. Worry not, it
can still be used effectively.</p>
<h2 id="lxi-tools-cli-to-the-rescue">LXI tools CLI to the rescue</h2>
<p>Although the GUI is not easy to run, the CLI tool on the other hand runs
without problems:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">yay -S</span><span> lxi-tools-git
</span></code></pre>
<p>When installed, connect the
<a href="https://github.com/lxi-tools/lxi-tools#4-tested-instruments">LXI compatible instrument</a>
to your router over the LAN with an Ethernet cable and discover the device:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">lxi</span><span> discover
</span></code></pre>
<p>The output can look similar to this:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Broadcasting on interface enp0s31f6
</span><span> Found "RIGOL TECHNOLOGIES,DS1104Z,XXX,00.0X.0X.SPX" on address 192.168.1.118
</span></code></pre>
<p>Note the IP address of the device in question. The actual log is a little
bit longer and can be much longer still with more LXI compatible devices on
the same network, so maybe some grepping could come in handy.</p>
<h2 id="taking-screenshots-with-lxi">Taking screenshots with lxi</h2>
<p>Now, with the IP address of the device (obtained via DHCP) known, taking
screenshots is a piece of cake:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">lxi</span><span> screenshot</span><span style="color:#bf616a;"> -a</span><span> 192.168.1.118
</span></code></pre>
<p>They are stored in your <code>$HOME</code> folder, or <code>~/</code> in short. The actual path
on my device looks like this:</p>
<p><code>/home/peterbabic/screenshot_192.168.1.118_2021-06-26_18:13:19.png</code></p>
<p>Here's an example of a screenshot I made with this technique:</p>
<p><img src="https://peterbabic.dev/blog/rigol-screenshots-from-terminal-arch/rigol-screenshot.png" alt="A screenshot taken from the RIGOL oscilloscope via LXI interface. It displays a digital communication on four channels." /></p>
<p>Happy probing!</p>
I finished the 100daystooffload challenge!2021-06-25T00:00:00+00:002021-06-25T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/finished-100daystooffload-challenge/<p>This post marks the final piece of the
<a href="/tags/100daystooffload">#100daystooffload</a> challenge I have been
participating in for the past three and a half months. Let's check again what
the challenge guidelines are:</p>
<ul>
<li>This needs to be done on a personal blog, not a corporate blog. If you
don’t have a personal blog, you can sign up for a free one at Write.as</li>
<li>There is no specific start date. Your 100 posts can start or end whenever
you want them to.</li>
<li>Publish 100 new posts in the space of a year.</li>
<li>There are no limits to what you can post about – write about whatever
interests you.</li>
<li>Once you have published an article, don’t forget to post a link on your
social media with the hashtag #100DaysToOffload.</li>
<li>Get your friends involved!</li>
</ul>
<p>The challenge gives you one full year to complete it. Some folks do
complete it in 100 days, an act that some consider the ultimate
achievement of this challenge, although it is not part of any official
guidelines.</p>
<h2 id="why-no-achievement">Why no achievement?</h2>
<p>I was also aiming for the goal of completing the challenge in 100 days by
writing one post a day, but then the holiday struck. My thoughts
<a href="/blog/holiday-break-for-week/">before I left</a> and later
<a href="/blog/feelings-about-writing-break/">when I returned</a> are summarized
in those posts. So what are my numbers?</p>
<ul>
<li>From and including: <strong>Thursday, March 11, 2021</strong></li>
<li>To and including: <strong>Friday, June 25, 2021</strong></li>
<li><strong>Result: 107 days</strong></li>
<li>It is 107 days from the start date to the end date, end date included.</li>
<li>Or 3 months, 15 days including the end date.</li>
</ul>
<p>107 days, still pretty nice, if you ask me. The holiday was definitely
deserved, so no problems here. I feel pretty good about the result.</p>
<h2 id="will-i-stop-writing-now">Will I stop writing now?</h2>
<p>No.</p>
<p>This is a final post of <a href="https://100daystooffload.com">#100daystooffload</a>!</p>
Vim increment in git rebase2021-06-24T00:00:00+00:002021-06-24T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/vim-increment-decrement-git-rebase/<p>I accidentally stumbled upon a hidden vim feature. The normal mode key
shortcuts responsible for increasing the number following the cursor,
<strong>Ctrl+a</strong> (increment), and for decreasing it,
<strong>Ctrl+x</strong> (decrement), do something different in the
interactive rebase editor. Watch for yourself:</p>
<p><img src="https://peterbabic.dev/blog/vim-increment-decrement-git-rebase/vim-increment-decrement-git-rebase.gif" alt="Using Ctrl+a as increment and Ctrl+x as decrement in vim during interactive rebase cycles through rebase actions" /></p>
<p>Instead of affecting the numbers it affects the rebase command under the
cursor. Specifically, it rotates the following rebase commands in the
normal and opposite direction respectively:</p>
<p><code>pick</code>, <code>edit</code>, <code>fixup</code>, <code>squash</code>, <code>reword</code>, <code>drop</code></p>
<p>I have no git plug-ins enabled, but have yet to confirm the exact
scenario in which this feature is enabled. Not sure if it is terribly handy,
as it is quite easy to just do <code>jjciwr</code> to reword the third commit, for
instance, but still interesting.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://vim.fandom.com/wiki/Increasing_or_decreasing_numbers">https://vim.fandom.com/wiki/Increasing_or_decreasing_numbers</a></li>
<li><a href="https://git-scm.com/docs/git-rebase#_interactive_mode">https://git-scm.com/docs/git-rebase#_interactive_mode</a></li>
</ul>
<p>This is a 99th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
A cheap 40-pin flat cable fail2021-06-23T00:00:00+00:002021-06-23T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/cheap-flat-40-pin-cable-fail/<p>I have bought a handful of
<a href="https://www.tme.eu/en/details/fcs-c1-40-g/idc-connectors/adam-tech/">40 pin IDC connectors</a>
for a flat cable. I wanted to use them on the cheap, colorful, no-datasheet
40-pin flat cable bought long ago from eBay that I had lying around. The
idea was to use it to extend the Raspberry 40-pin header with a cable, to
tinker on a shield that has its interesting parts on the bottom,
sandwiched between itself and the Pi. The result was not pleasant:</p>
<p><img src="https://peterbabic.dev/blog/cheap-flat-40-pin-cable-fail/40-pin-cable-fail.png" alt="The 40-pin IDC cable is shorter than the connector meant for it, a fail of cheap Ebay products." /></p>
<p>So I went to the local store (heat wave still ongoing) and bought a proper
AWG28 40-pin cable. Now the connector fits the cable:</p>
<p><img src="https://peterbabic.dev/blog/cheap-flat-40-pin-cable-fail/40-pin-cable-success.png" alt="The 40-pin IDC connector fits the proper AWG28 cable." /></p>
<p>The sad part is that I had zillions of such cables from old IDE HDDs,
yet a stock of such things suffers greatly when moving. Now I needed just a
single one and had to go through all this unnecessary trouble to obtain
one and get it to work. Hopefully I am not missing something terribly
obvious and it will in fact work with the Raspberry. Stay tuned.</p>
<p>This is a 98th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Resistors on the DIN rail2021-06-22T00:00:00+00:002021-06-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/resistors-on-din-rail/<p>Continuing my, now semi-regular work on the cabinet wiring, I have found
myself in a curious situation. The datasheet for the product contained the
following schematic:</p>
<p><img src="https://peterbabic.dev/blog/resistors-on-din-rail/inputs-note.png" alt="The wiring schematic for the inputs in the common cathode wiring focusing on current-limiting series resistors." /></p>
<p>The schematic demonstrates the recommended wiring between the controller,
in this instance a PLC and the device itself. I will refer to it as
<em>device</em>, but a keen eye can also catch the input labels, recognizing the
device to actually be a stepper driver. I have started writing about
stepper motors <a href="/blog/understanding-pulse-ouputs-mduino-38ar-plus/">here</a>
and then more recently <a href="/blog/stepper-motors-2-phase-3-phase/">here</a>, but
this post is more general and can be translated to other devices with
inputs as well.</p>
<p>The schematic specifically discusses the device inputs in a common cathode
wiring configuration, in the industrial automation jargon known more
loosely simply as "PNP". Note that there is nothing wrong with the
schematic or the wiring itself.</p>
<h2 id="resistors-in-a-wiring-cabinet">Resistors in a wiring cabinet</h2>
<p>Most things related to wiring cabinets and PLCs tend to work around the
nominal value of 24 VDC. The 24 VDC applies to the supply voltage as
well as to discrete signals. From the schematic above it is clear,
however, that the device was designed to work with 5 V signals as well.
These are historically common for TTL logic level electronics.</p>
<p>The description under the schematic shows that the internal
current-limiting resistor for the optocouplers inside the device is
sufficiently high for 5 V inputs. Yet here's the catch. The internal
resistor might not be sufficient when the signal levels are higher. For
significantly higher voltages, 24 V specifically, the description
recommends an additional 3k series resistor, otherwise the optocoupler's
diode could burn, permanently damaging the input.</p>
<h2 id="the-value-of-internal-resistor">The value of internal resistor</h2>
<p>Without any measurements, let's calculate the most probable value of the
internal resistor used. Keep in mind that these are ballpark values.
Most common optocouplers operate safely with a diode current of 2.5 mA to
around 25 mA and are able to safely sustain short peaks of around 50 mA, give
or take.</p>
<p>Let's assume around 10mA flows through the diode with the 5V input signal.
What is the internal resistor value?</p>
<p><code>5 V / 0.010 A = 500 ohm</code></p>
<p>Calculating the actual resistor value range for the 5V signals would bring
us to the following table, adjusted for the
<a href="https://en.wikipedia.org/wiki/E_series_of_preferred_numbers">resistor standard values</a>:</p>
<table><thead><tr><th>ohms</th><th>current</th><th>description</th></tr></thead><tbody>
<tr><td>220</td><td>22.73 mA</td><td>minimal resistor</td></tr>
<tr><td>470</td><td>10.64 mA</td><td>safe value</td></tr>
<tr><td>1k</td><td>5.00 mA</td><td>3.3V included</td></tr>
<tr><td>2k2</td><td>2.27 mA</td><td>maximal resistor</td></tr>
</tbody></table>
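<p>The currents in the table follow directly from Ohm's law, I = U / R. A
quick shell check over the standard values (note that 5 V through 2k2 works
out to about 2.27 mA):</p>

```shell
# Ohm's law check for the 5 V signal: I = U / R, printed in mA
currents=$(for r in 220 470 1000 2200; do
    awk -v r="$r" 'BEGIN { printf "%d ohm -> %.2f mA\n", r, 5 / r * 1000 }'
done)
echo "$currents"
```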
<p>If the device is capable of using a 3.3 V signals, although the datasheet
does not mention it, the maximal internal resistor value is more limited:</p>
<p><code>3.3 V / 0.0025 A = 1320 ohm</code></p>
<p>The standard value available is 1k2, but 1k is far more likely. With the above
in mind, it is probably safe to assume that the internal resistor's value
is somewhere in the range from 220 ohms to 1k, possibly, but not likely, up
to 2k2.</p>
<h2 id="24-v-signals">24 V signals</h2>
<p>Let's calculate what would happen without the external resistor when
24 V signals are applied:</p>
<table><thead><tr><th>ohms</th><th>current</th><th>description</th></tr></thead><tbody>
<tr><td>220</td><td>109.09 mA</td><td>diode fried</td></tr>
<tr><td>470</td><td>51.06 mA</td><td>diode in danger</td></tr>
<tr><td>1k</td><td>24.00 mA</td><td>diode reaching its limits</td></tr>
<tr><td>2k2</td><td>10.91 mA</td><td>diode safe</td></tr>
</tbody></table>
<p>It is very likely that a sane designer chose a value of 1k for the
internal resistor, as it makes the input safe to use for voltages from 3.3 V
to a little above 24 V and everything in between, including 5 V and 12 V.
So why is the 3k external resistor recommended for 24 V signals?</p>
<h2 id="the-3k-external-resistor-mystery">The 3k external resistor mystery</h2>
<p>Recalculating the above table for the 24 V signals with the external 3k
resistor we get these values:</p>
<table><thead><tr><th>internal R</th><th>external R</th><th>ohms</th><th>current</th></tr></thead><tbody>
<tr><td>220</td><td>3k</td><td>3220</td><td>7.45 mA</td></tr>
<tr><td>470</td><td>3k</td><td>3470</td><td>6.92 mA</td></tr>
<tr><td><strong>1k</strong></td><td><strong>3k</strong></td><td><strong>4k</strong></td><td><strong>6.00 mA</strong></td></tr>
<tr><td>2k2</td><td>3k</td><td>5k2</td><td>4.62 mA</td></tr>
</tbody></table>
<p>No matter the internal resistance, all the current values are well
within the safe range.</p>
<h2 id="we-have-the-numbers-now-what">We have the numbers, now what?</h2>
<p>The numbers are nice, but there is a more practical problem at hand. Where
does one put a humble resistor in a wiring cabinet? Others are
<a href="https://electronics.stackexchange.com/questions/417937/best-practice-for-adding-resistor-to-plc-cabinet">asking this very question</a>.
The best solution I have found for making the cabinet documentation a
breeze, while simultaneously making that same cabinet reproducible and
repairable by others, is to use the WAGO 289-114 with 288-001 and 288-002. To
save you from searching, it looks like this:</p>
<p><img src="https://peterbabic.dev/blog/resistors-on-din-rail/wago.png" alt="A pre-made component for a DIN rail containing 8 resistors and terminals for them." /></p>
<p>These three combined can be mounted on a DIN rail, which is almost a must
for most wiring cabinets I have encountered. There are 8 resistors and
terminals to insert the cables. The WAGO 236 terminals require a special lever
tool to insert the wires, but it can be done with a flat screwdriver if
really desperate, although it will leave visible marks on the terminals.</p>
<p>There is only a single complaint - the 289-114 contains resistors with a
nominal value of 2k7, not the 3k recommended for our device. Will it make a
difference? You probably know already, but for the sake of completeness,
here's the calculation:</p>
<table><thead><tr><th>internal R</th><th>external R</th><th>ohms</th><th>current</th></tr></thead><tbody>
<tr><td>220</td><td>2k7</td><td>2920</td><td>8.22 mA</td></tr>
<tr><td>470</td><td>2k7</td><td>3170</td><td>7.57 mA</td></tr>
<tr><td><strong>1k</strong></td><td><strong>2k7</strong></td><td><strong>3k7</strong></td><td><strong>6.49 mA</strong></td></tr>
<tr><td>2k2</td><td>2k7</td><td>4k9</td><td>4.90 mA</td></tr>
</tbody></table>
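<p>The series-combination rows can be reproduced the same way, with
I = U / (Rint + Rext); for instance the assumed 1k internal plus the 2k7
external resistor at 24 V:</p>

```shell
# Current through the optocoupler diode with the series combination:
# I = U / (R_internal + R_external), here 24 V across 1k + 2k7
i_ma=$(awk 'BEGIN { printf "%.2f", 24 / (1000 + 2700) * 1000 }')
echo "$i_ma mA"   # prints 6.49 mA
```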
<p>So will it work with 2k7 instead of 3k external resistors using 24 V
signals? Absolutely.</p>
<p>This is a 97th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
I converted my blog to zola!2021-06-21T00:00:00+00:002021-06-21T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/converted-my-blog-to-zola/<p>Today, precisely one year ago, was my last day at my previous
job. For this anniversary I managed to convert my blog entirely from
Sapper, based on Svelte, to <a href="https://www.getzola.org/">zola</a>. I have written
about zola a little bit in
<a href="/blog/one-disadvantage-git-based-blog/">this post</a>.</p>
<p>Everything I touch in zola surprises me in a positive way. There is still a
lot of work, but since I have finally freed myself from the svelte-kit
related problems I have mentioned in numerous previous posts, I can
now focus on polishing the blog without the risk of getting stuck. Hope you
will enjoy it at least as much as I do!</p>
<p>This is a 96th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
ModbusRTU for TouchBerry 10 pt.32021-06-20T00:00:00+00:002021-06-20T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/modbusrtu-for-touchberry-10-pt3/<p><strong>This post is a part of a series about the TouchBerry ModbusRTU autoflow
issue and how have I resolved it. Other posts of the series can be found
under the <a href="/tags/touchberry">touchberry</a> tag. Note that newer posts might
contain more recent information.</strong></p>
<p>I've spent the past few days fighting two things: a heat wave and a
no-autoflow TouchBerry 10 - <a href="/blog/no-autoflow-rs485-touchberry-10/">part 1</a> and
<a href="/blog/modbusrtu-on-touchberry-10-pt2/">part 2</a>. Today, after a slew of
experiments and a few read forums and articles later, I decided to order
the
<a href="https://www.maximintegrated.com/en/products/interface/transceivers/MAX13487E.html">MAX13487E</a> -
a half-duplex RS-485 IC with autoflow baked in, to replace the
<a href="http://www.unisonic.com.tw/datasheet/UTRS485.pdf">UTRS485G</a> that requires
manual direction control, making most ModbusRTU utilities and
libraries hard or impossible to use.</p>
<h2 id="problems-with-rts">Problems with RTS</h2>
<p>The GPIO17 can be configured into RTS mode via the ALT3 function (RTS0 to be
specific). No matter what I tried, I could not make the RTS pin do anything
on the scope automatically during transmission. Many sources point to
the fact that this feat is complicated. The sources that claim to know how
to do it sometimes do not specify the Raspberry Pi model, or they omit
whether they use <code>/dev/ttyS0</code>, called mini-uart, or the serial under
<code>/dev/ttyAMA0</code>. Some kind of dtoverlay work is probably necessary. Even if
I could make the RTS work, there are still problems with this proposal.</p>
<p>The first problem is that, as I have already outlined in previous articles,
the RE pin on the UTRS485G is supposedly connected to GPIO17 on the
TouchBerry 10, and we actually need the DE pin, which is connected to GPIO27.
Inspecting the UTRS485G IC itself on the UPS shield PCB however shows that
the RE and DE pins are clearly connected together, as they should be, creating
doubts about the support I have gotten from IndustrialShields. So even if
the RTS worked, it would be automatically switching the wrong pin. Pulling
GPIO27 HIGH makes transmission seamless, so most definitely this pin itself is
controlling both DE and RE on the UTRS485G. To be sure, I have ordered an
SV-SOIC8 test clip along with some IDC 40-pin connectors to extend the Raspberry
40-pin header via cables, as the UTRS485G is on the bottom of the UPS shield,
not reachable by clips when the shield is actually inserted into the
Raspberry. I was considering the Pomona POM-5250, which is quite a
well-known clip, but my local dealer had it at twice the price of the
former one. Hopefully this decision turns out fine. It is better to have at
least some SO8 clips than not to have any when an
<a href="/blog/how-use-flashrom-archlinux-arm/">urgent EEPROM flash is required</a>.</p>
<p>The second problem is as bad as the first, maybe worse. As I later
found out, the
<a href="https://www.raspberrypi.org/forums/viewtopic.php?t=257816#p1572327">RTS pin has an inverted polarity</a>,
meaning it stays HIGH and goes LOW during transmission. Fail. Close to
impossible to change in software alone.</p>
<h2 id="reasons-for-choosing-max13487e">Reasons for choosing MAX13487E</h2>
<p>There are numerous people around claiming the MAX13487E is a good choice
for ModbusRTU applications. There is a slight distinction in pin
functionality, which I have analyzed more deeply in
<a href="/blog/modbusrtu-on-touchberry-10-pt2/">part 2</a>, but it is generally
pin-to-pin compatible with most other RS-485 half-duplex ICs in the SO8 package,
including the UTRS485G. The biggest problem when replacing components like
these is the voltage levels, but again, these two parts are both 5 V, so no
problem. The conversion to 3.3 V for Raspberry UART levels is already done
on the UPS board.</p>
<p>The choice of UTRS485G is strange. For instance, there is a review comment
for
<a href="https://www.sparkfun.com/products/10124">SparkFun Transceiver Breakout - RS-485</a>
which uses another obscure part, the SP3485 IC. User
<a href="https://www.sparkfun.com/users/499376">#499376</a> then states:</p>
<blockquote>
<p>I would not have purchased this board. Why? Because the obscure SP3485
chip that it uses does not have automatic flow control and is therefore
not compatible with many popular open source libraries for RS485 and in
particular Modbus RTU.</p>
</blockquote>
<blockquote>
<p>Unfortunately there don't seem to be a lot of other choices except
certain parts in online auctions and building one's own board with
MAX488, MAX13487E or another flow-control transceiver.</p>
</blockquote>
<p>This user basically confirms what I have learned the hard way here. Then
there are people claiming
<a href="https://electronics.stackexchange.com/questions/477777/noisy-rs485-signal-with-max13487e-and-raspberry-pi#comment1213079_477787">full</a>
or at least <a href="https://electronics.stackexchange.com/a/391861/24435">partial</a>
success with the MAX13487E on Electronics StackExchange. More positive
references can be found
<a href="https://www.thethingsnetwork.org/forum/t/ttgo-esp32-modbus-rs485-node-anyone/20115/2">here</a>
and <a href="https://forums.ghielectronics.com/t/modbus-rs-485-rts/22746/6">here</a>.</p>
<h2 id="next-steps">Next steps</h2>
<p>The resolution could also present itself as a random email response from
IndustrialShields about how to do it properly in software, but since I have
tried so many things, my hopes for this scenario are quite low, to be
honest. Hopefully autoflow will work when the ICs are replaced and I will
be able to use ModbusRTU on the TouchBerry 10 without more time spent on the
issue.</p>
<p>This is a 95th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="http://www.deater.net/weave/vmwprod/hardware/pi-rts/">http://www.deater.net/weave/vmwprod/hardware/pi-rts/</a></li>
<li><a href="https://www.going-flying.com/blog/raspberry-pi-uart.html">https://www.going-flying.com/blog/raspberry-pi-uart.html</a></li>
<li><a href="https://pinout.xyz/pinout/pin13_gpio27#">https://pinout.xyz/pinout/pin13_gpio27#</a></li>
</ul>
Upgrading wiringpi on Raspberry Pi 42021-06-19T00:00:00+00:002021-06-19T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/upgrading-wiringpio-raspberry-pi-4/<p>I have had a problem with the <code>gpio</code> command on Raspberry Pi 4. Checking
its version with <code>gpio -v</code> produced the following output:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>gpio version: 2.50
</span><span>Copyright (c) 2012-2018 Gordon Henderson
</span><span>This is free software with ABSOLUTELY NO WARRANTY.
</span><span>For details type: gpio -warranty
</span><span>
</span><span>Raspberry Pi Details:
</span><span> Type: Unknown17, Revision: 04, Memory: 0MB, Maker: Sony
</span><span> * Device tree is enabled.
</span><span> *--> Raspberry Pi 4 Model B Rev 1.4
</span><span> * This Raspberry Pi supports user-level GPIO access
</span></code></pre>
<p>It could not detect the board, as can be seen by <code>Type: Unknown17</code>. Another
problem was that it was sitting at 2.50, refusing to update any higher even with
a full upgrade:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> apt update && </span><span style="color:#bf616a;">sudo</span><span> apt full-upgrade
</span></code></pre>
<p>I started solving all this because trying to read the GPIO state was
unsuccessful:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">gpio</span><span> readall
</span></code></pre>
<p>And resulted in the following error:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Oops - unable to determine board type... model: 17
</span></code></pre>
<p>I am not sure what the number 17 stands for right now, but it coincides
with the above unknown board type.</p>
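<p>For the curious, the 17 is most likely the board-type field of the Pi's
new-style revision code: bits 4-11 hold the model, and for a Pi 4B that
field is 0x11, decimal 17 - a model the old wiringpi 2.50 predates. A sketch
of the decoding, assuming the revision code <code>0xd03114</code> (which should
correspond to a Pi 4B rev 1.4 with 8 GB; the real value is in
<code>/proc/cpuinfo</code>):</p>

```shell
# Decode the board-type field (bits 4-11) of a new-style revision code.
# 0xd03114 is assumed here; read yours from /proc/cpuinfo instead.
rev=$(( 0xd03114 ))
board_type=$(( (rev >> 4) & 0xff ))
echo "$board_type"   # prints 17 (0x11)
```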
<h2 id="manual-upgrade">Manual upgrade</h2>
<p>One of the ways to solve the situation is to upgrade the package manually:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">wget</span><span> https://project-downloads.drogon.net/wiringpi-latest.deb
</span><span style="color:#bf616a;">sudo</span><span> dpkg</span><span style="color:#bf616a;"> -i</span><span> wiringpi-latest.deb
</span></code></pre>
<p>We now receive a more sensible output from <code>gpio -v</code>:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>gpio version: 2.52
</span><span>Copyright (c) 2012-2018 Gordon Henderson
</span><span>This is free software with ABSOLUTELY NO WARRANTY.
</span><span>For details type: gpio -warranty
</span><span>
</span><span>Raspberry Pi Details:
</span><span> Type: Pi 4B, Revision: 04, Memory: 8192MB, Maker: Sony
</span><span> * Device tree is enabled.
</span><span> *--> Raspberry Pi 4 Model B Rev 1.4
</span><span> * This Raspberry Pi supports user-level GPIO access.
</span></code></pre>
<p>The version was updated, the type is now recognized and the memory is
correctly displayed as well.</p>
<h2 id="checking-state-of-gpios">Checking state of GPIOs</h2>
<p>Getting the GPIO state via <code>gpio readall</code> works as well:</p>
<p><img src="https://peterbabic.dev/blog/upgrading-wiringpio-raspberry-pi-4/gpio-readall-pi4b.png" alt="A screenshot on gpio readall command on Raspberry Pi 4B" /></p>
<p>For the sake of SEO and completeness, the table is split out into the left
and right sections below as well.</p>
<h3 id="left-side">Left side</h3>
<table><thead><tr><th>BCM</th><th>wPi</th><th>Name</th><th>Mode</th><th>V</th><th>Physical</th></tr></thead><tbody>
<tr><td></td><td></td><td>3.3v</td><td></td><td></td><td>1</td></tr>
<tr><td>2</td><td>8</td><td>SDA.1</td><td>ALT0</td><td>1</td><td>3</td></tr>
<tr><td>3</td><td>9</td><td>SCL.1</td><td>ALT0</td><td>1</td><td>5</td></tr>
<tr><td>4</td><td>7</td><td>GPIO. 7</td><td>IN</td><td>1</td><td>7</td></tr>
<tr><td></td><td></td><td>0v</td><td></td><td></td><td>9</td></tr>
<tr><td>17</td><td>0</td><td>GPIO. 0</td><td>IN</td><td>0</td><td>11</td></tr>
<tr><td>27</td><td>2</td><td>GPIO. 2</td><td>IN</td><td>0</td><td>13</td></tr>
<tr><td>22</td><td>3</td><td>GPIO. 3</td><td>IN</td><td>0</td><td>15</td></tr>
<tr><td></td><td></td><td>3.3v</td><td></td><td></td><td>17</td></tr>
<tr><td>10</td><td>12</td><td>MOSI</td><td>ALT0</td><td>0</td><td>19</td></tr>
<tr><td>9</td><td>13</td><td>MISO</td><td>ALT0</td><td>0</td><td>21</td></tr>
<tr><td>11</td><td>14</td><td>SCLK</td><td>ALT0</td><td>0</td><td>23</td></tr>
<tr><td></td><td></td><td>0v</td><td></td><td></td><td>25</td></tr>
<tr><td>0</td><td>30</td><td>SDA.0</td><td>IN</td><td>1</td><td>27</td></tr>
<tr><td>5</td><td>21</td><td>GPIO.21</td><td>IN</td><td>1</td><td>29</td></tr>
<tr><td>6</td><td>22</td><td>GPIO.22</td><td>IN</td><td>1</td><td>31</td></tr>
<tr><td>13</td><td>23</td><td>GPIO.23</td><td>IN</td><td>0</td><td>33</td></tr>
<tr><td>19</td><td>24</td><td>GPIO.24</td><td>IN</td><td>0</td><td>35</td></tr>
<tr><td>26</td><td>25</td><td>GPIO.25</td><td>IN</td><td>0</td><td>37</td></tr>
<tr><td></td><td></td><td>0v</td><td></td><td></td><td>39</td></tr>
</tbody></table>
<h3 id="right-side">Right side</h3>
<table><thead><tr><th>Physical</th><th>V</th><th>Mode</th><th>Name</th><th>wPi</th><th>BCM</th></tr></thead><tbody>
<tr><td>2</td><td></td><td></td><td>5v</td><td></td><td></td></tr>
<tr><td>4</td><td></td><td></td><td>5v</td><td></td><td></td></tr>
<tr><td>6</td><td></td><td></td><td>0v</td><td></td><td></td></tr>
<tr><td>8</td><td>1</td><td>ALT5</td><td>TxD</td><td>15</td><td>14</td></tr>
<tr><td>10</td><td>1</td><td>ALT5</td><td>RxD</td><td>16</td><td>15</td></tr>
<tr><td>12</td><td>0</td><td>IN</td><td>GPIO. 1</td><td>1</td><td>18</td></tr>
<tr><td>14</td><td></td><td></td><td>0v</td><td></td><td></td></tr>
<tr><td>16</td><td>1</td><td>OUT</td><td>GPIO. 4</td><td>4</td><td>23</td></tr>
<tr><td>18</td><td>1</td><td>OUT</td><td>GPIO. 5</td><td>5</td><td>24</td></tr>
<tr><td>20</td><td></td><td></td><td>0v</td><td></td><td></td></tr>
<tr><td>22</td><td>0</td><td>IN</td><td>GPIO. 6</td><td>6</td><td>25</td></tr>
<tr><td>24</td><td>1</td><td>OUT</td><td>CE0</td><td>10</td><td>8</td></tr>
<tr><td>26</td><td>1</td><td>OUT</td><td>CE1</td><td>11</td><td>7</td></tr>
<tr><td>28</td><td>1</td><td>IN</td><td>SCL.0</td><td>31</td><td>1</td></tr>
<tr><td>30</td><td></td><td></td><td>0v</td><td></td><td></td></tr>
<tr><td>32</td><td>0</td><td>IN</td><td>GPIO.26</td><td>26</td><td>12</td></tr>
<tr><td>34</td><td></td><td></td><td>0v</td><td></td><td></td></tr>
<tr><td>36</td><td>0</td><td>IN</td><td>GPIO.27</td><td>27</td><td>16</td></tr>
<tr><td>38</td><td>0</td><td>IN</td><td>GPIO.28</td><td>28</td><td>20</td></tr>
<tr><td>40</td><td>0</td><td>IN</td><td>GPIO.29</td><td>29</td><td>21</td></tr>
</tbody></table>
<p>There is still a problem however.</p>
<h2 id="alternate-function-for-a-gpio">Alternate function for a GPIO</h2>
<p>Trying to change the pin alternate function:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pigs m 17 3
</span></code></pre>
<p>Fails with <code>socket connect failed</code> error. But this can be solved easily by
restarting the <code>pigpio</code> daemon, so it can pick up the upgraded files:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> systemctl restart pigpiod.service
</span></code></pre>
<p>Running <code>sudo pigs m 17 3</code> now produces no error. The state can be
verified, for example, as follows:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">gpio</span><span> readall | </span><span style="color:#bf616a;">grep -e</span><span> BCM</span><span style="color:#bf616a;"> -e</span><span> ALT3</span><span style="color:#bf616a;"> -e</span><span> + | </span><span style="color:#bf616a;">head -5
</span></code></pre>
<p>Yeah, quite clunky, I know. I could not find a way to run <code>gpio readall</code>
for a single pin. The above outputs the following:</p>
<p><img src="https://peterbabic.dev/blog/upgrading-wiringpio-raspberry-pi-4/gpio-readall-pi4b-grep.png" alt="A line of gpio readdall containing the precise pin" /></p>
<p>Instead of IN, the mode for pin 17 is now ALT3, which stands for RTS0.</p>
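<p>For a quick single-pin check, the readall output can also be filtered
programmatically. Below is a minimal sketch in Python, assuming the column
layout shown in the tables above (BCM, wPi, Name, Mode, V, Physical on the
left half, mirrored on the right) - the function name and row parsing are my
own, not anything provided by WiringPi:</p>

```python
def mode_for_bcm(readall_output, bcm):
    """Return the Mode column for a given BCM pin from `gpio readall` text."""
    for line in readall_output.splitlines():
        if "||" not in line:
            continue  # borders and header rows carry no pin data
        left, right = line.split("||", 1)
        lf = [f.strip() for f in left.strip().strip("|").split("|")]
        rf = [f.strip() for f in right.strip().strip("|").split("|")]
        # left half columns: BCM | wPi | Name | Mode | V | Physical
        if lf[0] == str(bcm):
            return lf[3]
        # right half columns: Physical | V | Mode | Name | wPi | BCM
        if rf[-1] == str(bcm):
            return rf[2]
    return None
```

<p>Feeding it the output of <code>gpio readall</code> for BCM pin 17 should then report
ALT3 after the <code>pigs</code> command above, without the grep acrobatics.</p>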
<h2 id="a-note-for-archlinux-arm">A note for ArchLinux ARM</h2>
<p>In case you find yourself using ArchLinux ARM and needing to manipulate
GPIOs, it should be less of a hassle. ArchLinux ARM has another repository
baked in called <code>alarm</code>, which is a shorthand for, you guessed it, ArchLinux
ARM. This repository is enabled by default and sits alongside the official
repositories, named <code>core</code>, <code>extra</code>, <code>community</code> and <code>multilib</code> in Arch,
where the latter needs to be enabled first.</p>
<p>Getting hold of the GPIO related software is achievable like this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">yay</span><span> gpio
</span></code></pre>
<p>The result can be understood easily with the following screenshot:</p>
<p><img src="https://peterbabic.dev/blog/upgrading-wiringpio-raspberry-pi-4/yay-gpio.png" alt="The result of yay gpio command on Archlinux ARM, the alarm repository can is present among the relevant results like community/gpio-utils, alarm/wiringpi or aur/pigpio" /></p>
<p>The <code>alarm</code> repository is present among the relevant results like
<code>community/gpio-utils</code>, <code>alarm/wiringpi</code> or <code>aur/pigpio</code>.</p>
<p>This is the 94th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="http://wiringpi.com/wiringpi-updated-to-2-52-for-the-raspberry-pi-4b/">http://wiringpi.com/wiringpi-updated-to-2-52-for-the-raspberry-pi-4b/</a></li>
<li><a href="https://projects.drogon.net/raspberry-pi/wiringpi/the-gpio-utility/">https://projects.drogon.net/raspberry-pi/wiringpi/the-gpio-utility/</a></li>
</ul>
ModbusRTU for TouchBerry 10 pt.22021-06-18T00:00:00+00:002021-06-18T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/modbusrtu-on-touchberry-10-pt2/<p><strong>This post is a part of a series about the TouchBerry ModbusRTU autoflow
issue and how I resolved it. Other posts of the series can be found
under the <a href="/tags/touchberry">touchberry</a> tag. Note that newer posts might
contain more recent information.</strong></p>
<p>It is quite hard to do a lot of effective work in this heat wave, but I
have pushed myself to do at least the bare minimum, apart from drinking
water and hiding from the sun. Thus, continuing the work on unlocking the
full potential of the
<a href="/blog/no-autoflow-rs485-touchberry-10/">TouchBerry 10's RS485 capabilities</a>,
I have learned a few new things.</p>
<p>To send data over the RS485 bus, only the DE pin of the UTRS485G chip has
to be pulled HIGH. The RE pin should be pulled HIGH at the same time, but it
is not strictly required for the transfer. In practice, my Modbus IO device
did change the coil state on the controller's request, but could not
acknowledge to the controller (in this case the TouchBerry) that it had done
so. In other words, with the RE pin HIGH and the DE pin in whatever state,
the controller sees that the Modbus command timed out, even though the
command was in fact received and executed by the listening device.</p>
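<p>The behaviour observed above can be summed up with a tiny model of the
transceiver's enable pins. Note that the RE input on this class of chips is
typically active LOW - the function name and return strings below are my own
illustration, an assumption rather than anything from the UTRS485G
documentation:</p>

```python
def transceiver_state(de, re_):
    """Model a half-duplex RS485 transceiver's enable pins:
    DE enables the driver when HIGH, the (active LOW) RE pin
    enables the receiver when LOW."""
    driver = de          # driver active when DE is HIGH
    receiver = not re_   # receiver active when RE is LOW
    if driver and receiver:
        return "transmit, receiver still listening"
    if driver:
        return "transmit only"
    if receiver:
        return "receive"
    return "idle (high impedance)"
```

<p>This matches the observation: with DE HIGH the request goes out regardless
of RE, but leaving RE HIGH afterwards keeps the receiver disabled, so the
device's acknowledgement never reaches the controller and the command times
out.</p>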
<p>This probably brings us closer to the reason anything I tried related to
RTS0 and autoflow did not work. Recalling the relevant bits from the
original post:</p>
<table><thead><tr><th>RPi pin</th><th>GPIO</th><th>ALT3</th><th>UTRS485G pin</th></tr></thead><tbody>
<tr><td>11</td><td>17</td><td>RTS0</td><td>RE</td></tr>
<tr><td>13</td><td>27</td><td>SD1_DAT3</td><td>DE</td></tr>
</tbody></table>
<p>As we can see, this design of the UPS shield is really unfortunate, as it
would probably be better if these two paths were swapped. Having the DE pin
attached to the RTS0 alternate function could probably make RS485 automatic
direction control (autoflow for short) work with just a GPIO alternate
function and possibly a <code>stty</code> command; maybe something as simple as this
could suffice:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pigs m 17 3
</span><span style="color:#bf616a;">sudo</span><span> stty</span><span style="color:#bf616a;"> -F</span><span> /dev/ttyS0 crtscts
</span></code></pre>
<p>But the reality leaves us with GPIO17 tied to the RE pin and nothing to
automatically drive the DE pin tied to GPIO27. I am really tempted to tie
both pins together to test the RTS0 autoflow hypothesis. I could very well
live with such a hack. Especially since the delivery of the final product in
which the TouchBerry panel is mounted is nearing, something will have to be
done. The repairability of a product with such hacks of course goes down
sharply, but on the other hand, prototypes like this are built with hacks
most of the time anyway.</p>
<h2 id="rs485-autoflow-enabled-chip-alternative">RS485 autoflow enabled chip alternative</h2>
<p>Another possibility that I considered is to replace the basic UTRS485G chip
soldered onto the UPS shield inside the TouchBerry 10 with something more
advanced. By some chance I have stumbled upon IO cards from
<a href="https://www.embeddedpi.com/iocards">embeddedpi.com</a>. The first featured
one, a Half-duplex RS485 one called ISO-485 explains that it boasts a
<a href="https://www.maximintegrated.com/en/products/interface/transceivers/MAX13487E.html">MAX13487</a>.
This chip's description states:</p>
<blockquote>
<p>Half-Duplex RS-485/RS-422-Compatible Transceiver with <strong>AutoDirection
Control</strong></p>
</blockquote>
<p>Looking at its pins, it is almost entirely pin-to-pin compatible with the
UTRS485G. The only exception is pin 3, where SHDN is in place of DE on the
MAX13487E. Both chips can be wired the same way however. Pulling the RE and
SHDN pins HIGH makes the MAX13487E seamlessly transmit <em>and</em> receive data
over the bus without requiring any other manual action, provided that the
software ensures that two participants never speak at the same time. This
is of course solved by the client-server relation in the Modbus protocol.
With the UTRS485G, pulling both RE and DE HIGH enables just transmitting
data down the bus, disabling the receive functionality until both RE and DE
pins are pulled LOW again.</p>
<p>My local distributor has some MAX13487E in stock, but Maxim itself, for
instance, has none, with a 20 week lead time! Could the
<a href="/blog/automotive-chip-disruption-events/">chip famine I have also written about</a>
be the cause?</p>
<p>This is the 93rd post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
No autoflow for RS485 on TouchBerry 10?2021-06-17T00:00:00+00:002021-06-17T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/no-autoflow-rs485-touchberry-10/<p><strong>This post is a part of a series about the TouchBerry ModbusRTU autoflow
issue and how I resolved it. Other posts of the series can be found
under the <a href="/tags/touchberry">touchberry</a> tag. Note that newer posts might
contain more recent information.</strong></p>
<p>With access to the
<a href="https://www.industrialshields.com/shop/product/touchberry-pi-10-1-raspberry-pi-4b-1478">TouchBerry 10"</a>
model including RS485, I set out to control some ModbusRTU devices with it.
It has an M8, 3-pin connector. The center pin is connected to the supply
voltage ground, so I assumed the remaining two are A and B for RS485. As it
turned out, there is absolutely no documentation on the topic publicly
available.</p>
<h2 id="rs485-pinout">RS485 pinout</h2>
<p>Fortunately, I received an email from support quite promptly. The email
confirmed the pinout.</p>
<table><thead><tr><th>#</th><th>M8 position</th><th>RS485</th></tr></thead><tbody>
<tr><td>1</td><td>right</td><td>B-</td></tr>
<tr><td>2</td><td>left</td><td>A+</td></tr>
<tr><td>3</td><td>top</td><td>GND</td></tr>
</tbody></table>
<p>With the GND sorted out, getting A and B right is not a problem anyway.
If it does not work, just swap them and try again. With everything else
correct, this is a sufficient procedure.</p>
<h2 id="the-ups-shield-pinout">The UPS shield pinout</h2>
<p>Subsequently, the email contained the pinout of the UPS shield, which
hosts the RS485 chip. The chip is a Unisonic Technologies UTRS485G, a
manufacturer I had never heard of before. The UPS shield also hosts the
<em>uninterrupted power supply</em> electronics (hence the name) and the
DS3231 timekeeping chip for the RTC capabilities.</p>
<p>The shield makes use of the full 40 pin header of the Raspberry Pi 4 that
forms the core of the TouchBerry 10"; the details from the email are as
follows:</p>
<table><thead><tr><th>Fn</th><th>Descr.</th><th>#</th><th>#</th><th>Descr.</th><th>Fn</th></tr></thead><tbody>
<tr><td></td><td>NC</td><td><strong>1</strong></td><td><strong>2</strong></td><td>Vin</td><td></td></tr>
<tr><td>SDA</td><td>GPIO2</td><td><strong>3</strong></td><td><strong>4</strong></td><td>Vin</td><td></td></tr>
<tr><td>SCL</td><td>GPIO3</td><td><strong>5</strong></td><td><strong>6</strong></td><td>GND</td><td></td></tr>
<tr><td></td><td>NC</td><td><strong>7</strong></td><td><strong>8</strong></td><td><strong>GPIO14</strong></td><td><strong>TXD</strong></td></tr>
<tr><td></td><td>NC</td><td><strong>9</strong></td><td><strong>10</strong></td><td><strong>GPIO15</strong></td><td><strong>RXD</strong></td></tr>
<tr><td><strong>RE</strong></td><td><strong>GPIO17</strong></td><td><strong>11</strong></td><td><strong>12</strong></td><td>NC</td><td></td></tr>
<tr><td><strong>DE</strong></td><td><strong>GPIO27</strong></td><td><strong>13</strong></td><td><strong>14</strong></td><td>GND</td><td></td></tr>
<tr><td></td><td>NC</td><td><strong>15</strong></td><td><strong>16</strong></td><td>GPIO23</td><td>UPS IN</td></tr>
<tr><td></td><td>NC</td><td><strong>17</strong></td><td><strong>18</strong></td><td>GPIO24</td><td>UPS OUT</td></tr>
<tr><td></td><td>NC</td><td><strong>19</strong></td><td><strong>20</strong></td><td>GND</td><td></td></tr>
<tr><td></td><td>NC</td><td><strong>21</strong></td><td><strong>22</strong></td><td>NC</td><td></td></tr>
<tr><td></td><td>NC</td><td><strong>23</strong></td><td><strong>24</strong></td><td>NC</td><td></td></tr>
<tr><td></td><td>GND</td><td><strong>25</strong></td><td><strong>26</strong></td><td>NC</td><td></td></tr>
<tr><td></td><td>NC</td><td><strong>27</strong></td><td><strong>28</strong></td><td>NC</td><td></td></tr>
<tr><td></td><td>NC</td><td><strong>29</strong></td><td><strong>30</strong></td><td>GND</td><td></td></tr>
<tr><td></td><td>NC</td><td><strong>31</strong></td><td><strong>32</strong></td><td>NC</td><td></td></tr>
<tr><td></td><td>NC</td><td><strong>33</strong></td><td><strong>34</strong></td><td>GND</td><td></td></tr>
<tr><td></td><td>NC</td><td><strong>35</strong></td><td><strong>36</strong></td><td>NC</td><td></td></tr>
<tr><td></td><td>NC</td><td><strong>37</strong></td><td><strong>38</strong></td><td>NC</td><td></td></tr>
<tr><td></td><td>GND</td><td><strong>39</strong></td><td><strong>40</strong></td><td>NC</td><td></td></tr>
</tbody></table>
<p>This was quite helpful, as the chip responsible for the RS485
communication has to know in which direction the data is flowing. It is an
advanced feature to design the overall system such that this responsibility
is abstracted away from the user - so called autoflow or automatic flow
control.</p>
<h2 id="using-rs485-interface-on-touchberry-10">Using RS485 interface on TouchBerry 10</h2>
<p>The importance of autoflow is apparent from the python script provided
as the third part of the email from the IndustrialShields support:</p>
<pre data-lang="python" style="background-color:#2b303b;color:#c0c5ce;" class="language-python "><code class="language-python" data-lang="python"><span style="color:#65737e;">#!/usr/bin/env python3
</span><span>
</span><span style="color:#65737e;"># IMPORTANT: remember to add "enable_uart=1" line to /boot/config.txt
</span><span>
</span><span style="color:#b48ead;">from </span><span>gpiozero </span><span style="color:#b48ead;">import </span><span>OutputDevice
</span><span style="color:#b48ead;">from </span><span>time </span><span style="color:#b48ead;">import </span><span>sleep
</span><span style="color:#b48ead;">from </span><span>serial </span><span style="color:#b48ead;">import </span><span>Serial
</span><span>
</span><span style="color:#65737e;"># RO <-> GPIO15/RXD
</span><span style="color:#65737e;"># RE <-> GPIO17
</span><span style="color:#65737e;"># DE <-> GPIO27
</span><span style="color:#65737e;"># DI <-> GPIO14/TXD
</span><span style="color:#65737e;">#
</span><span style="color:#65737e;"># VCC <-> 3.3V
</span><span style="color:#65737e;"># B <-> RS-485 B
</span><span style="color:#65737e;"># A <-> RS-485 A
</span><span style="color:#65737e;"># GND <-> GND
</span><span>
</span><span style="color:#65737e;"># enable reception mode
</span><span>re = </span><span style="color:#bf616a;">OutputDevice</span><span>(</span><span style="color:#d08770;">17</span><span>)
</span><span>de = </span><span style="color:#bf616a;">OutputDevice</span><span>(</span><span style="color:#d08770;">27</span><span>)
</span><span>
</span><span>de.</span><span style="color:#bf616a;">off</span><span>()
</span><span>re.</span><span style="color:#bf616a;">off</span><span>()
</span><span>
</span><span style="color:#b48ead;">with </span><span style="color:#bf616a;">Serial</span><span>('</span><span style="color:#a3be8c;">/dev/ttyS0</span><span>', </span><span style="color:#d08770;">19200</span><span>) </span><span style="color:#b48ead;">as </span><span>s:
</span><span> </span><span style="color:#b48ead;">while </span><span style="color:#d08770;">True</span><span>:
</span><span> </span><span style="color:#65737e;"># waits for a single character
</span><span> rx = s.</span><span style="color:#bf616a;">read</span><span>(</span><span style="color:#d08770;">1</span><span>)
</span><span>
</span><span> </span><span style="color:#65737e;"># print the received character
</span><span> </span><span style="color:#96b5b4;">print</span><span>("</span><span style="color:#a3be8c;">RX: </span><span style="color:#d08770;">{0}</span><span>".</span><span style="color:#bf616a;">format</span><span>(rx))
</span><span>
</span><span> </span><span style="color:#65737e;"># wait some time before echoing
</span><span> </span><span style="color:#bf616a;">sleep</span><span>(</span><span style="color:#d08770;">0.1</span><span>)
</span><span>
</span><span> </span><span style="color:#65737e;"># enable transmission mode
</span><span> de.</span><span style="color:#bf616a;">on</span><span>()
</span><span> re.</span><span style="color:#bf616a;">on</span><span>()
</span><span>
</span><span> </span><span style="color:#65737e;"># echo the received character
</span><span> s.</span><span style="color:#bf616a;">write</span><span>(rx)
</span><span> s.</span><span style="color:#bf616a;">flush</span><span>()
</span><span>
</span><span> </span><span style="color:#65737e;"># disable transmission mode
</span><span> de.</span><span style="color:#bf616a;">off</span><span>()
</span><span> re.</span><span style="color:#bf616a;">off</span><span>()
</span></code></pre>
<p>To send data over the RS485 interface, the RE and DE pins of the chip
have to be pulled HIGH, the data should be transmitted, and then both RE and
DE pins should be pulled LOW immediately to receive the response. This is
very unfortunate, as most Modbus implementations assume autoflow, as is
also the case <a href="/blog/using-mbpoll-as-cli-for-modbus/">when using mbpoll</a>.</p>
<h2 id="missing-autoflow">Missing autoflow?</h2>
<p>Not having the autoflow functionality is a lot of hassle, as it renders
most standard Modbus implementations unusable. I have searched quite hard
for a way to do this in some nicely automated fashion on the RPi, on the
application or kernel level, visiting search term results like alternative
GPIO functions, dtoverlay, RTS0, uart-ctsrts and serial sniffers, but could
not find a solution yet. Some of the most promising results are referenced
below among other relevant links.</p>
<p>The GPIO17 has the alternate function RTS0 at ALT3, and it is connected
to the RE pin - this is hopefully done by design. There is however no such
function available for the GPIO27 connected to the DE pin. Ideally, both DE
and RE should be tied together and controlled by the single RTS0 pin, which
the UART asserts around transmissions on TXD0 (GPIO14). I have no idea why
the RE pin is separate. I will have to hook it up to the oscilloscope to
learn more. Before touching the soldering iron, I also have to wait for
another support email response - maybe they will provide some useful
information. After all, their
<a href="https://www.industrialshields.com/blog/raspberry-pi-for-industry-26/post/how-to-work-with-rs485-with-a-raspberry-plc-275">RPi based PLC has <code>/dev/ttySC0</code> and <code>/dev/ttySC1</code> for both RS485 channels</a>.</p>
<p>This is the 92nd post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="http://www.unisonic.com.tw/datasheet/UTRS485.pdf">http://www.unisonic.com.tw/datasheet/UTRS485.pdf</a></li>
<li><a href="https://www.industrialshields.com/web/content?model=ir.attachment&field=datas&id=137792&">https://www.industrialshields.com/web/content?model=ir.attachment&field=datas&id=137792&</a></li>
<li><a href="https://ethertubes.com/raspberry-pi-rts-cts-flow-control/">https://ethertubes.com/raspberry-pi-rts-cts-flow-control/</a></li>
<li><a href="https://raspberrypi.stackexchange.com/a/32504/59436">https://raspberrypi.stackexchange.com/a/32504/59436</a></li>
<li><a href="https://widgetlords.com/pages/rs485">https://widgetlords.com/pages/rs485</a></li>
<li><a href="https://www.raspberrypi.org/documentation/hardware/raspberrypi/bcm2711/rpi_DATA_2711_1p0_preliminary.pdf">https://www.raspberrypi.org/documentation/hardware/raspberrypi/bcm2711/rpi_DATA_2711_1p0_preliminary.pdf</a></li>
</ul>
Cross-compiling vs cross-compiling2021-06-16T00:00:00+00:002021-06-17T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/cross-compiling-vs-cross-compiling/<p>This topic has been driving me for some time. Cross-compiling is actually
a misnomer. Or an ambiguous term. Whatever. Basically, cross-compiling means
at least two different things in different people's minds.</p>
<p>The first group thinks of cross-compiling as a process after which an
executable is created from the source code for <strong>different operating
systems</strong>, sometimes referred to as <em>cross-platform compilation</em>. For
instance for Windows, MacOS, Linux and possibly others. The resulting
executable is able to run on one specific CPU architecture, most of the
time the same one the compilation was performed on. This is for example
the case of <a href="https://github.com/vercel/pkg">pkg</a> from Vercel (formerly
Zeit), used for making an executable out of NodeJS source code.</p>
<p>The second group thinks of cross-compilation as making an executable that
can be run on <strong>different CPU architectures</strong>. Most of the time this means
compiling on the x86_64 architecture while meaning to run the resulting
executable on the ARM architecture (either 32 or 64 bit). Examples of this
are compilers in the same category as <code>aarch64-linux-gnu-gcc</code> for the C
family of languages, or projects like
<a href="https://github.com/rust-embedded/cross">cross</a> for Rust. There are of
course others available; I know golang can do that elegantly as well.</p>
<h2 id="why-is-this-important">Why is this important?</h2>
<p>I am writing about this because I had dived into the topic
<a href="/blog/cross-package-node-app-arm-qemu-docker/">quite extensively some time ago</a>.
With my latest <a href="/blog/trying-tauri-with-svelte/">discovery of tauri</a>, I
tried to understand whether it can do cross-compilation for a <strong>different
architecture</strong> in addition to the cross-platform compilation it already
supports. There is <a href="https://github.com/tauri-apps/tauri/issues/941">#941</a>
and <a href="https://github.com/tauri-apps/tauri/pull/491">PR#491</a> and definitely
some others, most closed with a "not going to happen anytime soon" message.
Sad.</p>
<p>With all these complications, it looks like I will be better off buying
some ARM laptop for shipping Javascript based applications on embedded
systems.</p>
<p>This is the 91st post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Using mbpoll as a CLI for Modbus2021-06-15T00:00:00+00:002021-06-17T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/using-mbpoll-as-cli-for-modbus/<p>I have discovered a nice CLI tool called
<a href="https://github.com/epsilonrt/mbpoll">mbpoll</a> that can be used for fast
Modbus wiring validation via the CLI. It is very handy for anything that has
an Ethernet port for ModbusTCP, including devices like Raspberry Pi based
industrial controllers.</p>
<p>Additionally, if any kind of RS485 interfacing is available, then it can be
used for ModbusRTU as well. I was successfully able to use it with a cheap
CH340 chip based USB-to-RS485 dongle and with
<a href="https://revolution.kunbus.com/revpi-connect/">RevPi Connect</a> with its
integrated RS485 terminals.</p>
<h2 id="compiling-mbpoll-on-raspberry-pi-4">Compiling mbpoll on Raspberry Pi 4</h2>
<p>Although mbpoll is not available from the standard repositories, the
<a href="https://github.com/epsilonrt/mbpoll/blob/master/README.md">README</a> does
a good enough job explaining what to do, with some twists. In short:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> apt-get install cmake pkg-config libmodbus-dev
</span><span style="color:#bf616a;">git</span><span> clone https://github.com/epsilonrt/mbpoll.git
</span><span style="color:#96b5b4;">cd</span><span> mbpoll
</span><span style="color:#bf616a;">mkdir</span><span> build
</span><span style="color:#96b5b4;">cd</span><span> build
</span><span style="color:#bf616a;">cmake</span><span> ..
</span><span style="color:#bf616a;">make</span><span> package
</span><span style="color:#bf616a;">sudo</span><span> dpkg</span><span style="color:#bf616a;"> -i</span><span> mbpoll_1.4.25_armhf.deb
</span></code></pre>
<p>The condition for this is that <code>libmodbus-dev</code> is >= v3.1.4. At the time of
writing, on my device, it was at this exact version, so no problems here.
It is possible to check beforehand:</p>
<p><code>apt-cache show libmodbus-dev</code></p>
<p>Otherwise, compiling <code>libmodbus</code> from source is required as well.</p>
<h2 id="example-commands">Example commands</h2>
<p>To set the coil 4 (3 on devices that start counting from 0) on the device
with Modbus address 7 to the state HIGH with the USB dongle, this command
can be used:</p>
<p><code>mbpoll -t 4 -a 7 -b 9600 -P none /dev/ttyUSB0 1</code></p>
<p>For the completeness, to set the same coil to the state LOW:</p>
<p><code>mbpoll -t 4 -a 7 -b 9600 -P none /dev/ttyUSB0 0</code></p>
<p>Both commands assume that the slave is a ModbusRTU device communicating
at a 9600 baud rate with the 8N1 options. For the record, the ModbusRTU
slave used was a
<a href="https://papouch.com/quido-rs-8-8-8-vstupu-8-vystupu-a-teplomer-p4667/?currency=eur">Quido RS 8/8</a>.</p>
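<p>Under the hood, the first command above results in a ModbusRTU <em>write
single coil</em> request (function 0x05) on the wire. A sketch of how such a
frame is assembled, including the standard Modbus CRC-16 - the function
names are mine, but the frame layout and CRC parameters follow the Modbus
specification:</p>

```python
def crc16_modbus(data):
    """CRC-16/MODBUS: init 0xFFFF, reflected polynomial 0xA001."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def write_single_coil(slave, coil, on):
    """Build a write-single-coil request; `coil` counts from 0."""
    body = bytes([slave, 0x05, coil >> 8, coil & 0xFF])
    body += b"\xff\x00" if on else b"\x00\x00"  # 0xFF00 = ON, 0x0000 = OFF
    crc = crc16_modbus(body)
    return body + bytes([crc & 0xFF, crc >> 8])  # CRC low byte goes first

# the equivalent of setting coil 4 (element 3 from 0) HIGH on slave 7
frame = write_single_coil(7, 3, True)
```

<p>A handy self-check: running the CRC over a frame that already carries its
own CRC yields zero, which is also how a receiver validates incoming
frames.</p>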
<p>This is the 90th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Should cabinet door contain 230VAC elements?2021-06-14T00:00:00+00:002021-06-14T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/shoud-cabinet-door-contain-230vac/<p>With the current task of
<a href="/blog/first-impressions-qelectrotech-wiring-diagrams/">wiring an electrical cabinet</a>,
I was presented with the opinion that we should use 230VAC powered buttons
and indicators, because in our specific case it would cut corners, meaning
less time and resources required for the given task.</p>
<p>I found out that it could be done comfortably with a handful of safe,
off-the-shelf components, but I opposed the idea. Strapping 230VAC onto the
cabinet door means much less flexibility. If I wanted another indicator
there, it would almost certainly be 24VDC, so other wires would need to be
brought there anyway.</p>
<p>If someone wanted to extend the reach of that button or that LED indicator
somewhere else for better accessibility, they would have to tap that high
voltage off and route it elsewhere. Have you seen a small push-button with
230VAC attached to it? Not as common.</p>
<p>But have you seen a single miniature 230VAC LED? Unless I specifically
checked its package or maybe even its datasheet, it would not have occurred
to me in the slightest. And I doubt I am the only one here.</p>
<p>And the costs saved would in the end amount to 2 components worth 40 EUR
combined. Not worth cutting corners here instead of converting the signals
on the cabinet door to 24VDC.</p>
<p>This is the 89th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="http://www.plctalk.net/qanda/showthread.php?t=2367">http://www.plctalk.net/qanda/showthread.php?t=2367</a></li>
<li><a href="https://www.electriciansforums.net/threads/emergency-stop-circuit.120296/">https://www.electriciansforums.net/threads/emergency-stop-circuit.120296/</a></li>
</ul>
Giving up hope on svelte-kit2021-06-13T00:00:00+00:002021-06-14T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/giving-up-hope-on-svelte-kit/<p>I casually checked on the status of <a href="https://kit.svelte.dev/">svelte-kit</a>
after a few weeks of not doing so when I returned from holiday. The result
was not good. The open issues for the
<a href="https://github.com/sveltejs/kit/issues?q=is%3Aopen+is%3Aissue+milestone%3A1.0">1.0 milestone</a>
just pile up. It feels like it will never be released.</p>
<p>I got involved in
<a href="https://github.com/sveltejs/kit/issues/733">#733</a> due to the
<a href="/blog/insights-google-search-console/">trailing slash discrepancy</a> I
discovered in Sapper a while ago, and there I learned the team probably
tries to cram too many features in. Especially trying to somehow serve
every major serverless frontend platform. There are quite a lot of them
and they evolve quite fast, as is the norm with the web, so the goal
seems pretty elusive.</p>
<p>This makes me a little bit sad, as I would really love to see
<a href="https://github.com/svelte-add/tailwindcss">svelte-add/tailwindcss</a> used
somewhere. When I last tried it in February, it worked like a charm and it
was really fast! I wonder how this all gets sorted out.</p>
<p>This is the 88th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Trying tauri with svelte2021-06-12T00:00:00+00:002021-06-14T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/trying-tauri-with-svelte/<p>I have tried <a href="https://github.com/tauri-apps/tauri">tauri</a> and was pretty
pleased with the result. Tauri produces a single binary with the UI made in
the web frontend technologies we love to hate, like React, Vue or even
Svelte. Its magic is done in Rust.</p>
<p>Tauri's main competitor appears to be Electron. Although Electron is quite
popular, and I have used it in production once, its reputation seems to be
plagued by a large memory footprint and security vulnerabilities. On the
other hand, everything Rust related is marketed as more secure.</p>
<p>The bootstrapping can be done with
<a href="https://github.com/tauri-apps/tauri/tree/dev/tooling/create-tauri-app">create-tauri-app</a>.
It asks you to take a look at the
<a href="https://tauri.studio/en/docs/getting-started/setup-linux/">system requirements</a>,
and Arch is included, but I remember already having them in place. I do use
some Rust related software that has to be built, for instance
<a href="https://aur.archlinux.org/packages/paru/">paru</a>, so maybe this is
related.</p>
<p><code>npx create-tauri-app</code></p>
<p>For Svelte, my JS frontend of choice, one first has to choose
<code>@vitejs/create-app</code>; the submenu then offers <code>svelte</code> and <code>svelte-ts</code> for
TypeScript. The build process outputs the following error:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>>> Running final command(s)
</span><span>internal/modules/cjs/loader.js:883
</span><span> throw err;
</span><span> ^
</span><span>
</span><span>Error: Cannot find module '~/tauri/sdfsfd/node_modules/esbuild/install.js'
</span><span> at Function.Module._resolveFilename (internal/modules/cjs/loader.js:880:15)
</span><span> at Function.Module._load (internal/modules/cjs/loader.js:725:27)
</span><span> at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:72:12)
</span><span> at internal/main/run_main_module.js:17:47 {
</span><span> code: 'MODULE_NOT_FOUND',
</span><span> requireStack: []
</span><span>}
</span><span>Error with command: node
</span><span>Error: Error: Command failed with exit code 1: node ./node_modules/esbuild/install.js
</span><span> at ~/.npm/_npx/14052/pnpm-global/4/node_modules/.pnpm/create-tauri-app@1.0.0-beta.1/node_modules/create-tauri-app/dist/index.js:63:15
</span><span> at Generator.throw (<anonymous>)
</span><span> at rejected (~/.npm/_npx/14052/pnpm-global/4/node_modules/.pnpm/create-tauri-app@1.0.0-beta.1/node_modules/create-tauri-app/dist/index.js:40:65)
</span><span> at processTicksAndRejections (internal/process/task_queues.js:93:5)
</span></code></pre>
<p>I have yet to find how to get rid of it, but I tried to move forward in
spite of it:</p>
<p><code>pnpm install</code></p>
<p>No problems here. Finally:</p>
<p><code>pnpm run tauri build</code></p>
<p>The build time on my machine is quite long:</p>
<p><code>time pnpm run tauri build</code></p>
<p>Resulted in <code>145.47s user 1.42s system 298% cpu 49.285 total</code>.</p>
<p>Let's check the built executable's size:</p>
<p><code>du -h ./src-tauri/target/release/tarui-app</code></p>
<p>My system reported <code>26M</code>. I tried to run it:</p>
<p><code>./src-tauri/target/release/tarui-app</code></p>
<p>It worked despite the above error. I hope I will soon have the opportunity
to explore more of the tauri features.</p>
<p>This is the 87th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Stepper motors: 2-phase and 3-phase2021-06-11T00:00:00+00:002021-06-11T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/stepper-motors-2-phase-3-phase/<p>Machine movements are generally made either by pneumatic valves or by
electro-magnetic motors. Omitting the pneumatics for now, the
electro-magnetic motors further divide into alternating-current, or AC in
short, and direct-current, or DC in short. So far so good.</p>
<p>Anyway, AC motors are high power, heavy-duty devices. Heavy things are hard
to get moving and have high inertia once they are already set in motion,
meaning they do not really like to change their movement too much. This
means AC motors have little place in delicate little movements you are
probably interested in here. This leaves us with the DC motors group.</p>
<h2 id="a-humble-dc-motor">A humble DC motor</h2>
<p>Now the taxonomy gets a little trickier, and I just want to get to one
specific motor type, as you may have already picked up from the title. The
most prominent players here are permanent magnet brushed DC motors, which
are in fact the ones usually called simply DC motors: not only do they use
DC current, the direction of their rotation is determined by the direction
of the current that passes through them. Pass the current through and they
rotate. Simple as that, and this is a rare property among motors.</p>
<p>Then there are the brush-less DC motors, or BLDC in short to easily
differentiate between the two, as they are quite different inside. Remember,
there are quite a few other specialized DC motors, which I will again omit
here, and lastly there are stepper motors. Phew, finally.</p>
<h2 id="a-mighty-stepper">A mighty stepper</h2>
<p>Stepper motors use direct current too, but they do not rotate quite that
easily. Their main specificity regarding physical construction is that
they have multiple windings which, depending on the stepper construction,
map to one or more so-called phases. A phase is in turn mapped to one,
two or more steps per full revolution. Thus the stepper.</p>
<p>Why am I writing this? Well, I have been experimenting with some stepper
drivers lately, as I have already hinted in my
<a href="/blog/understanding-pulse-ouputs-mduino-38ar-plus/">previous post about pulse outputs</a>.
Just today I learned that the 2-phase motor has a very close relative,
the 3-phase stepper motor. I already knew about the <em>unipolar</em>
and <em>bipolar</em> types, but that is again a different taxonomy I have kept for
another opportunity. Yeah I know, with so many different types and groups,
motors are complicated.</p>
<p>The difference between a unipolar and a bipolar stepper is quite nicely
explained in many other publications already, so I will not do it here. But
the fact that there are 3-phase stepper motors eluded me so far, so I was
really surprised to see the LCDA357H driver with just three terminals for
the motor, labeled U, V and W, instead of the A+, A-, B+ and B- of the
2-phase drivers. I have yet to compare both in a real application, but a
pair of 3-phase kits is already on the way. Both the drivers and the
steppers are a little bit more expensive than their 2-phase counterparts,
but just looking around the Internet, the 3-phase design offers many
advantages. I'll post updates on the topic soon.</p>
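<p>As a rough numeric illustration, a typical 2-phase stepper has a 1.8° full-step angle, while 3-phase steppers commonly come with 1.2°. Both values are assumed typical catalog figures, not taken from any particular datasheet:</p>

```shell
# Steps per full revolution = 360 / step angle in degrees.
# The 1.8 and 1.2 degree step angles are assumed typical values.
two_phase=$(awk 'BEGIN { print 360 / 1.8 }')
three_phase=$(awk 'BEGIN { print 360 / 1.2 }')
echo "2-phase: ${two_phase} steps/rev, 3-phase: ${three_phase} steps/rev"
```

<p>More steps per revolution means finer positioning resolution before any microstepping is applied, which is one of the advantages the 3-phase design is said to offer.</p>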
<p>This is the 86th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Repeat find and till in vim2021-06-10T00:00:00+00:002021-06-10T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/repeat-find-and-till-in-vim/<p>There are two very useful keys for navigation in vim: <code>f</code>ind and <code>t</code>ill. I
use both daily, and they are among the keys I learned somewhere at the very
beginning of my vim learning curve.</p>
<p>To explain briefly, there is a group of so-called motion keys, performing
a cursor movement; the infamous <code>hjkl</code> keys are also part of the motion
keys club. Pressing <code>f</code> followed by any character moves the cursor <em>to</em> the
character. Similarly, pressing <code>t</code> followed by any character moves the
cursor <em>before</em> the character. In both situations, the movement only
happens <em>on the same line</em>. If the character is not present, nothing
happens. Very useful for navigating.</p>
<p>Both these have the counterpart of moving backwards, so <code>F</code> moves the
cursor to the left stopping <em>at</em> the character, while <code>T</code> moves cursor to
the left, stopping just <em>before</em> the character.</p>
<h2 id="using-count-with-find-and-till-keys">Using count with find and till keys</h2>
<p>Each one of <code>f</code>ind, <code>F</code>ind, <code>t</code>ill and <code>T</code>ill supports <code>[count]</code>, so
pressing a number before them will move to the n-th occurrence. For example,
<code>2fa</code> would move the cursor to the second nearest <code>a</code> character to the
right on the same line. What a discovery!</p>
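<p>A concrete illustration, with the cursor starting in column one of this example line:</p>

```
The quick brown fox jumps over the lazy dog
fq    moves the cursor onto the "q" of "quick"
tq    stops one column short, on the space before "quick"
2fo   moves onto the second "o" to the right, the one in "fox"
Fq / Tq   the same searches, but going to the left
```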
<p>Now the above fact might be obvious to many vim users, so I may even feel
ashamed for writing about it, but hey, it was today's revelation for me,
so I decided to blog about it anyway. For the record, it is all documented,
just look at <code>:h f</code> and <code>:h t</code>. Sometimes I feel like reading the vim help
just to learn that sequences like these exist, but this is actually something
I found out about by accident. Yeah, yeah, vim accidents. Discovering
features by mistyping key sequences. I wonder how many more times this
happens before I admit I do not know vim at all after all these years.</p>
<h2 id="repeating-the-last-find-or-till">Repeating the last find or till</h2>
<p>There is however one additional key to all this that may not be as obvious,
and it is <code>;</code>, the mighty semicolon. The same way the dot <code>.</code> repeats the
last action, the semicolon <code>;</code> repeats the last motion performed by <code>f</code> and
<code>t</code>. What's more, the semicolon's counterpart is the comma <code>,</code>, which
performs the motion in the <em>opposite</em> direction.</p>
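<p>A short illustration of the repeat keys, again with the cursor starting in column one:</p>

```
The quick brown fox jumps over the lazy dog
fo    moves onto the "o" in "brown"
;     repeats the search, onto the "o" in "fox"
;     repeats again, onto the "o" in "over"
,     reverses, back onto the "o" in "fox"
```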
<p>I had not known about these two either and learned about them from the
help, and they seem pretty useful too. Yet I was always wondering what the
<code>;</code> and <code>,</code> keys do. And it is so easy to find out, just <code>:h ;</code> or <code>:h ,</code>.</p>
<p>Okay, okay. I admit it. I do not know vim even after all these years of
daily use. And I do not plan to stop learning it anytime soon.</p>
<p>This is the 85th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://vimhelp.org/motion.txt.html#f">https://vimhelp.org/motion.txt.html#f</a></li>
<li><a href="https://stackoverflow.com/questions/12495442/what-do-the-f-and-t-commands-do-in-vim">https://stackoverflow.com/questions/12495442/what-do-the-f-and-t-commands-do-in-vim</a></li>
</ul>
Feelings about the writing break2021-06-09T00:00:00+00:002021-06-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/feelings-about-writing-break/<p>My pause from daily writing for a week due to the holiday led me to
understand how deeply the habit of doing something every day for almost
three months had rooted itself in my mind.</p>
<p>I was really reluctant to break my chain of writing and I was determined to
do the <a href="https://100daystooffload.com">#100daystooffload</a> challenge in just
100 days, which I consider the ultimate version of this challenge. The
original 100daystooffload challenge requires writing 100 posts over the
span of 365 days, so mine was a significant increase in publishing frequency.
The original challenge only requires writing roughly a quarter of a post
per day (27.39% of it, to be almost exact), or, quite obviously, a full
post once every 3.65 days.</p>
<h2 id="reasons-to-break-a-chain">Reasons to break a chain</h2>
<p>I must admit I seriously considered postponing the holiday by a few weeks,
or even taking my laptop with me. I however refrained from both of the
proposed options. The place I went to for a relax with my beloved
girlfriend was of high importance, filled with positive nostalgia. I could
even have re-booked the flight for later without additional costs, which
would have comfortably aided me with the challenge.</p>
<p>Choosing to write during the holiday was an option too. Without preparation
I probably
<a href="/blog/understanding-single-drone-per-vps-limitation/">could not write from the phone</a>,
and even with a working setup in place, it would maybe yield just very
short posts due to the lack of a physical keyboard and my reluctance to
write anything on the phone. This is however not a traveler's blog, and
such blogs are usually filled with stunning pictures. Not that we were not
taking any photos, but that is simply another topic entirely.</p>
<p>Taking the laptop with me would have been possible, but then it would take
precious space in the backpack while also adding weight. I would also have
to worry about it all the time, I would have to think about writing and,
last but not least, do the actual writing. Given that our itinerary was
very densely packed every day, moving from place to place, it would just
add more hassle.</p>
<h2 id="why-failing-is-not-a-problem">Why failing is not a problem</h2>
<p>Since I have failed my ultimate challenge of writing 100 posts in 100
consecutive days, I could be hard on myself, but I am not. The holiday was
great, I gathered a lot of new energy, made a greater connection with
myself and with my girlfriend and created a lot of beautiful memories.</p>
<p>All this would have been negatively affected were I to push myself, and for
what? For some artificially created challenge. I started doing
100daystooffload on the 11th of March this year, I still have 273 days to
finish the original challenge and I just need 15 more posts to do so.</p>
<p>Also, my first humble blog post (there were actually two posts on that
date) was published here on the 13th of July, just short of a year ago. The
blog already boasts 111 posts to date, and my 100th post within the year
was written on the 22nd of May, so I could just call it a day anytime. I do
however plan to keep writing daily. Let's see what the future will bring.</p>
<p>This is the 84th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Holiday break for a week2021-06-01T00:00:00+00:002021-06-01T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/holiday-break-for-week/<p>I am going on a much deserved holiday break, so no posts for a week. I was
considering taking my notebook with me, but then I thought I need to take a
break from work altogether, as I am not sure when I will be able to travel
again. So instead of a week's worth of half-baked posts without any
technicalities, I am channeling all my energy towards relaxing instead. I
will be writing again as soon as I am back home. See ya!</p>
<p>This is the 83rd post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Understanding pulse outputs of M-Duino 38AR+2021-05-31T00:00:00+00:002021-05-31T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/understanding-pulse-ouputs-mduino-38ar-plus/<p>When the need to drive a stepper motor arises, one can turn to the
microcontroller. For the task, I am currently using
<a href="https://www.industrialshields.com/shop/product/is-mduino-38ar-m-duino-plc-arduino-ethernet-38ar-i-os-analogico-digital-rele-plus-12">M-Duino AR38+</a>
and I have already written something about it a few days back, link
<a href="/blog/m-duino-external-voltage-reference-trap/">here</a>.</p>
<p>Note that an Arduino based controller is suitable for the task because it
can generate fast pulses on its pins that can be fed directly to the
stepper motor driver. And by fast, we are talking 200kHz, which is the upper
limit of the driver I am using, a JDK5056S. There are other drivers with
this same form factor, being clones of one another, with labels
like JK1545, TB6600, DM542, SH-8611A, CW8060 and similar. There is also a
slightly more advanced one, the HSS86. All of these are designed to
drive stepper motors of type NEMA 23, NEMA 34 and so on.</p>
<p>A 200kHz switching frequency is still slow, as the output pulses on the
AR38+ can probably go up to 4MHz. But 200kHz is fast compared to, for
instance, <a href="https://revolution.kunbus.com/">RevPi</a>, which can switch its
output pins at a frequency of 200Hz, making it
<a href="https://revolution.kunbus.de/forum/viewtopic.php?t=967">unsuitable for a stepper motor application</a>.</p>
<h2 id="ar38-pins-with-pulse-outputs">AR38+ pins with pulse outputs</h2>
<p>M-Duino supports pulse outputs on these pins, where the ones available on
the AR38+ are shown in <strong>bold</strong>:</p>
<ul>
<li>TIMER0: <strong>Q0.5</strong> and Q2.6</li>
<li>TIMER1: Q2.5</li>
<li>TIMER2: Q1.5 (Multiply the frequency x2)</li>
<li>TIMER3: <strong>PIN2</strong>, <strong>PIN3</strong> and <strong>Q0.6</strong></li>
<li>TIMER4: <strong>Q0.7</strong>, Q1.6 and Q1.7</li>
<li>TIMER5: Q1.3, Q1.4 and Q2.0</li>
</ul>
<p>There are two more limitations:</p>
<ol>
<li>When the TIMER0 pulse output is used, Arduino functions such as <code>delay()</code>,
<code>millis()</code>, <code>micros()</code>, <code>delayMicroseconds()</code> and others in this category
stop working as intended, because they rely on that timer.</li>
<li>It is not possible to have different frequency on the pins tied to the
same TIMER.</li>
</ol>
<p>From the list above and taking these limitations into consideration, it
is apparent that without any additional parts the AR38+ can drive four
separate drivers with two different frequencies at the same time: pin 2,
pin 3 and Q0.6 sharing one frequency, and Q0.7 providing the fourth output
with the second frequency.</p>
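<p>To put the frequencies above into perspective, the step pulse rate a driver needs can be estimated from the motor speed. A minimal sketch with illustrative, assumed values for speed, step count and microstepping:</p>

```shell
# Required pulse frequency = rpm / 60 * full steps per revolution * microstep factor.
# 300 rpm, 200 steps/rev and 8 microsteps are illustrative assumptions.
rpm=300
steps_per_rev=200
microsteps=8
freq=$(( rpm * steps_per_rev * microsteps / 60 ))
echo "${freq} Hz"
```

<p>Even at this assumed speed the result stays far below the driver's 200kHz ceiling; with the same 200 steps/rev and 8 microsteps, 200kHz would correspond to 7500 rpm, so there is plenty of headroom.</p>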
<h2 id="closing-notes">Closing notes</h2>
<p>Note that Q0.5, Q0.6 and Q0.7 are PWM/Analog Output pins. Using these for
the steppers cuts the available pins with that functionality from six to
three, the remaining three being Q1.0, Q1.2 and Q1.3, which is something to
keep in mind.</p>
<p>Also note that Q0.5 was omitted from the consideration, even though it
supports pulse output, so as not to mess with the time related functions.
They are not strictly required, but it is unclear to me at this point
<em>how</em> they are affected, and unexpected behavior on something that can
even cause harm is best avoided.</p>
<p>Overall, M-Duino 38AR+ is well suited for interacting with the stepper
drivers and the overall experience for me is smooth and reasonably
documented. Having the Ethernet included in the package makes it a very
capable companion for the price sensitive industrial machines.</p>
<p>This is the 82nd post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://github.com/IndustrialShields/arduino-Tools40#pulses">https://github.com/IndustrialShields/arduino-Tools40#pulses</a></li>
<li><a href="https://www.industrialshields.com/blog/arduino-industrial-1/post/stepper-motor-speed-control-using-an-arduino-based-plc-and-a-rotary-encoder-64">https://www.industrialshields.com/blog/arduino-industrial-1/post/stepper-motor-speed-control-using-an-arduino-based-plc-and-a-rotary-encoder-64</a></li>
</ul>
First impressions: QElectroTech wiring diagrams2021-05-30T00:00:00+00:002021-05-30T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/first-impressions-qelectrotech-wiring-diagrams/<p>Facing the task of documenting the wiring of a control cabinet, I had to
choose which software to do it with. Naturally, I started looking for open
source options first.</p>
<p>Even though I have already achieved some level of proficiency in KiCAD, it
is not very well suited for cabinet wiring. Most recommendations in
this space went to <a href="https://qelectrotech.org/">QElectroTech</a>, so I gave
it a try. Here are my first impressions.</p>
<p>The advantage is that there are many relevant schematic elements available
in so called "collections" (think libraries). They can be searched and the
search is fast. But you have to know what you are looking for and some
elements are there in different languages, Czech included.</p>
<p>Another big plus for me was that the result is a single human-readable
file, actually XML. Being just a single file and, more importantly, not
being in binary form makes it an ideal format for version control systems
like git. I love that.</p>
<p>The element editor works reasonably well, but I find myself struggling with
it quite a lot. I cannot shake off the feeling that I could make custom
elements much faster in DipTrace or in KiCAD, but that might be due to the
fact that I have just started with QElectroTech; time will tell.</p>
<p>Another problem I have found is a bug in version v0.80, which makes
updating components quite a pain. To re-render them, one has to save the
document, close it and open it again. What is worse, if the component was
previously wired, it will simply disappear. Very frustrating. Maybe there
is some kind of workaround, but even searching could not provide good
guidance on this problem, so I am not sure.</p>
<p>Another big problem for me is the keyboard shortcuts. They feel almost
non-existent and everything has to be clicked on. The experience feels very
sluggish, but this may also be due to my lack of experience. I could
however not find any window describing the shortcuts, and hovering over the
icons shows a shortcut in the tooltip on very few occasions.</p>
<h2 id="verdict">Verdict</h2>
<p>QElectroTech is a relatively capable tool for making wiring diagram
documentation a reality, and it is open source, which some can see as a
plus. But either it has a steep learning curve, or its overall efficiency
as a tool is not very high. After around 12 hours of usage I felt my
progress drawing wiring diagrams was too slow. I am however sticking with
it for now, because it appears to do the job, which is the most important
aspect of any tool.</p>
<p>This is the 81st post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Change NetworkManager connection priority2021-05-29T00:00:00+00:002021-05-29T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/change-networkmanager-connection-priority/<p>I was dealing with a situation where a LAN of multiple connected devices
without access to the Internet needed some work, while I simultaneously
required Internet access from my laptop over wireless.</p>
<p>The problem is that NetworkManager prioritizes wired paths to the Internet
over wireless ones. The actual path priority can be shown using the
<code>route</code> command:</p>
<p><code>route -n</code></p>
<p>The output on my machine confirms that the wired connection on the
interface <code>enp0s31f6</code> takes precedence:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Kernel IP routing table
</span><span>Destination Gateway Genmask Flags Metric Ref Use Iface
</span><span>0.0.0.0 192.168.20.1 0.0.0.0 UG 10 0 0 enp0s31f6
</span><span>0.0.0.0 192.168.2.1 0.0.0.0 UG 3003 0 0 wlp4s0
</span><span>192.168.2.0 0.0.0.0 255.255.255.0 U 3003 0 0 wlp4s0
</span><span>192.168.20.0 0.0.0.0 255.255.255.0 U 10 0 0 enp0s31f6
</span><span>192.168.250.0 0.0.0.0 255.255.255.0 U 425 0 0 anbox0
</span></code></pre>
<h2 id="modifying-routes">Modifying routes</h2>
<p>The entries are sorted by priority, from the most preferred to the least.
The proper solution would be to learn to modify the routes using the same
<code>route</code> command we used to print them out. In a hurry, I resorted
to a hacky solution that wraps away the hard parts. Enter the
<code>ifmetric</code> command:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">yay -S</span><span> ifmetric
</span></code></pre>
<p>Now the priorities can be changed straight away without the need to
understand anything else:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> ifmetric enp0s31f6 10
</span><span style="color:#bf616a;">sudo</span><span> ifmetric wlp4s0 9
</span></code></pre>
<p>The first line is unnecessary, as 10 is the priority assigned to the
wired network automatically, but it is included here for good measure.</p>
<p>The chosen priority numbers are arbitrary; the only important bit is that
the lower the number, the higher the position in the IP routing table,
meaning the higher the priority when choosing which interface is used for
Internet access. We can now confirm that the wireless interface <code>wlp4s0</code>
has the highest priority, providing Internet access over the wireless
connection while simultaneously permitting access to the devices on the
isolated network over LAN:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Destination Gateway Genmask Flags Metric Ref Use Iface
</span><span>0.0.0.0 192.168.2.1 0.0.0.0 UG 9 0 0 wlp4s0
</span><span>0.0.0.0 192.168.20.1 0.0.0.0 UG 10 0 0 enp0s31f6
</span><span>192.168.2.0 0.0.0.0 255.255.255.0 U 9 0 0 wlp4s0
</span><span>192.168.20.0 0.0.0.0 255.255.255.0 U 10 0 0 enp0s31f6
</span><span>192.168.250.0 0.0.0.0 255.255.255.0 U 425 0 0 anbox0
</span></code></pre>
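<p>Note that <code>ifmetric</code> changes are not persistent; NetworkManager re-applies its own metrics when a connection is activated again. To make the preference permanent, the metric can be stored on the connection profile itself. A sketch of the relevant keyfile section, where the profile name and path are placeholders:</p>

```
# /etc/NetworkManager/system-connections/HomeWifi.nmconnection (placeholder name)
[ipv4]
method=auto
route-metric=9
```

<p>The same should be achievable with <code>nmcli connection modify</code> and the <code>ipv4.route-metric</code> property, followed by re-activating the connection.</p>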
<p>This is the 80th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://manned.org/ifmetric.8">https://manned.org/ifmetric.8</a></li>
<li><a href="https://askubuntu.com/a/1211105/350681">https://askubuntu.com/a/1211105/350681</a></li>
<li><a href="https://bbs.archlinux.org/viewtopic.php?id=221046">https://bbs.archlinux.org/viewtopic.php?id=221046</a></li>
</ul>
M-Duino external voltage reference trap2021-05-28T00:00:00+00:002021-05-31T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/m-duino-external-voltage-reference-trap/<p>When wiring the Arduino based Programmable Logic Controller (PLC) from
<a href="https://www.industrialshields.com/">Industrial Shields</a>, I got stuck
for a little bit due to trouble understanding how to enable the outputs.</p>
<p>I am currently using their
<a href="https://www.industrialshields.com/industrial-plc-based-on-arduino-original-boards-automation-solutions-202009">Ethernet range</a>
based on the Arduino Mega 2560. Its internals seem to be of quite good
quality, although it is hard to disassemble. It is even harder to assemble
back, so I just had a peek inside to assess the overall quality. A teardown
can already be seen in this
<a href="https://www.youtube.com/watch?v=GpiHQZAF4n0">YouTube video</a>, so there
is not much need to replicate it.</p>
<p>This product line of Industrial Shields is based on standard Arduino
boards, which are themselves Open Source Software (OSS) and Open Source
Hardware (OSHW), but there are additional boards that are not OSHW. That is
understandable, as they have to turn a profit somewhere.</p>
<h2 id="output-voltage-reference">Output voltage reference</h2>
<p>As can be seen in the data sheet, for example for
<a href="https://www.industrialshields.com/web/content?model=ir.attachment&field=datas&id=188732&">model AR38+</a>,
the outputs are Digitally Isolated Outputs. This is confirmed by Peter's
video above. The data sheet further mentions that the voltage range for the
Digitally Isolated Outputs is 5 to 24 Vdc. This all implies that the output
voltage can be set.</p>
<p>But how does one set the output voltage for the Digitally Isolated Output
pins, specifically Q0.0 to Q0.4? PWM pins are affected too, by the way.
Well, there is a pin labeled <code>Q/Vdc</code> and its description states:</p>
<blockquote>
<p>Voltage Supply/Reference for Digital/PWM Outputs (isolated)</p>
</blockquote>
<p>The important bit is that the rest of the documentation is a little bit
lacking with respect to the output voltage. Although it is mentioned that
this pin exists, I could not find any mention that it is required to be
connected, otherwise the output voltage will be zero, even when the output
LED on the front plate is ON. So remember to connect the <code>Q/Vdc</code> pin to 5V,
12V or 24V on the M-Duino.</p>
<h2 id="update-31-05-2021">Update 31.05.2021</h2>
<p>Do not forget to connect the <code>COM(-)</code> pin to ground. It is galvanically
isolated too, so connect it either to a common ground or a separate one,
depending on the application. I had left it floating, thinking that I only
needed the high reference, and it made the related <code>Q</code> pins rise above 0V
when in the <code>LOW</code> state, such that they were considered <code>HIGH</code> by the
receiving end. This is something to keep in mind: properly connect both
references when using digital outputs.</p>
<p>This is the 79th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
One disadvantage of git based blog2021-05-27T00:00:00+00:002021-05-27T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/one-disadvantage-git-based-blog/<p>My statically generated blog, based on
<a href="https://sapper.svelte.dev/">Sapper</a> (currently being phased out in
favor of <a href="https://kit.svelte.dev/">Svelte Kit</a>, which is unfortunately
taking too long to reach production), has no underlying database. This is
nothing special and fairly common for other static site generators (SSGs),
as the data is loaded directly from the Markdown files.</p>
<p>Some data, for instance post tags, are loaded from a special section at
the top of the Markdown file called <em>Front Matter</em>. This section is
generally written in some convenient language such as YAML or TOML. I have
already written some details about the
<a href="/blog/yaml-metadata-in-markdown/">YAML Front Matter</a> and extended the
thoughts and findings in the
<a href="/blog/using-uuid-in-atom-feed/">post about the UUIDs</a>.</p>
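<p>In Zola's case, a Front Matter carrying the dates explicitly might look like this (TOML between <code>+++</code> fences; the title and values are made up for illustration):</p>

```toml
+++
title = "Example post"
date = 2021-05-27       # creation date, normally derived from git
updated = 2021-05-28    # last modification date
+++
```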
<p>In that very post, I also left links to my two other previous posts
explaining that I retrieve creation and modification dates by following the
git history. These types of data usually reside in the Front Matter too,
but it is very convenient to have them generated automatically. In that
post I also mention that I found someone else in the wild doing the same.</p>
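<p>The extraction itself can be sketched with plain <code>git log</code>: the oldest commit touching a file yields the creation date, the newest one the modification date. A minimal demo in a throwaway repository, where the file name and dates are made up:</p>

```shell
# Build a throwaway repository with two dated commits on one file.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "first draft" > post.md
git add post.md
git commit -qm "add post" --date "2021-05-27T10:00:00"
echo "small edit" >> post.md
git commit -qam "edit post" --date "2021-05-28T09:00:00"

# Oldest author date = creation, newest author date = last modification.
created=$(git log --follow --format=%ad --date=short -- post.md | tail -n 1)
modified=$(git log --follow --format=%ad --date=short -- post.md | head -n 1)
echo "created: $created, modified: $modified"
```

<p>The <code>--follow</code> flag keeps the history across renames, which matters once posts get moved around in the repository.</p>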
<h2 id="zola-static-site-generator">Zola static site generator</h2>
<p>Recently I stumbled upon the <a href="https://www.getzola.org/">Zola</a> static
site generator, which is modeled after <a href="https://gohugo.io/">Hugo</a> but
written in Rust instead of Go. I am a fan of Rust and want to push myself
to learn something about it over time, so I started exploring Zola. I found
it matching my preferences with its features quite well, which was
surprising.</p>
<p>What I also found in
<a href="https://github.com/getzola/zola/issues/374#issuecomment-751385096">this issue</a>
is that there is already a <a href="https://github.com/Recmo/zola">Zola fork</a> that
again extracts the dates from the commit history, but the implementation
has not been pulled upstream yet.</p>
<h2 id="should-the-dates-be-read-from-history">Should the dates be read from history?</h2>
<p>The advantages of such a configuration are plain: it is automated, with no
need to manually fix dates in the Markdown file. Also, it forces me to
write on the given day, no excuses. This is a plus if someone is into
habit building.</p>
<p>I have however found a pain point when trying to move the pages into Zola,
currently just to see how it plays out. The problem is that <em>everything</em>
now has to share a git history, for instance in a monorepo. Otherwise the
history is lost. It does not suffice to just copy the Markdown files and
tweak the Front Matter a little, because the date is completely missing
from it.</p>
<p>Now this disadvantage is obvious in hindsight, but it occurred to me just
now. I am not planning to hassle with the fork at this stage, so I am
exporting the dates into the Markdown for now. If I continue down this
road, using Zola or something else instead of a Svelte based solution,
maybe I could automate the Markdown date keeping with something else in the
future, for instance the <a href="/blog/arch-news-pacman-hook-tip/">beloved hooks</a>.</p>
<p>This is the 78th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
The fight of gitignores2021-05-26T00:00:00+00:002021-05-26T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/the-fight-of-gitignores/<p>When roaming around someone else's cloned repository with many different
files and fiddling with some lines here and there, it is worth using good
tools to speed up the process.</p>
<p>Since it is a repository, <code>git</code> is naturally my go-to tool for version
control. Finding files is efficient with the <code>fd</code> tool, one of the
available alternatives to GNU <code>find</code>. Where it all boils down, however, is
the act of searching through the contents of the files to learn about
their intricacies, especially when poking around a codebase written in an
unfamiliar language or framework.</p>
<p>To quickly find and print files containing a string or a regular
expression, the first thing that comes to mind is GNU <code>grep</code>. However, it
too has gained a lot of modern alternatives. I use <code>rg</code> for most of my file
content related searches.</p>
<h2 id="ignoring-certain-files">Ignoring certain files</h2>
<p>The modern alternatives to the standard GNU tools offer many advancements.
Apart from the potential speed boosts, which might be negligible in many
day-to-day cases, their feature set includes ignoring certain files. I have
already written about
<a href="/blog/smart-global-search-fzf-vim/">excluding ignored files</a>. This post
however looks at the topic from a very different angle.</p>
<p>I needed to exclude files from the folder <code>content/</code> from polluting the
search results. The easiest way would be to put the folder into
<code>.gitignore</code> and <code>rg</code> would pick that up, because it excludes the files
ignored by the version control by default. But I did not want to exclude
that folder from the version control.</p>
<p>So obviously I have put that folder not into <code>.gitignore</code>, but into
<code>.rgignore</code>. The search stopped being polluted and I was happy for a while.
A few moments later, during committing, the <code>.rgignore</code> was showing up as
an <em>untracked</em> file. Untracked files can be accidentally committed into the
repository unless great care is taken. This file had nothing to do with the
repository, rather it was aiding me in understanding the code, so I did not
want to commit it.</p>
<h2 id="just-one-more-layer">Just one more layer</h2>
<p>Thus, I have inserted the <code>.rgignore</code> into a freshly created <code>.gitignore</code>.
The situation struck me as funny, because now <code>.rgignore</code> was no longer
showing up as untracked, but <code>.gitignore</code> was. So I had basically just
added one more layer, deferring the issue of not wanting to commit an
unnecessary file, especially not accidentally.</p>
<p>To close the loop I have inserted <code>.gitignore</code> into itself, alongside
<code>.fdignore</code>. Using <code>.gitignore</code> to ignore itself. It has probably happened
to other people too, but this was a first for me. Still, it solved the
problems:</p>
<ol>
<li>Narrowing the <code>rg</code> search results just to actual code</li>
<li>Preventing the setting being committed accidentally</li>
<li>Hiding all the mess created</li>
</ol>
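<p>In shell terms, the whole layering amounts to just this (file names as in
the post, demonstrated in a scratch directory):</p>

```shell
cd "$(mktemp -d)"   # scratch directory for the demo

# Hide content/ from rg searches without git-ignoring it
echo 'content/' > .rgignore

# Git-ignore the helper files - including .gitignore itself
printf '%s\n' .rgignore .fdignore .gitignore > .gitignore
```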
<p>Have you ever had to gitignore the <code>.gitignore</code> file itself?</p>
<p>This is the 77th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Install Nextcloud with Onlyoffice with docker-compose2021-05-25T00:00:00+00:002021-05-25T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/install-nextcloud-onlyoffice-docker-compose/<p>I was able to install Nextcloud with OnlyOffice for collaborative
spreadsheets. They are quite useful when cooperating with other members of
a team, especially because Markdown support for tables is not very
interactive and also lacks the familiar WYSIWYG feeling people are used to
from a proper spreadsheet interface.</p>
<ol>
<li>Follow steps in the <a href="https://github.com/ONLYOFFICE/onlyoffice-nextcloud">https://github.com/ONLYOFFICE/onlyoffice-nextcloud</a>
repository</li>
<li>Mount TLS certificates as described in my
<a href="/blog/certificate-not-found-docker-nginx/">previous post</a></li>
<li>Make sure the Nginx listens to both HTTP and HTTPS by further modifying
<a href="https://github.com/ONLYOFFICE/docker-onlyoffice-nextcloud/blob/6c133f45f7958437853e4bddc6712a33ab6c6537/nginx.conf#L48"><code>nginx.conf</code></a>:</li>
</ol>
<pre data-lang="diff" style="background-color:#2b303b;color:#c0c5ce;" class="language-diff "><code class="language-diff" data-lang="diff"><span>server {
</span><span> listen 80;
</span><span style="color:#a3be8c;">+ listen 443 ssl http2;
</span><span style="color:#a3be8c;">+ server_name example.com;
</span><span>
</span><span> # other directives
</span><span>
</span><span style="color:#a3be8c;">+ ssl_certificate ...;
</span><span>}
</span></code></pre>
<p>Without listening on both HTTP and HTTPS, OnlyOffice would complain:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Download failed.
</span><span>
</span><span>Press "OK" to return to document list.
</span></code></pre>
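<p>A quick sanity check before reloading is to count the <code>listen</code> directives
in the edited config. A sketch using a minimal stand-in file:</p>

```shell
cd "$(mktemp -d)"   # scratch directory for the demo
# Minimal stand-in for the repository's nginx.conf after the edit above
printf 'server {\n    listen 80;\n    listen 443 ssl http2;\n}\n' > nginx.conf
# Both directives must be present, otherwise OnlyOffice complains
grep -cE 'listen +(80|443)' nginx.conf
```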
<ol start="4">
<li>Run the script <code>set_configuration.sh</code> from the repository to set up
OnlyOffice</li>
</ol>
<p>I am sure this setup needs some more work, as there are plenty of
performance, security and privacy warnings under <strong>Settings</strong> >
<strong>Administration</strong> > <strong>Overview</strong>, namely about SQLite and the <code>X-Frame-Options</code>
HTTP header, but there has to be a beginning somewhere.</p>
<p>This is the 76th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://www.linuxbabe.com/docker/onlyoffice-nextcloud-integration-docker">https://www.linuxbabe.com/docker/onlyoffice-nextcloud-integration-docker</a></li>
<li><a href="http://nginx.org/en/docs/http/configuring_https_servers.html#single_http_https_server">http://nginx.org/en/docs/http/configuring_https_servers.html#single_http_https_server</a></li>
</ul>
Certificate not found with Nginx under Docker2021-05-24T00:00:00+00:002021-05-24T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/certificate-not-found-docker-nginx/<p>When trying to run a <code>docker-compose</code> with a Nginx inside and setting up
TLS for a virtual host (subdomain), this error showed up:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>nginx: [emerg] cannot load certificate "/etc/nginx/ssl/sub.peterbabic.dev.cer":
</span><span>BIO_new_file() failed (SSL: error:02001002:system library:fopen:
</span><span>No such file or directory:fopen('/etc/nginx/ssl/sub.peterbabic.dev.cer','r')
</span><span>error:2006D080:BIO routines:BIO_new_file:no such file)
</span></code></pre>
<p>I tried all kinds of permission and group changes, believing that the
Docker user had insufficient privileges to access the files, because they
were very clearly present in the filesystem.</p>
<p>It took me far longer to resolve than I would like to admit. After many,
many search results that offered no clue about the problem, I have found
this
<a href="https://www.digitalocean.com/community/questions/docker-nginx-with-certbot-certificate-file-not-found?answer=47759">comment</a>
that finally nudged me in the right direction.</p>
<h2 id="docker-compose-volumes">Docker compose volumes</h2>
<p>The problem was that I was instructing Nginx to access certificates
generated by <a href="https://github.com/acmesh-official/acme.sh">acme.sh</a> that
resided on the host, but they were not mounted into the container, so it
could not access them. What a rookie mistake. The solution is on the last
line of this excerpt of the <code>docker-compose.yml</code> file:</p>
<pre data-lang="yaml" style="background-color:#2b303b;color:#c0c5ce;" class="language-yaml "><code class="language-yaml" data-lang="yaml"><span style="color:#bf616a;">version</span><span>: "</span><span style="color:#a3be8c;">3</span><span>"
</span><span style="color:#bf616a;">services</span><span>:
</span><span> </span><span style="color:#bf616a;">nginx</span><span>:
</span><span> </span><span style="color:#bf616a;">container_name</span><span>: </span><span style="color:#a3be8c;">nginx-server
</span><span> </span><span style="color:#bf616a;">image</span><span>: </span><span style="color:#a3be8c;">nginx
</span><span> </span><span style="color:#bf616a;">restart</span><span>: </span><span style="color:#a3be8c;">always
</span><span> </span><span style="color:#bf616a;">ports</span><span>:
</span><span> - </span><span style="color:#a3be8c;">80:80
</span><span> - </span><span style="color:#a3be8c;">443:443
</span><span> </span><span style="color:#bf616a;">volumes</span><span>:
</span><span> - </span><span style="color:#a3be8c;">./nginx.conf:/etc/nginx/nginx.conf
</span><span> - </span><span style="color:#a3be8c;">/etc/nginx/ssl:/etc/nginx/ssl
</span></code></pre>
<p>The last line mounts the <code>/etc/nginx/ssl/</code> folder from the host inside the
container, so Nginx there can access the certificates and enable TLS. For
completeness, an excerpt from the referenced <code>nginx.conf</code> could look like
this:</p>
<pre data-lang="conf" style="background-color:#2b303b;color:#c0c5ce;" class="language-conf "><code class="language-conf" data-lang="conf"><span style="color:#b48ead;">server </span><span>{
</span><span> </span><span style="color:#bf616a;">listen </span><span style="color:#d08770;">443</span><span> ssl http2;
</span><span> </span><span style="color:#bf616a;">server_name </span><span>sub.peterbabic.dev;
</span><span>
</span><span> </span><span style="color:#bf616a;">ssl_certificate </span><span>/etc/nginx/ssl/sub.peterbabic.dev.cer;
</span><span> </span><span style="color:#bf616a;">ssl_certificate_key </span><span>/etc/nginx/ssl/sub.peterbabic.dev.key;
</span><span> </span><span style="color:#bf616a;">ssl_protocols </span><span>TLSv1.</span><span style="color:#d08770;">1</span><span> TLSv1.</span><span style="color:#d08770;">2</span><span>;
</span><span> </span><span style="color:#bf616a;">ssl_ciphers </span><span>HIGH:!aNULL:!MD5;
</span><span>}
</span></code></pre>
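<p>To verify the mounted files are the expected ones, <code>openssl</code> can print a
certificate's subject and expiry. A sketch with a throwaway self-signed
certificate standing in for the real one:</p>

```shell
cd "$(mktemp -d)"   # scratch directory with a throwaway certificate
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj '/CN=sub.peterbabic.dev' \
    -keyout sub.peterbabic.dev.key -out sub.peterbabic.dev.cer 2>/dev/null
# Against the real setup, point -in at /etc/nginx/ssl/sub.peterbabic.dev.cer
openssl x509 -in sub.peterbabic.dev.cer -noout -subject -enddate
```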
<p>This is the 75th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Understanding single Drone per VPS limitation2021-05-23T00:00:00+00:002021-05-23T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/understanding-single-drone-per-vps-limitation/<p>Being occupied by many other higher priority tasks lately, my goal to set
up a <a href="https://www.drone.io/">Drone</a> instance on the Contabo VPS was put
on hold. I wanted to set up Drone to create a pipeline for building the
posts for this blog. The blog is a statically generated site, meaning posts
need to be generated on some machine before they can be published as an
actual blog.</p>
<p>Currently, I am building this blog on the laptop, but having the build and
publish process set up on the server would enable me to write from other
devices as well, because the build would be performed on the server, not
the laptop itself. The laptop would be used only for the writing part.</p>
<p>The theory would be that Drone tracks the blog repository's main branch
and, if any commit appears there, builds and publishes. Most of my work is
hosted on my Gitea server, the blog included. Gitea already has a Markdown
editor integrated, so if I can log in to Gitea, I can publish a blog
post.</p>
<h2 id="gitea-and-drone-integration">Gitea and Drone integration</h2>
<p>Drone is capable of working with Gitea, and the setup for such an
integration is also described in the official Drone
<a href="https://docs.drone.io/server/provider/gitea/">docs</a>. However, there is a
scary part for Drone version 1.0 at the beginning:</p>
<blockquote>
<p>Please note we strongly recommend installing Drone on a dedicated
instance. We do not recommend installing Drone and Gitea on the same
machine due to network complications, and we definitely do not recommend
installing Drone and Gitea on the same machine using docker-compose.</p>
</blockquote>
<p>I have been trying to understand what it means for some time. The response
I received in the Gitter Drone
<a href="https://gitter.im/drone/drone?at=6060e48688edaa1eb8e8ba26">channel</a> was
that it is possible to have Gitea and Drone on the same VPS, but it is
complicated and no official documentation is offered.</p>
<p>There are a few guides available that offer some guidance, but I did not
follow any of them, so no links here. The only link I would like to discuss
regarding the topic is
<a href="https://discourse.drone.io/t/drone-and-gitea-behind-nginx-reverse-proxy-with-docker-compose-yml/6777">this one</a>.</p>
<h2 id="drone-behind-nginx">Drone behind Nginx</h2>
<p>The reason I am referring to that link is twofold. First, it is the latest
link I could find on the topic and it is also relatively well written; I
have not tested it yet, but I plan to, as I could understand the steps
outlined there. The second reason is however more important. The actual
problem with installing Drone on the same host as Gitea lies in the fact
that Drone should not be installed behind Nginx at all!</p>
<p>There is a <a href="https://0-8-0.docs.drone.io/setup-with-nginx/">page</a> in the old
docs for Drone version 0.8 instructing about Nginx configuration. But there
is no such page mentioning Nginx in the recent docs! I suspect other
reverse-proxy tools are omitted as well.</p>
<p>The official documentation for running the Drone server container is the
following:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>docker run \
</span><span> --volume=/var/lib/drone:/data \
</span><span> --env=DRONE_GITEA_SERVER={{DRONE_GITEA_SERVER}} \
</span><span> --env=DRONE_GITEA_CLIENT_ID={{DRONE_GITEA_CLIENT_ID}} \
</span><span> --env=DRONE_GITEA_CLIENT_SECRET={{DRONE_GITEA_CLIENT_SECRET}} \
</span><span> --env=DRONE_RPC_SECRET={{DRONE_RPC_SECRET}} \
</span><span> --env=DRONE_SERVER_HOST={{DRONE_SERVER_HOST}} \
</span><span> --env=DRONE_SERVER_PROTO={{DRONE_SERVER_PROTO}} \
</span><span> --publish=80:80 \
</span><span> --publish=443:443 \
</span><span> --restart=always \
</span><span> --detach=true \
</span><span> --name=drone \
</span><span> drone/drone:1
</span></code></pre>
<p>Note the <code>--publish</code> option, specifying precisely ports 80 and 443. How to
set up TLS without a reverse proxy like Nginx? Well, Drone has
<a href="https://docs.drone.io/server/https/">https</a> functionality built in.</p>
<p>The easiest way is to add <code>--env=DRONE_TLS_AUTOCERT=true</code> to the above
command and call it done. Drone starts. Of course the certs can be
specified manually, everything is in the docs. But the point is not that it
is problematic to set up Drone with Gitea on the same server; the problems
start a step before, at the missing reverse proxy documentation, which is
needed for setting up <em>anything</em> on the server alongside Drone.</p>
<h2 id="closing-words">Closing words</h2>
<p>I am not blaming anyone here, it is just a pity that Drone basically
requires its own VPS. The point of self-hosting is running multiple
services on one VPS, especially when the service is meant to run once a day
for the few seconds it takes to build the static blog. Since Drone would
not be using resources continuously and its startup delay would be
insignificant (it does not matter if the blog is published 5 minutes later
or sooner), it would be much better suited to some serverless environment,
but I did not get there yet. For now, I am passing on Drone until I find a
better place where it can run.</p>
<p>This is the 74th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Make Auto-type work in kitty under Wayland2021-05-22T00:00:00+00:002021-05-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/make-auto-type-work-kitty-wayland/<p>Being a big fan of the KeePassXC project, I make use of virtually all the
features it provides. One I use really often is the global Auto-type
feature. I use it for providing passwords to ssh sessions and to ansible
when becoming a superuser. The correct password is usually detected and
promptly typed, with just a single key combination.</p>
<p>The problem with the Auto-type feature is that it still does
<a href="https://github.com/keepassxreboot/keepassxc/issues/2281">not work on pure Wayland</a>.
I noticed this some time ago with GNOME Terminal. I have also tried XFCE4
Terminal, but that had also already migrated to Wayland, meaning Auto-type
was not working there either.</p>
<p>I have thus started using <a href="https://github.com/kovidgoyal/kitty">kitty</a> as
my main terminal, as it is super hackable, but I still did not have time to
learn most of its features. It was still using XWayland, meaning Auto-type
worked there and I could focus on other things for the time being.</p>
<p>Kitty started using Wayland by default somewhere around version 0.20.0, and
that again messed up my Auto-type reliant workflow. I kept downgrading to
version 0.19.3, but I did not want to do that for a prolonged time. Today,
after another system upgrade, Auto-type stopped working again and I had a
choice to make: either instruct the package manager to ignore upgrades for
kitty, or find some other solution.</p>
<h2 id="forcing-kitty-to-use-xwayland-backend">Forcing kitty to use XWayland backend</h2>
<p>After some searching I have found that other people try to
<a href="https://github.com/kovidgoyal/kitty/issues/2648">force the x11 mode on kitty</a>
and that it can be configured in multiple ways. The only problem was that I
had no idea what XWayland is. I knew there is the trusty old X11 display
server and that there is the newer Wayland protocol with its compositors.
XWayland is an X server running under Wayland to make the migration
smoother. Nice, but until there is support for KeePassXC Auto-type on pure
Wayland, I have to run my terminal emulator in X11 or XWayland mode.</p>
<p>For kitty, there are these options:</p>
<ul>
<li>Use the environment variable:</li>
</ul>
<p><code>KITTY_DISABLE_WAYLAND=1</code></p>
<p>This obviously does not work for the first window when exported from
<code>.bashrc</code> or <code>.zshrc</code>, as the variable only gets set after the first
terminal is already running.</p>
<ul>
<li>Use the
<a href="https://man.archlinux.org/man/community/kitty/kitty.1.en#OPTIONS">override option</a>:</li>
</ul>
<p><code>kitty -o linux_display_server=x11</code></p>
<p>This is better than the environment variable, as a launcher or an alias
can be set to use it, but it requires some extra work.</p>
<ul>
<li>Configure the setting in <code>~/.config/kitty/kitty.conf</code>:</li>
</ul>
<p><code>linux_display_server x11</code></p>
<p>This is the best option. As of kitty version 0.20.3, the setting has a
default value of <code>auto</code>, detecting Wayland on GNOME and running in Wayland.
Setting it to <code>x11</code> runs kitty under XWayland, where Auto-type works
properly, with no need for downgrading.</p>
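<p>The config option can even be appended idempotently from a shell; a small
sketch (the path follows the XDG convention kitty uses):</p>

```shell
conf="${XDG_CONFIG_HOME:-$HOME/.config}/kitty/kitty.conf"
mkdir -p "$(dirname "$conf")"
# Append the setting only if it is not present yet
grep -qx 'linux_display_server x11' "$conf" 2>/dev/null ||
    echo 'linux_display_server x11' >> "$conf"
```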
<p>This is the 73rd post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
First real data from the bee weighter project2021-05-21T00:00:00+00:002021-05-21T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/first-real-data-bee-weighter-project/<p>I've got hold of some real data from the
<a href="https://github.com/peterbabic/bee-weighter">bee-weighter</a> project. The
transformed data can be seen in the following table:</p>
<table><thead><tr><th>date time</th><th>kg</th></tr></thead><tbody>
<tr><td>17.5.21 01:46</td><td>39,82</td></tr>
<tr><td>17.5.21 07:46</td><td>41,00</td></tr>
<tr><td>17.5.21 13:46</td><td>39,97</td></tr>
<tr><td>17.5.21 19:46</td><td>39,44</td></tr>
<tr><td>18.5.21 01:46</td><td>39,43</td></tr>
<tr><td>18.5.21 07:46</td><td>39,67</td></tr>
<tr><td>18.5.21 13:46</td><td>37,03</td></tr>
</tbody></table>
<p>This is all I have received, as the other measurements read zeros. Some
cable to the scale probably got torn off. It's a pity.</p>
<p>One conclusion can be drawn from the data: the weight is not increasing,
therefore the bees are not gathering enough resources themselves at this
point, meaning more sugar water should be added - the beekeeping equivalent
of feeding them.</p>
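<p>The trend can be double-checked by diffing the first and the last reading
from the table (values in kg, commas converted to dots):</p>

```shell
# Net change over the logged period: last reading minus the first one
printf '%s\n' 39.82 41.00 39.97 39.44 39.43 39.67 37.03 |
    awk 'NR==1{first=$1} {last=$1} END{printf "%+.2f kg\n", last-first}'
```

<p>A net loss of almost 3 kg over a day and a half supports the conclusion.</p>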
<p>This is the 72nd post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
On not writing project requirements down2021-05-20T00:00:00+00:002021-05-20T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/on-not-writing-project-requirements-down/<p>There was a project the customer provided only very vague requirements for.
They were also prone to unexpected changes, and the customer was absolutely
reluctant to provide a write-up of them even after multiple requests.</p>
<p>I wanted to ditch the project entirely, but the client insisted that things
would change for the better soon. Things did not change for some time and I
was unable to proceed with the actual work. When the client orders
something I know is not a good idea, it is my moral duty to inform him and
ask for a second confirmation, to prevent meaningless work, even if paid. I
explained to the client that the situation without written requirements is
comparable to the following scenario:</p>
<blockquote>
<p>Imagine you ask an architect to make plans for a house, with the twist
that you tell the architect nothing about what the house should look
like. This means you essentially ask the architect to make up the
requirements. When the house is later finished, the following situation
can arise: you look at the house and tell the architect you did not want
the pool to be in the basement, you wanted the pool to be on the roof, so
you are refusing to pay.</p>
</blockquote>
<p>This would be a really unfortunate scenario, as both sides involved would
be severely disappointed by the end result. This is the reason the
requirements have to be very specific. The more specific, in fact, the
better. It is however a subtle skill to ask people for very specific
results so as not to be disappointed later. A skill not many people are
good at or even aware of.</p>
<p>When you dine out and you like your French onion soup with a slice of
bread buttered on both sides, or whatever specific requirements regarding
the food you might have, you have to ask for it before the meal is
prepared. The same applies to programming and development. The problem with
the latter, however, is that it is far more abstract. With food, one can
see, in photos or even with the naked eye, what the result looks like. The
taste can be judged from the visuals to some degree. With programming,
apart from the GUI, judging the properties beforehand is harder.</p>
<p>Staying with the client has paid off. The problem was not so much a lack
of abstract thinking. It turned out the client was simply overwhelmed with
other tasks and really could not find anyone to delegate the task of
writing the requirements to. He decided to hire me to gather and write down
the requirements for the project, so I can finally start working on it. Now
both sides are satisfied. I would be interested in how this situation is
handled elsewhere, because I was not used to getting no written
requirements at all for a project before.</p>
<p>This is the 71st post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Wiring is an art too2021-05-19T00:00:00+00:002021-05-19T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/wiring-is-an-art-too/<p>I've spent the last two days wiring a machine, so no programming post
today. I do not do this work too often, but when I do, it is a relief. It
is a form of art. It is not the same kind of art as programming is, though.
With wiring, the results are far more palpable and far less abstract. Once
wired, photos can be taken and shared straight away, the same way they are
with paintings. It is however far harder to capture the beauty of a
masterfully crafted program in a photo. It is more like impossible, because
a photo can only capture the beauty of the user interface design, which is
another form of art. Enjoy art in whatever form you see fit.</p>
<p>This is the 70th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Using long commit message description2021-05-18T00:00:00+00:002021-05-18T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/using-long-commit-message-description/<p>I have stumbled upon a
<a href="https://cathode.church/@meena/106249440441971234">short post</a> that
contained the following:</p>
<blockquote>
<p>~200 lines of commit message for +5/-8 change @ #FreeBSD:
https://freshbsd.org/freebsd/src/commit/9a2fac6ba65fbd14d37ccedbc2aec27a190128ea</p>
</blockquote>
<p>This obviously made me think. Is such a long description necessary? And if
it is, is the commit message description the right place to put it in?</p>
<h2 id="is-a-long-commit-message-description-necessary">Is a long commit message description necessary?</h2>
<p>I could rationalize the answer to the first question rather easily. Yes.
Yes, it is necessary to document changes in anything. No excuses. If the
description of the change is this complex, so be it. Kudos to anyone who
does great work.</p>
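<p>As a reminder of the mechanics (the subject and body here are made up for
illustration): a long description is simply everything after the first line
of the commit message, for instance the extra <code>-m</code> paragraphs:</p>

```shell
repo="$(mktemp -d)"   # demo in a throwaway repository
git -C "$repo" init -q
echo 'change' > "$repo/file.txt"
git -C "$repo" add file.txt
# A short subject plus a long body; each -m becomes its own paragraph
git -C "$repo" -c user.name=me -c user.email=me@example.com commit -q \
    -m "scheduler: fix wakeup race" \
    -m "A long description explaining the reasoning, the alternatives
considered and the measurements taken, preserved alongside the change."
git -C "$repo" log -1 --format=%B   # prints subject and body
```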
<h2 id="is-the-commit-message-description-the-right-place">Is the commit message description the right place?</h2>
<p>This question was more tricky. The reason is that there are multiple places
where the detailed description could be placed. Apart from the commit
message description itself, the changes could be documented in the code
comments or a Pull Request (or a Merge Request, or equivalent). Let's break
it down a little bit.</p>
<p>Putting the change description in code comments was a practice from before
version control systems with the possibility of adding a change message
were commonplace. Those times are long gone. It is still worthwhile to put
details in the comments, but they should not interact with or rely on other
comments. With a description as long as this one, spanning many lines, that
probably is not a good idea.</p>
<p>On the other hand, putting the details in the Pull Request could also be
worthwhile, but there they could be changed quite easily and important
parts of the description could be lost, especially because code hosting
platforms that offer Pull Request or Merge Request functionality do not
offer it as a standardized feature, meaning it could be there today but
disappear tomorrow.</p>
<h2 id="linking-between-the-descriptions">Linking between the descriptions</h2>
<p>There is one other aspect of this all. It is possible to insert a link to
either the commit message description, into the Pull Request or even into
comments. Which one should be the canonical source?</p>
<p>Thinking about it, the commit message description is the best place,
especially if backed by a code hosting platform that generates persistent
links to the commits and their descriptions. Creating a detailed commit
message description and linking to it everywhere else ensures that the
description, with its important bits and reasoning, is not easily lost, as
changing commit history is a very discouraged practice.</p>
<p>This practice is similar to POSSE (Publish (on your) Own Site, Syndicate
Elsewhere), although that is from the content creation context. But code
and its messages are becoming content too, so it probably makes sense to
start treating them as such. Put the canonical commit change notes into the
commit message description and link to it (syndicate) from elsewhere.</p>
<p>This is the 69th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Rotating QR codes in Zebra ZPL2021-05-17T00:00:00+00:002021-05-17T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/rotating-qr-codes-in-zebra-zpl/<p>I have discovered a strange behavior when instructing a Zebra GK420t
printer to make stickers containing some text and a QR code. Consider the
following example:</p>
<p><img src="https://peterbabic.dev/blog/rotating-qr-codes-in-zebra-zpl/zpl-qr-code-rotated.png" alt="A QR code without a rotation on the left and the QR code rotated by 180 degrees on the right" /></p>
<p>Both QR codes are perfectly fine. They both can be printed with most
printers, including the trusty GK420t, and they both can be decoded by a
phone camera or a barcode scanner.</p>
<p>The difference is that the QR code on the right is rotated by 180 degrees.
Rotation of the QR code should not be a problem, as the decoder can adjust
for the orientation using position detection patterns. This is how a QR
code is designed, to convey the same information no matter which angle it
is viewed from.</p>
<h2 id="zpl-instructions-for-a-qr">ZPL instructions for a QR</h2>
<p>However, using Zebra Designer 3 Essentials, the code generated for the left
one looks like this:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>^FT100,100
</span><span>^BQN,2,6
</span><span>^FH\^FDLA,https://peterbabic.dev^FS
</span></code></pre>
<p>While the one on the right creates a completely different output:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>^FO100,100
</span><span>^GFA,03840,03840,00020,:Z64:
</span><span>eJztljEWgzAMQ82UY+SmkNw0x8gU11JoC+916BJNeODlfRZjyQpmTz31u7Zuu+Xa7fC29exRXcQ2HNoRuOeCTjQsj/js5HhGU+5Vw7zFcVh2vFQyr5yBpWZCNq2FjlKoPWevYAAdXuPz67XV7OwFXWR+/We9VrPM2c8WKHtXMHg8heyx0Ib1evt+OYttdg8cM4iGpuYCBmunNgNspEYdFIwuj/FzszGMaEfEdjtjrPIsYt52Tr2eh0Y9ljNECIMkNC8zxjTMYO0ZYIM3xqFiKLpst1stZbyQQ+1cL3mqYPR4YXoN+9wVAobbuPIHBO2cuaZijDE9K+FxGE3JzN7/mHbTYzGj14rNxbr6by2zp/6uF7Naq7A=:26A4
</span></code></pre>
<p>The QR codes are almost identical, so why such a difference in the
generated code?</p>
<h2 id="printing-graphic-fields">Printing graphic fields</h2>
<p>The problem lies in the features the actual printer supports. This
particular printer supports the ZPL instruction <code>^BQ</code> for the QR code bar
code. The instruction however accepts no parameter that describes its
rotation.</p>
<p>Because there is no way to tell the printer how the QR code should be
rotated, the software has to first encode the rotated code into the
<code>^GF</code> instruction for a graphic field. This way, the printer can still
print it.</p>
<h2 id="problems-with-variable-data">Problems with variable data</h2>
<p>In order to encode variable data in the QR code - which is the case most
of the time stickers are printed in an industrial environment, encoding
details such as product model, time and date, and maybe even an operator
ID - we have to be able to modify its instruction on the fly.</p>
<p>Modifying the data is simple in the case of <code>^BQ</code>, as the data are stored
in plain text. It however becomes considerably more difficult to do the
same with the rotated QR code, as the data there have to be first converted
to the graphical representation of the QR code, then rotated and then
encoded in what is presumably a Base64-based encoding.</p>
<p>Such a routine would be needed, and I could not find straightforward
documentation for it. Even if it were readily available, it would have to
be ported to the local environment the printer is connected to, for
instance a PLC.</p>
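<p>By contrast, templating the plain-text <code>^BQ</code> variant on the fly is
trivial. A sketch (field positions taken from the example above; the
pipe-separated payload of model, timestamp and operator ID is made up for
illustration):</p>

```shell
# Variable data for the sticker; values are hypothetical
model="X-100"
stamp="$(date +%Y%m%d%H%M)"
operator="42"
# Emit the ZPL with the payload substituted into the ^FD field
printf '^FT100,100\n^BQN,2,6\n^FH\\^FDLA,%s|%s|%s^FS\n' \
    "$model" "$stamp" "$operator"
```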
<h2 id="conclusion">Conclusion</h2>
<p>Most modern printers use an Ethernet connection, which is superior for
more complicated designs, but the GK420t does not. It uses a serial
connection, implying a rather low-level controller. Unless absolutely
unavoidable, printing rotated QR codes with variable data should be avoided
with printers of this family.</p>
<p>This is the 68th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://www.zebra.com/us/en/support-downloads/knowledge-articles/zpl-command-information-and-details.html">https://www.zebra.com/us/en/support-downloads/knowledge-articles/zpl-command-information-and-details.html</a></li>
<li><a href="https://support.zebra.com/cpws/docs/zpl/BQ_Command.pdf">https://support.zebra.com/cpws/docs/zpl/BQ_Command.pdf</a></li>
<li><a href="https://supportcommunity.zebra.com/s/article/GF-graphic-field-ZPL-command?language=en_US">https://supportcommunity.zebra.com/s/article/GF-graphic-field-ZPL-command?language=en_US</a></li>
</ul>
Thoughts on the bee weighter project2021-05-16T00:00:00+00:002021-05-16T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/thoughts-bee-weighter-project/<p>I was able to put the
<a href="https://github.com/peterbabic/bee-weighter">bee weighter</a> project
through some tests. Here are some thoughts and insights from the process.</p>
<p>Three rechargeable AA batteries put out 4 V under the minuscule load the
electronics represent, even including the load from the internal voltage
regulator. I was expecting them to give 3 x 1.2 V = 3.6 V, which would be
perfectly in the range of most of the components used. I am currently still
using the internal regulator, which wastes power, until I figure more
details out.</p>
<p>Storing the weight in dekagrams proved useful, as it uses the full
resolution the HX711 offers without the need for a decimal point, which is
<a href="https://www.reddit.com/r/arduino/comments/2fum7c/sprintf_outputs_a_question_mark_when_i_try_to/">a little bit problematic</a>.</p>
<p>I have used a logging interval of 6 hours, which gives me memory storage
for 64 days until the circular buffer overruns and I start losing entries.</p>
<p>The tests right now in the backyard should show how long the batteries
would last with the regulator still in place. Hopefully I can make another
version with proper measurements soon, since I have invested so much time
in putting the microcontroller to sleep to conserve power.</p>
<p>I wonder how big the market for this could be. With the alarming rate at
which bees are disappearing, I am not sure. There is also the question of
how long it would take a non-industrial beekeeper until their investment in
such a product paid off. For the industrial ones, though, Grafana sometimes
markets itself as a suitable way to monitor a beehive. I would love to see
the two integrated, and there are many open-source projects available
already.</p>
<p>This is the 67th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Unexpected naming conventions2021-05-15T00:00:00+00:002021-05-15T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/unexpected-naming-nconventions/<p>In IEC 61131-3 compatible programming environments, there are many
built-in functions for string manipulation. Unfortunately, the routines are
sometimes a little hard to find. Even though it is possible to search
through the available functions, searching by name alone sometimes won't
cut it.</p>
<p>Searching for a substring function with the terms <code>string</code>, <code>substr</code>
or just <code>sub</code> returned no relevant results. Fortunately, we live in a
world where the Internet is almost ever-present and it is possible to reach
for the knowledge stored there, provided someone else has already shared
it.</p>
<p>To get the leftmost N characters, the function <code>LEFT</code> is used with the
signature:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>STRING LEFT(STRING STR, INT SIZE)
</span></code></pre>
<p>Subsequently, to get the rightmost N characters of the string, for
instance to reduce a four-character year representation to two characters,
the function <code>RIGHT</code> can be used:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>STRING RIGHT(STRING STR, INT SIZE)
</span></code></pre>
<p>To get an actual substring, it is possible to daisy-chain these two
functions, cutting the string from both sides. There is, however, an even
shorter way, called <code>MID</code>, meaning the function returns the middle
part of the string, or the substring:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>STRING MID(STRING STR, INT LEN, INT POS)
</span></code></pre>
<p>I do not know who came up with these names, but in my mind, a middle is
a point on a line having the same distance from both of the line's ends,
which is clearly not the case for the <code>MID</code> function, as it starts at the
given POSition and takes the specified LENgth of characters. Hopefully I
will finally remember this non-intuitive set of functions now that I have
shared it.</p>
<p>This is the 66th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
How many bytes does time and weight need?2021-05-14T00:00:00+00:002021-05-14T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/how-many-bytes-time-and-weight-needs/<p>In my <a href="/blog/dividing-at24c32-eeprom-memory-space/">previous article</a> I
explained how to divide the memory into equal parts, so-called blocks. I
concluded that to utilize the entire memory, I could divide it into 256
blocks of 16 bytes or maybe even 128 blocks of 32 bytes. Having a full 32
bytes seems like overkill and would probably lead to wasting precious
space.</p>
<p>Also, many I2C (Wire) libraries have a fixed limit of 30 bytes per
write. I am currently using the <a href="https://github.com/Makuna/Rtc">Makuna Rtc</a> library
and it states this in a
<a href="https://github.com/Makuna/Rtc/blob/8d9cf374929f9bfde9fa862092d97110ccca9fe4/src/EepromAT24C32.h#L47-L48">source comment</a>.
This means it is not possible to write a full 32 bytes in one go. It is,
however, possible to split it into two writes without a problem. That is
not a technical limitation, it just adds more lines to the code. For this
reason, I will try to fit the data into 16 bytes, which is still a lot.</p>
<p>Having 16 bytes per block allows me to store 256 logged entries, meaning
either 256 days of one entry every 24 hours or 128 days of two entries a
day. The initial format for the stored data I have come up with looks like
this:</p>
<p><code>1620967937+12345</code></p>
<p>The data take the following form:</p>
<p><code>[unix epoch][flipping separator][weight in dekagrams]</code></p>
<p>This form fits into the 16 bytes. The flipping separator flips its sign
every time the entry is overwritten. This is essential, otherwise we would
not know which data are the most recent in the circular buffer data
structure. I discussed this topic a
<a href="/blog/notes-on-circular-queue-data-structure/">few days ago</a>. In reality
the data (truncated to just 6 blocks of 16 bytes for illustration) could
look like this:</p>
<pre data-lang="c" style="background-color:#2b303b;color:#c0c5ce;" class="language-c "><code class="language-c" data-lang="c"><span style="color:#d08770;">1620967977</span><span>+</span><span style="color:#d08770;">10080
</span><span style="color:#d08770;">1620967987</span><span>+</span><span style="color:#d08770;">10090 </span><span style="color:#65737e;">// the most recent entry
</span><span style="color:#d08770;">1620967937</span><span>-</span><span style="color:#d08770;">10040 </span><span style="color:#65737e;">// the least recent entry
</span><span style="color:#d08770;">1620967947</span><span>-</span><span style="color:#d08770;">10050
</span><span style="color:#d08770;">1620967957</span><span>-</span><span style="color:#d08770;">10060
</span><span style="color:#d08770;">1620967967</span><span>-</span><span style="color:#d08770;">10070
</span></code></pre>
<p>The data show an entry created every 10 seconds with the weight
gradually increasing by 10 dekagrams (100 grams). This could work without
much problem, but could it be done better?</p>
<h2 id="possible-optimizations">Possible optimizations</h2>
<ol>
<li>The library uses a non-standard epoch time,
<a href="https://github.com/Makuna/Rtc/blob/8d9cf374929f9bfde9fa862092d97110ccca9fe4/src/RtcDateTime.h#L23">starting at the 1st of January 2000</a>.
This shaves off a byte, because such an epoch time is currently around
<code>674252045</code>. The library provides tools to manipulate this epoch, so
I am sticking with it for now.</li>
<li>Since I am storing an epoch time already, I could find the division
between the least and the most recent entry by comparing these times,
which could shave off the flipping separator byte. I would not need any
separator at all, as the data are fixed-length and can be separated
simply by an offset.</li>
<li>If the data are stored at fixed time intervals, the epoch would not
be needed at all, since it could be calculated backwards in time. That
would require just the flipping separator and a weight, as there needs
to be at least one way to determine where the head of the circular
buffer currently is.</li>
<li><strong>Most importantly</strong>, since I am just trying to store integers, they
would take much less space stored in binary than as individual
characters. For instance, the epoch would fit into a mere 30 bits, and
even the full Unix epoch currently sits at 31 bits, which is just 4
bytes, down from the 9 or 10 bytes when stored as characters! Similarly,
the weight could be stored in just two bytes (a maximum of 65535
dekagrams, which is three times over its range anyway), giving me a
required block size of just 4 + 2 bytes, providing a whopping 682 log
entries!</li>
</ol>
<h2 id="next-steps">Next steps</h2>
<p>With 682 possible entries I could decide to <strong>either log entries more
often</strong>, possibly reducing the significance of occasional measurement
errors, or <strong>choose to log additional data, like temperature</strong>. The DS3231
has an internal thermometer to calibrate its oscillator frequency, and its
value is available to read. This means I could choose to log it as well.
What's more, it can even be
<a href="https://thecavepearlproject.org/2018/02/03/measuring-temperature-with-two-clocks/">very precise, even down to 0.002°C</a>.
Quite interesting for a beehive, as this project is measuring the weight of
potential honey remotely.</p>
<p>The library, however, provides a way to store an array of characters or
<a href="https://github.com/Makuna/Rtc/blob/8d9cf374929f9bfde9fa862092d97110ccca9fe4/src/EepromAT24C32.h#L49">C strings by default</a>,
not integers at a specified offset. I have to figure out how to use
integers natively with this library easily if I want to continue being
lazy, but it should not be hard.</p>
<p>This is the 65th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Dividing the AT24C32 EEPROM space2021-05-13T00:00:00+00:002021-05-13T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/dividing-at24c32-eeprom-memory-space/<p>The AT24C32 EEPROM that comes with the DS3231 RTC module has 32 kbit of
space, meaning it can store 4096 bytes, or 4096 characters when using an
8-bit character encoding like ASCII.</p>
<p>To save some useful data on it, we need to divide it into equal blocks that
would contain the data. If we want to utilize the entire available space,
we should focus on the divisors of the number 4096:</p>
<table><thead><tr><th>first factor</th><th>second factor</th></tr></thead><tbody>
<tr><td>1</td><td>4096</td></tr>
<tr><td>2</td><td>2048</td></tr>
<tr><td>4</td><td>1024</td></tr>
<tr><td>8</td><td>512</td></tr>
<tr><td>16</td><td>256</td></tr>
<tr><td>32</td><td>128</td></tr>
<tr><td>64</td><td>64</td></tr>
</tbody></table>
<p>These are all familiar numbers, the powers of 2 showing the factors
(whole-number divisors) of 4096. We can see, for instance, that we could
divide the memory into 1 block of 4096 bytes or 2048 blocks of 2 bytes
each. But these are not very useful choices. To get to some useful number,
we have to optimize either for the block size or for the number of blocks.</p>
<p>If we choose a block size of 32 bytes, so we can store quite a lot of
data at once, we are limited to 128 blocks, as we can see in the table
above:</p>
<p><code>4096 bytes / 32 bytes per block = 128 blocks</code></p>
<p>This would mean that our data logger would be able to store the last 128
entries. If we decide we need to support at least 256 different entries
(blocks) of data, we have to fit all the entry data into 16 bytes:</p>
<p><code>4096 bytes / 256 blocks = 16 bytes per block</code></p>
<p>Sixteen bytes per block, not great, not terrible. It all depends on the
data logged. If all the required information fits there, great. If not,
terrible. We could also choose some arbitrary number, like 21 bytes per
block:</p>
<p><code>4096 bytes / 21 bytes per block = 195.047619048 blocks</code></p>
<p>Of course, we are not going to deal with fractions of a byte here. The
above rather means that we have 195 full blocks available and some bytes
remain unclaimed somewhere, usually at the beginning or at the end of the
memory space. In this particular scenario, it is actually just a single
unused byte:</p>
<p><code>4096 bytes MODULO 21 bytes per block = 1 unclaimed byte</code></p>
<p>Or in other words:</p>
<p><code>195 blocks * 21 bytes per block = 4095 claimed bytes</code></p>
<p>This single byte can be used for some other purpose, like a
configuration flag. Just make sure it changes less often than the actual
blocks of data are logged, otherwise you will wear that byte out sooner!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://www.elecrow.com/rtc-eeprom-module-ds3231-at24c32-p-863.html">https://www.elecrow.com/rtc-eeprom-module-ds3231-at24c32-p-863.html</a></li>
<li><a href="https://en.wikipedia.org/wiki/Block_(data_storage)">https://en.wikipedia.org/wiki/Block_(data_storage)</a></li>
</ul>
<p>This is the 64th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Notes on circular queue data structure2021-05-12T00:00:00+00:002021-05-12T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/notes-on-circular-queue-data-structure/<p>While researching a method to store the data in electrically
erasable programmable read-only memory (EEPROM), I found out the
problem is not as straightforward as I had previously thought. Consider
the following requirements:</p>
<ol>
<li>Store the last N measured variables in order</li>
<li>Preserve the data after the power is removed</li>
</ol>
<p>The requirements seem clear on the surface, but they have some
unintended consequences. Consider the first requirement - storing a given
number of the most recent values. Either a stack or a queue would serve, if
their capacity was sufficient, meaning the data would be taken out sooner
than the capacity was reached. With a sufficiently large queue, we could
take all the data out and discard all but the last N values. With a stack
it would be even easier - just pop out the required number of entries.</p>
<h2 id="what-about-limited-storage">What about limited storage?</h2>
<p>The problem happens when the storage capacity is expected to fill up
before the data are taken out. Such a limitation forces us to start
discarding data once that happens. When either the queue or the stack is
full, no new data can be pushed into the structure, failing our first
requirement.</p>
<p>What is worse, once the number of stored entries matches the capacity of
the storage, meaning the queue or the stack is full, we would have no way
of telling how recent the data are - or, in other words, how many entries
were discarded.</p>
<h2 id="enter-circular-queue">Enter circular queue</h2>
<p>To overcome the problem of discarding the most recent data, a different
kind of data structure has to be used, one called a circular queue
(sometimes also called a circular FIFO or a ring buffer). A circular
queue has the property of discarding not the most recent data, but the
least recent data instead, providing the exact solution meeting our
first requirement. Note that there appears to be no such data structure as
a circular stack, or at least not a standardized one.</p>
<p>To meet the second requirement, we have to use non-volatile memory.
Non-volatile memory retains its contents after the power is removed. The
most readily accessible non-volatile memory is EEPROM, available as
stand-alone chips, though almost all microcontrollers, even low-end ones,
also have some EEPROM integrated among their peripherals. Let's ignore
other types of non-volatile storage, notably FRAM, for the sake of this
article, as other types of memory have different prices and market
availability and, more importantly, come with a different set of
limitations.</p>
<h2 id="limitations-of-eeprom">Limitations of EEPROM</h2>
<p>The main limitation of an EEPROM is its rather low number of rewrite
cycles, usually ranging from 100k to 1M rewrites <em>per bit</em>. While
EEPROMs usually allow some kind of page access to streamline reading and
writing blocks of data, they allow manipulating the data down to a single
bit, which is considered an advantage, but for practical reasons we
ignore this property here as well.</p>
<p>So how do we overcome the limited number of write cycles on EEPROM? The
technique is called <em>wear leveling</em>. Wear leveling means adjusting the
write frequency of every part of the memory to be virtually the same. In
other words, writing to the same place again only after all the other
places have been rewritten in the meantime.</p>
<h2 id="eeprom-and-the-circular-buffer">EEPROM and the circular buffer</h2>
<p>Wear leveling provides a nice symbiosis between EEPROM and the circular
buffer data structure, as a circular buffer does not need any data shifting
and basically has wear leveling built in. With new data, just move to the
next entry and overwrite it, looping back to the first position from the
last one (thus circular). Easy, right?</p>
<p>Well, not so simple - we still need to know which entry was written
last. This position is also called the <em>head</em>. The head position can be
stored in RAM, of course, but with a power outage, the head would be lost.
Even though our data are stored in non-volatile memory, without a way of
telling which entry was the last, we would fail the ordering part of our
first requirement. Note that the circular buffer also stores information
about its oldest entry, called the <em>tail</em>, but we omit the tail for
simplicity as well.</p>
<p>So how do we get around this limitation? There is quite a lot of useful
information about circular buffers in the link below:</p>
<p>The natural way to preserve the head position is to write it to EEPROM
instead of RAM, but where exactly? No matter where we choose to store it
in EEPROM, it will wear out at a much faster rate than the other entries
of the circular buffer, as this one needs updating every time an entry is
pushed. Searching around the Internet, the solutions do not seem to agree.
The most notable one is to use a <em>second</em> circular buffer for
storing the head, meaning our storage capacity is now reduced, in an
extreme case even halved, but wear leveling is achieved and both our
requirements are optimally met. Could it be done better?</p>
<h2 id="discarding-the-second-circular-buffer">Discarding the second circular buffer</h2>
<p>One particular
<a href="https://www.avrfreaks.net/comment/213726#comment-213726">comment on the AVRfreaks forum</a>
hints at the possibility of discarding the troublesome second circular
buffer while still providing wear leveling for the entire storage.</p>
<p>The idea is to rewrite not just the oldest entry with the most recent
value, but to also write a divider entry, with a known empty value, right
after it, effectively placing the divider between the head and the tail.
This way, to find the head, we just have to find that divider in the
circular buffer.</p>
<p>The details are more complex than this because of initialization and
such, so read the discussion if interested, but I wanted to point out this
neat little idea. Sometimes the most brilliant solutions are not the most
apparent. I must admit I really like this one, although I am not sure
there are no gotchas in this approach. Hopefully I will soon find out.</p>
<p>This is the 63rd post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Wakeup Pro Micro 3.3V with DS3231 module2021-05-11T00:00:00+00:002021-05-11T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/wakeup-pro-micro-3v3-with-ds3231-module/<p>After the previous few
<a href="/blog/fix-platformio-avrdude-input-output-error/">days of struggling</a> to
get the Arduino Pro Micro 3.3V version to
<a href="/blog/use-pin-7-wakeup-arduino-pro-micro/">go to sleep and wake up</a>,
I finally had an epiphany and got the desired result. It was made possible
by the massive amount of documented work I finally stumbled upon. Links are
at the end of the post, as always. They also prove that I am not the only
one trying to build battery-powered data-logging devices. Both my academic
theses were related to power management and data logging, albeit years ago.
They are public in the git section if anyone is interested.</p>
<h2 id="connections">Connections</h2>
<p>The connection table and a neat Fritzing visualization of the
connections can be seen below:</p>
<table><thead><tr><th>Pro Micro pin</th><th>DS3231 module pin</th></tr></thead><tbody>
<tr><td>GND</td><td>GND</td></tr>
<tr><td>D2</td><td>SDA</td></tr>
<tr><td>D3</td><td>SCL</td></tr>
<tr><td>D4</td><td>VCC</td></tr>
<tr><td>D7</td><td>SQW/INT</td></tr>
</tbody></table>
<p><img src="https://github.com/peterbabic/bee-weighter/blob/0f98d2d6d6d618abed064cb12517899fa1520e9f/docs/4-strain_bb.jpg?raw=true" alt="The connection of the DS3231 module to the Arduino Pro Micro 3.3V" /></p>
<h2 id="code">Code</h2>
<p>The code appears to be working, although it is not entirely
battle-tested yet. Note that PlatformIO was used to pull the dependencies,
and they are specified in
<a href="https://github.com/peterbabic/bee-weighter/blob/master/platformio.ini"><code>platformio.ini</code></a>.</p>
<p>The setup phase starts with driving pin D4 high, powering the DS3231
chip. This starts its internal TCXO oscillator, if not started already, and
sets the time and date. Then it enables <strong>Alarm 1</strong> (seconds
resolution) to provide the interrupt on the SQW/INT pin of the module,
which is connected to D7. D7 is in turn the only non-communication pin on
the ATmega32u4 that has an external interrupt, as discussed in the post I
made yesterday.</p>
<p>The loop part then again powers the DS3231 Real-Time Clock (RTC) chip
via D4, sets an alarm <code>secondsTillNextWakup</code> seconds in the future and
proceeds by powering the Micro down to conserve as much power as possible.
When the alarm is reached, the interrupt is generated, waking the Micro up
again, which in turn configures another alarm further into the future and
goes back to sleep. The cycle repeats.</p>
<h2 id="notes">Notes</h2>
<p>There are a few points to consider still:</p>
<ol>
<li>I have to make actual measurements of the consumed current and do some
more evaluation under the oscilloscope. I'll do that as soon as I am
back in the lab.</li>
<li>The resistor network RP1 on the DS3231 module was removed due to the
possibility of leakage current through the pull-ups it provides for the
SDA and SCL lines. The <code>Wire</code> library appears to enable the Micro's
internal pull-ups for I2C communication anyway, but they are probably an
order of magnitude weaker than recommended (50k vs 5k). It appears to be
working anyway, despite the warnings. There are some suggestions this
modification might not even be needed for the 3.3V boards.</li>
<li>The I2C on the DS3231 appears to be working even without driving the
VCC pin high via D4 <em>after</em> the RTC is already running on the backup
battery power. This probably needs some further investigation. The
datasheet says the I2C can run off the battery as well, and I would like
to understand the ways the current consumption can be optimized, since I
have invested time and effort in this topic already.</li>
</ol>
<p>This is the 62nd post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://forum.arduino.cc/t/ds3231-no-alarm-when-powered-off-why/334435/46">https://forum.arduino.cc/t/ds3231-no-alarm-when-powered-off-why/334435/46</a></li>
<li><a href="https://thecavepearlproject.org/2014/05/21/using-a-cheap-3-ds3231-rtc-at24c32-eeprom-from-ebay/">https://thecavepearlproject.org/2014/05/21/using-a-cheap-3-ds3231-rtc-at24c32-eeprom-from-ebay/</a></li>
<li><a href="https://batteryuniversity.com/index.php/learn/article/charging_lithium_ion_batteries">https://batteryuniversity.com/index.php/learn/article/charging_lithium_ion_batteries</a></li>
</ul>
Use pin 7 to wakeup an Arduino Pro Micro2021-05-10T00:00:00+00:002021-05-10T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/use-pin-7-wakeup-arduino-pro-micro/<p>When designing a circuit that is meant to run on low power, I have found
that it is a good start to choose a microcontroller board that has a base
voltage level lower than 5V. The reasons are twofold. First, a lower
voltage means lower power consumed, period. Second, there are many other
boards and modules made for 3.3V applications, and using 5V levels for
supply or communications like UART, I2C or SPI can actually damage them
without other protective parts in the circuit.</p>
<p>An <a href="https://www.sparkfun.com/products/12587">Arduino Pro Micro 3.3V</a> sold
by SparkFun is a good basis for 3.3V designs. It uses an ATmega32U4
microcontroller running at 8MHz. The lower frequency further helps keep the
current drain low, although this only counts while the micro is not asleep.
Putting the micro into sleep mode is almost a must for any application
running on limited power, especially when powered from a battery.</p>
<h2 id="external-interrupt-pins">External interrupt pins</h2>
<p>To wake the micro from sleep, or from power-down mode, based on some
external event, either an external interrupt pin or the reset pin can be
used. One notable example of using the reset pin was the TV-B-Gone circuit,
but even it
<a href="https://github.com/adafruit/Adafruit_Learning_System_Guides/blob/d9656b8ba59532926240ddee50b81160e2e3fd11/Flora_TV_B_Gone/Flora_TV_B_Gone.ino#L480">seems to be using interrupts now</a>.
The boards based on the ATmega328p have only two external interrupts,
situated on digital pins 2 and 3. The ATmega32u4, however, has three more
external interrupt pins, making a total of five external, possibly
non-periodic sources of wakeup. Two of the five are, however, used on the
UART pins. Using them complicates the serial communication and sketch
upload at the same time. For boards as delicately dependent on the serial
connection as the ATmega32u4-based ones, this is not a simple task.</p>
<p>The next two are on pins 3 and 2, similarly to the ATmega328p, but with
their numbering swapped. The swap wouldn't be such a big deal, but these
two pins are used for I2C on the ATmega32u4. With designs reliant on I2C
communication, this is also quite a problem. It can be solved with a
software I2C library, but that always has some downsides compared to
hardware peripherals.</p>
<p>There is, however, a fifth external interrupt on the ATmega32u4 that is
not used for anything communication-related, and that is pin 7. External
interrupt 4 is attached to it. The useful details can be seen in the table
below:</p>
<table><thead><tr><th>Board</th><th>int.0</th><th>int.1</th><th>int.2</th><th>int.3</th><th>int.4</th><th>int.5</th></tr></thead><tbody>
<tr><td>328p based (Uno, Ethernet)</td><td>2</td><td>3</td><td></td><td></td><td></td><td></td></tr>
<tr><td>2560 based (Arduino Mega)</td><td>2</td><td>3</td><td>21</td><td>20</td><td>19</td><td>18</td></tr>
<tr><td>32u4 based (Leonardo, Micro)</td><td>3</td><td>2</td><td>0</td><td>1</td><td><strong>7</strong></td><td></td></tr>
</tbody></table>
<p>Pin 7 is shown in bold in the table. Note that the ATmega2560 powering
the Arduino Mega has one additional external interrupt source, but it is
shown in the table only for comparison.</p>
<h2 id="waking-from-a-power-down-mode">Waking from a power down mode</h2>
<p>I have spent a considerable amount of time trying to make this procedure
work, following multiple forum posts and guides, but nothing seemed to work
for me. There is also a
<a href="https://github.com/rocketscream/Low-Power">Low-Power</a> library that is
sometimes recommended and claims to support the 32u4, but I could not make
it work for the purpose of using pin 7 (external interrupt number 4) to
initiate the wakeup.</p>
<p>Instead, I have modified a code from multiple posts found around
<a href="http://www.gammon.com.au">gammon.com.au</a>:</p>
<pre data-lang="cpp" style="background-color:#2b303b;color:#c0c5ce;" class="language-cpp "><code class="language-cpp" data-lang="cpp"><span style="color:#b48ead;">#include </span><span><</span><span style="color:#a3be8c;">Arduino.h</span><span>>
</span><span style="color:#b48ead;">#include </span><span><</span><span style="color:#a3be8c;">avr/sleep.h</span><span>>
</span><span>
</span><span style="color:#b48ead;">const int</span><span> wakeUpPin = </span><span style="color:#d08770;">7</span><span>;
</span><span style="color:#b48ead;">const int</span><span> ledPin = </span><span style="color:#d08770;">17</span><span>;
</span><span>
</span><span style="color:#b48ead;">void </span><span style="color:#8fa1b3;">wake</span><span>()
</span><span>{
</span><span> </span><span style="color:#bf616a;">sleep_disable</span><span>();
</span><span> </span><span style="color:#bf616a;">detachInterrupt</span><span>(</span><span style="color:#bf616a;">digitalPinToInterrupt</span><span>(wakeUpPin));
</span><span>}
</span><span>
</span><span style="color:#b48ead;">void </span><span style="color:#8fa1b3;">sleepNow</span><span>()
</span><span>{
</span><span> </span><span style="color:#bf616a;">set_sleep_mode</span><span>(SLEEP_MODE_PWR_DOWN);
</span><span> </span><span style="color:#bf616a;">noInterrupts</span><span>();
</span><span> </span><span style="color:#bf616a;">sleep_enable</span><span>();
</span><span> </span><span style="color:#bf616a;">attachInterrupt</span><span>(</span><span style="color:#bf616a;">digitalPinToInterrupt</span><span>(wakeUpPin), wake, LOW);
</span><span> </span><span style="color:#bf616a;">interrupts</span><span>();
</span><span> </span><span style="color:#bf616a;">sleep_cpu</span><span>();
</span><span>
</span><span>}
</span><span>
</span><span style="color:#b48ead;">void </span><span style="color:#8fa1b3;">setup</span><span>()
</span><span>{
</span><span> </span><span style="color:#bf616a;">pinMode</span><span>(wakeUpPin, INPUT_PULLUP);
</span><span>}
</span><span>
</span><span style="color:#b48ead;">void </span><span style="color:#8fa1b3;">loop</span><span>()
</span><span>{
</span><span> </span><span style="color:#65737e;">// Do something here
</span><span> </span><span style="color:#65737e;">// Example: Read sensor, data logging, data transmission.
</span><span> </span><span style="color:#bf616a;">pinMode</span><span>(ledPin, OUTPUT);
</span><span> </span><span style="color:#bf616a;">delay</span><span>(</span><span style="color:#d08770;">200</span><span>);
</span><span> </span><span style="color:#bf616a;">digitalWrite</span><span>(ledPin, HIGH);
</span><span> </span><span style="color:#bf616a;">delay</span><span>(</span><span style="color:#d08770;">500</span><span>);
</span><span> </span><span style="color:#bf616a;">digitalWrite</span><span>(ledPin, LOW);
</span><span> </span><span style="color:#bf616a;">delay</span><span>(</span><span style="color:#d08770;">200</span><span>);
</span><span> </span><span style="color:#bf616a;">pinMode</span><span>(ledPin, INPUT);
</span><span>
</span><span> </span><span style="color:#65737e;">// Now go to sleep
</span><span> </span><span style="color:#bf616a;">sleepNow</span><span>();
</span><span>}
</span></code></pre>
<p>The most critical modification here is the <code>INPUT_PULLUP</code> pin mode.
Without the pullup, with the pin dangling, the behavior of the micro in
relation to sleep was very erratic. On some pins the LED was even dimming.
Using the internal pullup resistor proved to reliably wake up the Arduino
Pro Micro's ATmega32u4 by shorting the pin to GND. It even allows uploading
a new sketch without a problem, which is obviously not possible when the
micro is powered down.</p>
<p>Note that in case of problems, short the RST pin to ground once for a
750ms window until sleep initiates, or twice for an 8s window, to load a
new sketch when you mess up and waking from sleep is not easy or possible.</p>
<p>This is the 61st post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="http://ww1.microchip.com/downloads/en/devicedoc/atmel-7766-8-bit-avr-atmega16u4-32u4_datasheet.pdf">http://ww1.microchip.com/downloads/en/devicedoc/atmel-7766-8-bit-avr-atmega16u4-32u4_datasheet.pdf</a></li>
<li><a href="http://www.gammon.com.au/interrupts">http://www.gammon.com.au/interrupts</a></li>
<li><a href="http://www.gammon.com.au/power">http://www.gammon.com.au/power</a></li>
<li><a href="https://forum.arduino.cc/t/isr-and-attachinterrupt-statement-to-toggle-power/568868/19">https://forum.arduino.cc/t/isr-and-attachinterrupt-statement-to-toggle-power/568868/19</a></li>
<li><a href="https://learn.sparkfun.com/tutorials/pro-micro--fio-v3-hookup-guide/all#example-2-hid-mouse-and-keyboard">https://learn.sparkfun.com/tutorials/pro-micro--fio-v3-hookup-guide/all#example-2-hid-mouse-and-keyboard</a></li>
<li><a href="https://www.arduino.cc/reference/en/language/functions/external-interrupts/attachinterrupt/">https://www.arduino.cc/reference/en/language/functions/external-interrupts/attachinterrupt/</a></li>
</ul>
Fix platformio avrdude input/output error2021-05-09T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/fix-platformio-avrdude-input-output-error/<p>I got to work on the very low-priority side project with the codename
<a href="https://github.com/peterbabic/bee-weighter">bee weighter</a> yesterday. It is
not meant to weigh the actual bees, but rather their entire house, to
determine if it is full of honey that could be extracted.</p>
<p>However, I got greatly slowed down by an unfortunate issue that manifests
itself as the following error, in its entirety:</p>
<p><code>avrdude: ser_open(): can't open device “/dev/ttyACM0”: Input/output error</code></p>
<p>There is of course a
<a href="https://unix.stackexchange.com/questions/645033/arduino-avrdude-ser-open-cant-open-device-dev-ttyacm1-input-output-err">Unix Stack Exchange thread</a>
discussing this, but the best thing I could extract off it was a sort of a
workaround:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> modprobe</span><span style="color:#bf616a;"> -r</span><span> cdc_acm && </span><span style="color:#bf616a;">sudo</span><span> modprobe cdc_acm
</span></code></pre>
<p>I had to run this every time before any command that was doing something
with the <code>/dev/ttyACM0</code> port, be it uploading a sketch into the Arduino Pro
Micro 3.3V/8MHz from SparkFun or accessing its serial port, to either
check the weight data or the datetime from the Real-Time Clock (RTC) chip.
The only exception when the <code>modprobe</code> was not needed was if the serial
port accessing commands were fired in a very short succession. It was very
frustrating.</p>
<p>At first I thought that the <code>linux-zen</code> kernel I
<a href="/blog/install-fdroid-arch-via-anbox/">use to run F-Droid apps on Linux</a>
might be the problem, and some searches suggested that a custom kernel could
be an issue, but booting the regular kernel shipped with Arch did not solve
the issue. Just to be sure, I dual-booted into Windows 10, and everything
worked there.</p>
<p>Then I stumbled upon
<a href="https://www.reddit.com/r/archlinux/comments/mqovt5/arduino_avrdude_ser_open_cant_open_device/gw3ji54?utm_source=share&utm_medium=web2x&context=3">this comment in a Reddit thread</a>
discussing the same problem, and it solved the issue for me.</p>
<p>Long story short, the thread suggested that the problem is probably kernel
related, pointing out kernel versions that exhibit this particular issue,
and the solution for me as well was disabling the USB autosuspend feature.
It is possible to disable USB autosuspend from at least three places:</p>
<ol>
<li>Via the bootloader kernel parameter (see Stack Exchange link above)</li>
<li>Via the <code>udev</code> rules (see links at the bottom)</li>
<li>Via <code>tlp</code> configuration (the solution from Reddit thread above)</li>
</ol>
<p>I had <code>tlp</code>, a power management tool for Linux, already installed.
The solution is to edit <code>/etc/tlp.conf</code>, then uncomment and set
<code>USB_AUTOSUSPEND=0</code>. After a reboot, the problem was gone.</p>
<p>Note that this probably means the laptop will last a little less on
battery power, so a specific <code>udev</code> rule for this device might be more
optimal. I could not get it to work that way, but would love to see such a
solution.</p>
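<p>For reference, a per-device <code>udev</code> rule might look like the following sketch. The <code>1b4f:9204</code> vendor:product pair is an assumption on my part for the SparkFun Pro Micro; check the actual IDs with <code>lsusb</code> and place the rule in something like <code>/etc/udev/rules.d/50-usb-power.rules</code>:</p>

```
# Keep this one device always powered ("on" disables autosuspend for it only);
# substitute the idVendor/idProduct values reported by lsusb for your board
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="1b4f", ATTR{idProduct}=="9204", TEST=="power/control", ATTR{power/control}="on"
```

<p>Reload the rules afterwards with <code>sudo udevadm control --reload</code> and replug the device.</p>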
<p>This is the 60th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links-for-udev">Links for udev</h2>
<ul>
<li><a href="https://hamwaves.com/usb.autosuspend/en/">https://hamwaves.com/usb.autosuspend/en/</a></li>
<li><a href="https://koen.vervloesem.eu/blog/disable-usb-autosuspend-for-specific-devices/">https://koen.vervloesem.eu/blog/disable-usb-autosuspend-for-specific-devices/</a></li>
</ul>
Insights from the Google Search Console2021-05-08T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/insights-google-search-console/<p>I have enabled the Google Search Console on this blog, mainly
because I do not have a search feature yet, but I like to reference my previous
posts related to the topic at hand. For now, I have been using Google
search to quickly find where I wrote about this or that.</p>
<p>The problems started when I found out that some posts, especially ones
that I had put a lot of energy into, were not showing up. Well, why should
I care? I do care about security and privacy, so why should I worry about
what Google thinks? Google does not respect the privacy of its users, so
their opinion should not matter.</p>
<h2 id="google-search-console">Google Search Console</h2>
<p>Google Search Console is a tool that aids with multiple things, like
importing sitemaps, detecting problems such as content falling outside the
viewport on mobile devices, removing pages from the search results, and of
course finding out reasons why some page is not indexed. Sometimes a page
even is indexed, but something else prevents it from being displayed in the
search results. And there are tons of possible reasons, apparently.</p>
<p>Before the Console shows anything useful, however, I had to prove
ownership of the domain. It can be done in multiple ways, like making a
readable file accessible from the web or setting a domain TXT record, and
some of them are quite similar to the ways ACME checks domain
ownership before issuing a TLS certificate. I have
<a href="/blog/wildcard-certificate-acme-sh/">written something about it</a> already
as well.</p>
<p>Another way to enable the Console functionality is to turn on Google
Analytics, for instance by inserting a line into the HTML source. I wanted to
avoid precisely this option, to preserve the privacy of anyone stumbling upon
the site. In the end, I started writing for myself, without the goal of
including advertising, so the analytics were not needed for targeting
anyway.</p>
<h2 id="txt-record-enables-analytics-too">TXT record enables Analytics too</h2>
<p>I have chosen the TXT record due to its simplicity, its non-intrusiveness
from the perspective of code version control (no files changed whatsoever)
and its instant coverage of the domain and subdomains as well.</p>
<p>Unbeknownst to me, this option turns on the Analytics anyway, without any
apparent option to turn it off. You can check anytime whether it is still
turned on via:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">dig</span><span> peterbabic.dev TXT | </span><span style="color:#bf616a;">grep -i</span><span> google
</span></code></pre>
<p>If the output shows the string <code>google-site-verification</code>, the Analytics
are still turned on, so beware.</p>
<h2 id="indexed-but-unlisted-pages">Indexed but unlisted pages</h2>
<p>With the Console activated, I went probing for the pages missing from the
search results. Most of them had no actual problem: they were
crawled already (visited by the Google bot) and were included in the index.
The stated reason why around 28 pages, which at the time of writing was about
a third of all posts, were not showing up was
<code>page with redirect</code>. I knew what a redirect is, but I had no idea what it
meant in this context.</p>
<p>Was I somehow setting a permanent 301 redirect or a temporary 302
HTTP status code redirect somewhere in my application? And if so, then
where? Was it in the Nginx configuration, or was the Single Page App
(SPA) router responsible? These were the questions I did not know I needed
answers for, and would still be blissfully ignorant of, had I not turned
the Console on.</p>
<h2 id="nginx-or-spa-router-to-blame">Nginx or SPA router to blame</h2>
<p>I spent the better part of the evening probing the Nginx configuration as
it was much shorter than the app's code, so the problem there could be
ruled out much sooner. Unfortunately, this went nowhere. Or rather, it led
to the conclusion that the problem is definitely not in the configuration
of Nginx. The problem had to lie in the page router.</p>
<p>The router in an SPA maps patterns in the URL to the different parts
of the application. This means that even though the app is only a single
page, the router still manages to change the URL after clicking on a link.
This makes sure that when refreshing the page, users land where they were
before the refresh. Without the router, the URL would not change and it would
always be just the domain name, like <code>https://example.com</code>, meaning every
time a user refreshes, they land on the root page.</p>
<p>This routing is also essential for Search Engine Optimization (SEO). For
a page to be crawled, indexed and listed, it has to have its own
unique URL. The possibility of just clicking around to get to the desired
content in the application is not sufficient for SEO; a unique URL is a
necessary condition.</p>
<h2 id="trailing-slash-inconsistency-with-spa">Trailing slash inconsistency with SPA</h2>
<p>Since this routing is done in the application code, the redirects are
buried there as well, and therein lies the problem. If the router
exhibits <em>inconsistent</em> behavior, it is not good for SEO. The
issue I discovered I am dealing with is an inconsistency in the
trailing slash.</p>
<p>The router was creating the URLs in a way that they lacked the trailing
slash:</p>
<p><code>https://example.com/blog/my-glorious-unlisted-post</code></p>
<p>Also, all the links on the page were constructed in precisely the same way.
But, the full URLs are still being 301 redirected, most probably by the
router, to the version with the trailing slash at the end:</p>
<p><code>https://example.com/blog/my-glorious-unlisted-post/</code></p>
<p>These two addresses look the same but are in fact different from the
perspective of the search engine. In the past, the convention was that an
URL without the trailing slash represents a file, while one with the
trailing slash represents a folder. This was very similar to the file
browsers at the time.</p>
<p>That some URL represents a file was also supported by the fact that it
showed an extension, for instance <code>.html</code>. But displaying extensions in
the URL is mostly gone these days, and listing the contents of a folder
directly is also not part of any SEO strategy, because such a listing
only shows more folders and file names, which is not
that much useful content to index.</p>
<p>So it made sense in the past for the two addresses to display different
content (the contents of the file, even without the extension, in the first
example; the contents of a folder in the second one). We now want the
two addresses to represent the exact same content, and for SEO to be aware
of this, there should be consistency in the redirects and in how links are built.</p>
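<p>For illustration, the server itself could enforce the slashed form consistently, so only a single canonical URL ever gets indexed. This is a hedged Nginx sketch, not my actual configuration:</p>

```
# 301-redirect extensionless blog URLs lacking a trailing slash
# to the version with one, matching what the SPA router produces
location ~ ^(/blog/[^./]+)$ {
    return 301 $1/;
}
```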
<h2 id="conclusion">Conclusion</h2>
<p>I have turned on the functionality the Google Search Console offers by
proving ownership of the domain through including a TXT record among my other
DNS records. I have inadvertently turned on Google Analytics by this, and I
feel cheated, because it looks like I have no option to turn the
Analytics off, at least not with the DNS option, while at the same time
still using the other Console features, mainly checking whether individual links
are being indexed.</p>
<p>By checking the details of the links, I have found an inconsistency in
the trailing slash redirects I did not know about before, and that most probably
the router in the SPA is responsible. With no easy fix in sight, I am
keeping this issue in my backlog.</p>
<p>This is the 59th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Using UUID in an Atom feed2021-05-07T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/using-uuid-in-atom-feed/<p>This is again a blend of my
<a href="/blog/my-blog-has-feed-now/">previous post announcing a feed</a> and the
<a href="/blog/cheatsheet-uuid/">post before it discussing the uuid command</a>. The fact
that these two came one after the other is no coincidence, even though they
seem to have very little in common. So what is the connection between the
Atom feed and Universally Unique IDentifiers (UUID)?</p>
<h2 id="atom-id-specification">Atom ID specification</h2>
<p>In the <a href="https://tools.ietf.org/html/rfc4287#page-19">RFC4287</a>, an IETF
standard describing the Atom format specification, the <strong>atom:id</strong> element
is defined as:</p>
<blockquote>
<p>The "atom:id" element conveys a permanent, universally unique identifier
for an entry or feed. Its content MUST be an IRI, as defined by
<a href="https://tools.ietf.org/html/rfc3987">RFC3987</a>.</p>
</blockquote>
<p>What does IRI stand for? It is an Internationalized Resource Identifier, and
while there is a lot of reading to find out more, for all practical purposes it
boils down to the fact that the IRI is a
superset of the Uniform Resource Identifier (URI), meaning that every URI is
also an IRI, but only some IRIs are also URIs. The difference lies in the
character set: IRI uses an extended encoding.</p>
<p>Now we know that an ID used in an Atom feed can be anything that we would
call an URI. An URI, for use in an Atom feed, can be further broken down
into two groups: the Uniform Resource Locator (URL) and the less known Uniform
Resource Name (URN). URI is a superset of both URL and URN, meaning that
every URL is also an URI and, similarly, every URN is also an URI. URN and
URL are disjoint, however, meaning no URL is an URN at the same time and
vice-versa. Put in other words, an Atom ID can be either an URL or an
URN.</p>
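<p>In the feed markup, the two options look like this (the URL value is made up; the URN value is taken from the example feed in RFC 4287):</p>

```
<id>https://example.com/blog/my-post/</id>
<id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
```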
<h2 id="url-vs-urn-which-one-to-choose">URL vs URN: which one to choose?</h2>
<p>While choosing an URL for an Atom ID complies with the standard, it has
some disadvantages. Specifically, sometimes the URL changes, and it happens
far
<a href="https://en.wikipedia.org/wiki/Link_rot">more often than you may think</a>,
although there appears to be no conclusive number yet.</p>
<p>As cited above, the Atom ID element has to be <strong>permanent</strong>, and for that
matter an URN is far better suited. Translated into practice, using an URN
would mean that an RSS or Atom feed client keeps a post marked as read
and in favorites even when the blog moves to a different domain, changing
its URL in the process. In reality, the number of clients ranges between a
plethora and a superfluity, and every one of them does something a little bit
differently, but this is what the theory says.</p>
<h2 id="does-urn-need-to-be-registered">Does URN need to be registered?</h2>
<p>For the majority of URNs, for instance when referring to an International
Standard Book Number (ISBN) in the form of <code>urn:isbn:</code>, the URN has to be
registered with a central authority. But there is at least one URN
group that does not need registration, and that is the UUID group, under
the handy <code>urn:uuid:</code> designation. This is where URN and UUID overlap
and form a useful partnership for identifying resources long after their
original location is gone.</p>
<p>For a moment, imagine a scenario where some future archaeologist finds an
USB stick with the feed file containing the whole Atom feed of your blog.
The technology by then could for instance use some
<a href="https://www.nature.com/articles/d41586-021-00534-w">biologic mechanism to store data</a>
instead of using silicon, but they might still want to examine the data. If
the posts were identified by URLs, it could be hard to link them to
the data in some global database, as domain names expire every year;
matching could only be done by comparing the contents against everything in the
database. A defined identifier such as a UUID, on the other hand,
could easily be matched against the database records. Even UUID
collisions, improbable today but very possible in the future, could
be sorted out easily by comparing the contents of entries against all
matched UUIDs. In such a scenario, using URNs would reduce the search space
drastically: at the very least, from comparing against the whole database down to
just comparing UUID collisions.</p>
<h2 id="which-uuid-version-to-use">Which UUID version to use?</h2>
<p>There are two places where an Atom ID is specified in the feed. One is for
identifying the feed itself while other is for identifying individual
entries. Both IDs benefit from using UUID URN for their value. We have
already learned in the <a href="/blog/cheatsheet-uuid/">cheatsheet</a> that there are
5 specified UUID versions, of which 3 are available and recommended for new
designs. So which one to choose, and where? This is actually where it gets
pretty hairy.</p>
<h3 id="uuid-version-4-for-the-entries">UUID version 4 for the entries?</h3>
<p>The <a href="https://tools.ietf.org/html/rfc4287#page-3">specification</a> only
mentions UUIDs in one example, and it is a UUID version 1 for the feed itself
and UUID version 4 for the entries. All the other sources I could find are
very vague on this topic, but generally, using UUID version 4, which has the
property of being completely <strong>random</strong>, is very common for the entries.
This approach implies that such a UUID is generated when the entry is first
stored in the database, stored along with the entry itself, and not changed
afterwards.</p>
<p>An identical approach is used elsewhere, for instance for
<a href="https://wiki.archlinux.org/title/Persistent_block_device_naming#by-uuid">persistent block device naming</a>,
which means that when you turn on the computer, the operating system starts
from the same disk every single time. Generating UUIDs for the block
devices in your system once and referring to them by this ID later prevents a
so-called race condition error, which in this example would happen when
some device got loaded sooner than usual and obtained a wrong identifier,
resulting in occasional failures during system startup.</p>
<h3 id="uuid-version-1-for-the-feed">UUID version 1 for the feed?</h3>
<p>I could not understand why UUID version 1 was used for the feed in the
specification. Even worse, UUID version 1 has specific use cases, and it
seems to me that its usage is discouraged as a safety concern unless its
precise properties of <strong>predictability</strong> and, to a lesser extent,
<strong>sequentiality</strong> are required. Wrongly identifying a feed has no security
implications I could think of, so a UUID version 1 can work perfectly here. In
the end, even the blog URL could be put there. I have decided on a different
approach, however.</p>
<h2 id="approaches-for-feeds-uuid-generation">Approaches for feeds UUID generation</h2>
<p>There are multiple ways I could think of to generate the UUID for the feed
that I was considering:</p>
<ol>
<li>Use UUID version 1 and store it</li>
<li>Use UUID version 4 and store it</li>
<li>Use UUID version 5 with the DNS or URL namespace prefix and the blog's
domain as a name</li>
<li>Use UUID version 5 with the NIL UUID as namespace prefix and the blog's
URL as a name</li>
</ol>
<p>Since UUID version 5 has the property of <strong>reproducibility</strong>,
approaches 3 and 4 would serve if I were to generate the UUID multiple
times: with the same input, UUID version 5 produces the same output. This
would however not hold if the domain (the input data) changed, completely
defeating the purpose of permanent identification. This means approaches 1
and 2, where the UUID is generated once, stored and used afterwards, are
preferable. Having ruled out approach 1 already, I was left with <strong>using UUID
version 4, storing it in the code and using it as the feed ID</strong>.</p>
<h3 id="uuid-version-5-namespaces">UUID version 5 namespaces</h3>
<p>The reproducibility property of the version 5 UUID is however very useful
for the statically generated blogs, especially the ones that are completely
git powered, meaning without the database. I for instance retrieve
publication and modification dates
<a href="/blog/how-commit-history-tells-when-post-published/">from the git commit history</a>
and even following <a href="/blog/following-renames-in-gitlog/">file renames</a>. I
have also found that Hugo, another static site generator, popular for
blogging can be
<a href="https://mertbakir.gitlab.io/hugo/last-modified-date-in-hugo/">configured to do the same</a>,
so this approach is probably not too far-fetched.</p>
<p>Since I have no way of storing a generated version 4 UUID in a database,
as there is none, I could only store it in the post markdown file itself,
most conveniently in the Front Matter section. I am a lazy person: I do
not store dates in the Front Matter manually either, as pointed out above.
Automating everything is a challenge, but it pays off with the
increasing frequency of the automated event happening (search also for <em>geeks
and repetitive tasks</em>).</p>
<h2 id="uuid-version-5-for-entries">UUID version 5 for entries</h2>
<p>With the above in mind, I got to generate a reproducible UUID version 5 for
the entry ID. As we have already learned, version 5 UUID requires two
pieces of input data - the UUID prefix and a name. For the prefix I have
chosen the UUID version 4 for the feed itself and for the name I have
chosen the hash of the commit the post was introduced with.</p>
<p><strong>Atom entry UUID version 5</strong> = <strong>Atom feed constant UUID</strong> as a prefix +
<strong>git commit hash</strong> as a name</p>
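<p>Sketched as a one-liner, using Python's standard library for the deterministic version 5 derivation (the feed UUID and the commit hash below are placeholder values for illustration, not my real ones):</p>

```shell
# namespace = the feed's constant UUID, name = the post's introducing commit hash
FEED_UUID="886313e1-3b8a-5372-9b90-0c9aee199e5d"   # placeholder
COMMIT_HASH="0123abc"                              # placeholder
python3 -c 'import sys, uuid; print(uuid.uuid5(uuid.UUID(sys.argv[1]), sys.argv[2]))' \
    "$FEED_UUID" "$COMMIT_HASH"
```

<p>Running it twice with the same inputs always prints the same UUID, which is exactly the property the entry ID needs.</p>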
<p>This way, the entry ID is guaranteed to be generated the same every time,
unless I change the feed UUID, which I have no reason to do and which is
also stored in the version history to prevent loss, or unless I rewrite
the git commit history, which should generally be avoided at all
costs.</p>
<p>That's it. As a side note, I was considering using the post's slug (which in
my setup is the post's filename without the <code>.md</code> extension, another thing
that I do not store in the Front Matter), but slugs do change, if very rarely,
for some SEO modifications. In my setup, as already pointed out,
renames are followed, so the dates would not get disrupted; but the hash of
the commit that introduced the file, even before the renaming, does not
change at all, as long as the rename gets recognized by git itself.</p>
<p>This is the 58th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="http://blevs.github.io/pollen-feed-tutorial/#%28part._.U.U.I.D_.U.R.N%29">http://blevs.github.io/pollen-feed-tutorial/#%28part._.U.U.I.D_.U.R.N%29</a></li>
<li><a href="https://en.wikipedia.org/wiki/Uniform_Resource_Name">https://en.wikipedia.org/wiki/Uniform_Resource_Name</a></li>
<li><a href="https://fusion.cs.uni-jena.de/fusion/blog/2016/11/18/iri-uri-url-urn-and-their-differences/">https://fusion.cs.uni-jena.de/fusion/blog/2016/11/18/iri-uri-url-urn-and-their-differences/</a></li>
<li><a href="https://stackoverflow.com/questions/10867405/generating-v5-uuid-what-is-name-and-namespace">https://stackoverflow.com/questions/10867405/generating-v5-uuid-what-is-name-and-namespace</a></li>
<li><a href="https://stackoverflow.com/questions/7724903/where-do-uuid-namespaces-come-from">https://stackoverflow.com/questions/7724903/where-do-uuid-namespaces-come-from</a></li>
</ul>
My blog has a Feed now!2021-05-06T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/my-blog-has-feed-now/<p>A few days ago, <a href="https://rusingh.com/">Ru</a> asked me if the RSS on my blog
is available.</p>
<blockquote>
<p><strong>Hint:</strong> it wasn't.</p>
</blockquote>
<p>Go check out Ru's work, by the way. She is actually investing a lot of thought
into whatever she is doing, so expect to find something worthwhile around
her.</p>
<p>But the fact is, this blog's feature list is very slim, mainly because I am
trying to build the blog without the database - it is based entirely around
git and generated statically. The resulting workflow of such setup suits my
command line addiction, but I have to balance between hacking on the blog
and creating the actual content. You may guess which one of these two
activities I totally avoid most of the year.</p>
<p>Anyway, an Atom or RSS feed is a pretty indispensable feature, so I went and
squeezed some time in for an actual implementation, as there were no
excuses left for not having one at this point.</p>
<h2 id="implementation-insights">Implementation insights</h2>
<p>The bulk of the work for RSS on Sapper (a part of Svelte ecosystem) has
been documented at <a href="https://lacourt.dev/2019/06/29">lacourt.dev</a> and it was
tailored for Atom further at
<a href="https://dev.to/cleverguy25/rss-atom-and-site-map-for-svelte-sapper-blog-part-3-e0">dev.to</a>.
I have chosen to implement Atom over RSS 1.0 or RSS 2.0, because Atom is a
defined <a href="https://tools.ietf.org/html/rfc4287">IETF standard</a> meaning it is
less likely to change unexpectedly, providing a potential for a long term
stability.</p>
<p>The structure of the RSS file is XML, meaning it will not readily accept
HTML. My HTML is automatically generated, so I tried tweaking some
parameters, trying to make the generated HTML as compliant as possible.
After resolving a chain of a few errors I got stuck, so I went looking for a
different way.</p>
<p>The HTML can of course be included in XML, and there are at least two common
ways to do so. One is to encode the content in Base64 and the
other is to use a CDATA section. I went with CDATA, as I could not find out
whether RSS clients decode Base64 automatically or there is some way to tell
them to do so. Also, CDATA is more verbose, meaning that when inspecting the
generated feed file manually I can see what is going on straight away.</p>
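<p>For illustration, a CDATA-wrapped entry content in the Atom XML might look like this (element names per the Atom spec; the content itself is made up):</p>

```
<content type="html"><![CDATA[<p>The post <em>body</em> as raw, unescaped HTML.</p>]]></content>
```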
<p>I have also discovered that there is already a library for feeds called
<a href="https://github.com/syntax-tree/xast-util-feed"><code>xast-util-feed</code></a>,
part of the <a href="https://unifiedjs.com/">unified.js</a> ecosystem. I use
many components they provide for the markdown to HTML transformation, and this
one could be used readily as well. I might adopt it in the future as part of
optimizations. Currently the file itself is probably over 1 MB in size and it
ships all the CSS classes and other similar stuff that might not be
needed, so there is definitely some room for improvement.</p>
<p>Finally, I provide categories as a concatenated string for now, as many of
my posts do not have a clear category, and I plan to rework that a little
bit in the near future, so expect some changes in this area as well. Deciding
whether I should provide only an excerpt or the full post was easy for me,
as Kev covered it precisely on
<a href="https://kevq.uk/why-having-a-full-post-rss-feed-is-a-good-idea/">his blog post</a>,
clearing any doubts.</p>
<p>The feed should be automatically discovered via <code>peterbabic.dev</code>, but if
not, the route to it is at <a href="https://peterbabic.dev/atom.xml">https://peterbabic.dev/atom.xml</a>. Enjoy!</p>
<p>This is the 57th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li>https://stackoverflow.com/questions/4412395/is-it-possible-to-insert-html-content-in-</li>
<li><a href="https://validator.w3.org/feed/docs/atom.html">https://validator.w3.org/feed/docs/atom.html</a></li>
<li><a href="https://www.mnot.net/rss/tutorial/">https://www.mnot.net/rss/tutorial/</a></li>
</ul>
Cheatsheet: uuid2021-05-05T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/cheatsheet-uuid/<p>Today I've learned about the different versions of Universally Unique
IDentifiers (UUID) and decided to compile a short cheatsheet for the <code>uuid</code>
command supplied by the
<a href="http://www.ossp.org/pkg/lib/uuid">Open Source Software Project (OSSP)</a>.
Note that there is also a <code>uuidgen</code> command, shipped with
<a href="https://en.wikipedia.org/wiki/Util-linux"><code>util-linux</code></a>, but I find the
OSSP version's command parameters easier to remember.</p>
<ul>
<li>Generate a version 1 UUID, which is based on time and the system's hardware
address, if present. You probably
<a href="https://tools.ietf.org/html/rfc4122#section-6">do not want to use this option</a>
for security-related tasks due to its possible <strong>predictability</strong>:</li>
</ul>
<p><code>uuid</code></p>
<ul>
<li>Generate a version 4 UUID, which is based on random data. This is the
most generic option, without any security considerations or requirements.
Its main property is <strong>randomness</strong>:</li>
</ul>
<p><code>uuid -v4</code></p>
<ul>
<li>Generate a version 5 UUID, which is based on the supplied object name
with a specified namespace prefix, using a SHA-1 hash function. Its
main property is <strong>reproducibility</strong>:</li>
</ul>
<p><code>uuid -v5 ns:DNS|URL|OID|X500 object_name</code></p>
<p>This is the option that took me the most time to understand. The topic is
quite broad and would require another post. For now, suffice it to say that a
version 5 UUID is used primarily as a
<a href="https://en.wikipedia.org/wiki/Uniform_Resource_Name">Uniform Resource Name (URN)</a>.
URNs are meant to be persistent identifiers, meaning they remain valid
long after the resource they identify is no longer available.</p>
<h2 id="other-uses">Other uses</h2>
<ul>
<li>Generate multiple version 4 UUID identifiers at once:</li>
</ul>
<p><code>uuid -v4 -n count</code></p>
<ul>
<li>Generate a version 4 UUID and specify the output format, useful when
a binary or Single Integer Value (SIV) representation is
required, as opposed to the default, well-known string representation:</li>
</ul>
<p><code>uuid -v4 -F BIN|STR|SIV</code></p>
<ul>
<li>Generate a UUIDv4 and write the output to a file:</li>
</ul>
<p><code>uuid -v4 -o path/to/file</code></p>
<ul>
<li>Decode a given UUID:</li>
</ul>
<p><code>uuid -d uuid</code></p>
<p>Although it is by design not possible to trace the source of the
information just by looking at the UUID <em>directly</em>, decoding can be useful
when debugging an application, and not many command-line tools provide this
functionality, so it is worth keeping in mind.</p>
<h2 id="what-about-version-2-and-version-3">What about version 2 and version 3?</h2>
<p><strong>Version 2</strong> UUID is reserved for internal use only and not readily
available.</p>
<p><strong>Version 3</strong> is still supported and readily available, but not covered
here. It has the same properties as version 5, but it uses an MD5 hash,
which is already considered cryptographically broken. Version 3 is
thus not recommended for use in new designs.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://manned.org/uuid.1">https://manned.org/uuid.1</a></li>
<li><a href="https://stackoverflow.com/questions/20342058/which-uuid-version-to-use">https://stackoverflow.com/questions/20342058/which-uuid-version-to-use</a></li>
<li><a href="https://tools.ietf.org/html/rfc4122">https://tools.ietf.org/html/rfc4122</a></li>
<li><a href="https://www.uuidtools.com/uuid-versions-explained">https://www.uuidtools.com/uuid-versions-explained</a></li>
</ul>
<p>This is the 56th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Markdown posts by word count in bash2021-05-04T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/markdown-posts-word-count-bash/<p>I wanted to quickly overview the word count on my blog posts to roughly
calculate the possible translation count and here's a one-liner I have come
up with:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">find</span><span> .</span><span style="color:#bf616a;"> -maxdepth</span><span> 1</span><span style="color:#bf616a;"> -type</span><span> f</span><span style="color:#bf616a;"> -name </span><span>"</span><span style="color:#a3be8c;">*.md</span><span>"</span><span style="color:#bf616a;"> -exec</span><span> printf "</span><span style="color:#a3be8c;">{} </span><span>" </span><span style="color:#96b5b4;">\;</span><span style="color:#bf616a;"> -exec ~</span><span>/.local/bin/mwc {} </span><span style="color:#96b5b4;">\; </span><span>| </span><span style="color:#bf616a;">awk </span><span>'</span><span style="color:#a3be8c;">{print $2 " " $1}</span><span>' | </span><span style="color:#bf616a;">sort -rnk1
</span></code></pre>
<p>The output should look similar to this:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>1862 ./becoming-better-presentation-creator.md
</span><span>1739 ./make-ssh-prompt-password-keepassxc.md
</span><span>1619 ./are-otp-secrets-stored-plaintext.md
</span><span>1602 ./how-not-create-node-executable-arm.md
</span><span>1596 ./three-reasons-spent-time-nature-programmer.md
</span><span>1536 ./keep-gnome-shell-settings-dotfiles-yadm.md
</span><span>1407 ./how-update-gooogle-calendar-pre-push-hook.md
</span><span>1390 ./story-about-nfc-thinkpad-t470.md
</span><span>1211 ./building-on-your-previous-work.md
</span><span>1179 ./lockdown-travel-sms-sync-phone-reset.md
</span><span>1038 ./most-useful-keyboards-android.md
</span><span>1033 ./how-use-flashrom-archlinux-arm.md
</span><span>...
</span></code></pre>
<p>The <code>mwc</code> command should exclude punctuation, footnotes and other markdown
specialties, but I have not done any extensive research yet. It should
however be possible to draw a general conclusion about the translation costs.
I am wondering whether translators are accustomed to translating markdown already.</p>
<h2 id="requirements">Requirements</h2>
<p>The above line requires <code>mwc</code> command, a python
<a href="https://github.com/gandreadis/markdown-word-count">markdown-word-count</a>
script. Install via pip:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pip3</span><span> install markdown-word-count
</span></code></pre>
<p>Apart from the script, the line only requires standard GNU commands.</p>
<h2 id="links">Links</h2>
<ul>
<li>Passing <code>ls</code> output into <code>xargs</code> can introduce many security risks
<a href="https://stackoverflow.com/questions/6958689/running-multiple-commands-with-xargs">link</a></li>
<li>It might be better to consider using <code>find -exec</code> instead
<a href="https://askubuntu.com/a/1072092/350681">link</a></li>
<li>There are unavoidable security problems surrounding use of the <code>-exec</code>
action; you should use the <code>-execdir</code> option instead
<a href="https://man.archlinux.org/man/core/findutils/find.1.en#ACTIONS">link</a></li>
<li>Simply passing multiple <code>-execdir</code> parameters to <code>find</code> is sufficient
<a href="https://stackoverflow.com/a/6043896/1972509">link</a></li>
<li>Narrowing results of the <code>find</code> command is optional
<a href="https://stackoverflow.com/a/10523492/1972509">link</a></li>
<li>Using <code>awk</code> for swapping columns is very easy
<a href="https://stackoverflow.com/questions/11967776/swap-two-columns-awk-sed-python-perl#comment89102501_41037458">link</a></li>
<li>Sorting the output via the column is specified via <code>-k</code> parameter
<a href="https://stackoverflow.com/questions/6438896/sorting-data-based-on-second-column-of-a-file">link</a></li>
</ul>
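<p>Putting the advice from these links together, a rough sketch of a variant
that uses <code>-execdir</code> and drops the extra <code>printf</code> could look like this;
plain <code>wc -w</code> stands in for <code>mwc</code> here, so the counts will also include
markdown syntax:</p>

```shell
# Word counts per markdown file, largest first. wc -w already prints
# "count filename", so no awk column swap is needed.
find . -maxdepth 1 -type f -name "*.md" -execdir wc -w {} \; | sort -rnk1
```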
<p>This is a 55th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
On federated code hosting2021-05-03T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/on-federated-code-hosting/<p>I have written a little about the possibility of Single-Sign-On (SSO) for
Gitea <a href="/blog/release-gitea-1-14-0/">getting closer to being a reality</a>.
Today I have stumbled upon a relatively different approach that I have
found interesting: federated code hosting.</p>
<p>While still in the draft phase, the concept seems really intriguing. The
same way Pleroma, Mastodon or
<a href="https://fediverse.party/en/fediverse">a handful of other services</a>
federate (roughly meaning exchange data) with each other using the
<a href="https://www.w3.org/TR/activitypub/">ActivityPub protocol</a>, the same
general idea could be brought to GitLab, gogs, Gitea and quite possibly to
other code hosting platforms too. Maybe even non-git-based version
control systems could be included for maximum collaboration, since once the
protocol is agreed upon, the implementation becomes easier. This all would
require some ActivityPub protocol extensions, but would bring in the
advantages of self-hosted platforms, in this case the code hosting ones.</p>
<p>Such an approach would be decentralized and generally opposed to the
Single-Sign-On one briefly discussed before, as SSO is inherently
centralized. While both approaches would bring the possibility of a single
identity over multiple networks, the federated code hosting goes far
beyond, including, but not limited to, new ticket federation, ticket comment
federation, federated push activities and federated repository following,
among a few of the proposed features.</p>
<p>I am a big fan of decentralized and self-hosted applications, self-hosting
not just Pleroma but Gitea too and a handful of other services with related
properties. That's why the idea of both the code hosting and the
federation at the same time is very appealing to me. But unlike SSO,
which seems almost already available for Gitea, the federation is still far
off. The <a href="https://notabug.org/peers/forgefed/issues">issues at Forgefed</a>
look like they ground to a halt around 9 months ago, at the time of writing
this post.</p>
<p>I would not be surprised if some other, probably completely unrelated,
parallel development was happening somewhere else, but I have only just
discovered this very concept, so maybe I will learn more as time
goes on. Hopefully it won't take too long.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://feneas.org/federated-code-hosting/">https://feneas.org/federated-code-hosting/</a></li>
<li><a href="https://forgefed.peers.community/">https://forgefed.peers.community/</a></li>
<li><a href="https://notabug.org/peers/forgefed">https://notabug.org/peers/forgefed</a></li>
</ul>
<p>This is a 54th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Smarter global search for vim and fzf2021-05-02T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/smarter-global-search-fzf-vim/<p>The first jab I received yesterday removed most of my productivity
today, as I was shackled to the bed until the very evening. Here are two
more improvements for my vim fzf series, described most recently in the
<a href="/blog/smart-global-search-fzf-vim/">previous post</a> and some other posts
mentioned there, tracking my overall progress on the issue. The current
setup is quite useful, but it can be improved even further.</p>
<h2 id="differentiate-between-global-and-local-search">Differentiate between global and local search</h2>
<p>It is possible to force some fzf actions to operate on the global home
scale and some on the current working directory in a sort of a hybrid
approach. I have configured it in <code>.zshrc</code> like this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#b48ead;">export </span><span style="color:#bf616a;">FZF_DEFAULT_COMMAND</span><span>="</span><span style="color:#a3be8c;">fd --type f</span><span>"
</span><span style="color:#b48ead;">export </span><span style="color:#bf616a;">FZF_CTRL_T_COMMAND</span><span>="</span><span style="color:#a3be8c;">fd --type f . --full-path </span><span>$</span><span style="color:#bf616a;">HOME</span><span>"
</span><span style="color:#b48ead;">export </span><span style="color:#bf616a;">FZF_ALT_C_COMMAND</span><span>="$</span><span style="color:#bf616a;">FZF_CTRL_T_COMMAND</span><span>"
</span></code></pre>
<p>This way, with all the previous setup in place, I can edit files in the
current working directory by running <code>gf</code> in the terminal or pressing this
sequence in vim, while still accessing all folders and files in my home by
pressing ALT+C and CTRL+T, which I find myself using less often, especially
with the <code>z</code> jumping utility available.</p>
<h2 id="global-local-search-in-vim-as-well">Global / local search in vim as well</h2>
<p>The above can also be ported to vim, so that I can easily access
either files under the current working directory or files
in the home folder globally, by inserting one additional line in <code>.vimrc</code>,
inspired by
<a href="https://github.com/junegunn/fzf.vim/issues/251#issuecomment-263042489">a comment from the fzf author</a>:</p>
<pre data-lang="vim" style="background-color:#2b303b;color:#c0c5ce;" class="language-vim "><code class="language-vim" data-lang="vim"><span style="color:#96b5b4;">nmap </span><span>&lt;silent&gt; gF :&lt;C-u&gt;Files ~&lt;CR&gt;
</span></code></pre>
<p>So there's already mentioned <code>gf</code> for the local files and now also a <code>gF</code>
for accessing all files in home folder. Enjoy!</p>
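<p>For completeness, the pair of mappings could look like this in
<code>.vimrc</code> - treat the <code>gf</code> line as a sketch, since the exact
local-search mapping comes from the earlier posts in this series:</p>

```vim
" Local: fuzzy-find files under the current working directory
nmap <silent> gf :<C-u>Files<CR>
" Global: fuzzy-find files anywhere under the home folder
nmap <silent> gF :<C-u>Files ~<CR>
```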
<p>This is a 53rd post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Smart global search for vim and fzf2021-05-01T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/smart-global-search-fzf-vim/<p>In the <a href="/blog/global-search-fzf-vim/">previous post</a> I have outlined that
there are some benefits to having a <code>fzf</code> command search everywhere in the
home folder instead of just the current working directory. Following the setup
there enables yanking and pasting lines from files stored in
distant places (in terms of traversal depth). But there is a
significant cost to this: it is now even harder to access relevant files
stored near each other, as the full path clutters the view, and the output
lists <em>all</em> the files in the home folder,
even including hidden files (files starting with a dot, dotfiles). Also,
with a very high file count, the speed decrease could start becoming a
factor too. This is a very unfortunate scenario, but it can be improved
upon significantly.</p>
<h2 id="ignoring-hidden-files-with-fzf">Ignoring hidden files with fzf</h2>
<p>It is of course possible to
<a href="https://askubuntu.com/a/318211/350681">exclude hidden files from the find command output</a>.
At the same time however, there are search commands that do that
automatically, without too much additional hassle, for instance
<a href="https://github.com/sharkdp/fd">fd</a>. Replacing our three lines in
<code>.zshrc</code> so that fzf ignores hidden files looks like this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#b48ead;">export </span><span style="color:#bf616a;">FZF_DEFAULT_COMMAND</span><span>="</span><span style="color:#a3be8c;">fd --type f</span><span>"
</span><span style="color:#b48ead;">export </span><span style="color:#bf616a;">FZF_CTRL_T_COMMAND</span><span>="</span><span style="color:#a3be8c;">fd --type f</span><span>"
</span><span style="color:#b48ead;">export </span><span style="color:#bf616a;">FZF_ALT_C_COMMAND</span><span>="</span><span style="color:#a3be8c;">fd --type f</span><span>"
</span></code></pre>
<p>By making sure we use <code>fd</code> instead of <code>find</code> in our fzf searches, we also
get another benefit for free.</p>
<h2 id="excluding-ignored-files">Excluding ignored files</h2>
<p>Apart from ignoring hidden files, <code>fd</code> also respects <code>.gitignore</code> files. So,
as a side effect of excluding hidden files from the output, we also exclude
ignored files at the same time. This is fortunate, as it greatly reduces the
clutter from folders like <code>node_modules</code>, as they are commonly ignored
in repositories.</p>
<p>Other language ecosystems apart from node, like go or rust, can also
create vast folder structures in the home folder, and we generally have
no need to edit or view any of the files inside them manually, as they
contain code for packages downloaded from the Internet.</p>
<p>Unfortunately, <code>fd</code> respects the <code>.gitignore</code> only when
<a href="https://github.com/sharkdp/fd/issues/418#issuecomment-470834615">used in an actual git repository</a>.
I am not sure why exactly this is so at this point, but the problem can still
be solved elegantly. To instruct <code>fd</code> to ignore specific folders outside of
a git repository, just list them in the <code>.fdignore</code>.</p>
<h2 id="common-folders-to-exclude">Common folders to exclude</h2>
<p>So, if you for example use <code>yay</code> command to access software packages from
the AUR, you probably have <code>~/go/</code> folder. A similar story goes for <code>paru</code>,
a successor/spin-off to <code>yay</code>, but this time you could find yourself
wanting to exclude results of <code>~/rust/</code> folder. Now place something
relevant in your home folder's <code>~/.fdignore</code>:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>/go/
</span><span>/rust/
</span></code></pre>
<p>The <code>.fdignore</code> file has the same syntax as <code>.gitignore</code>, meaning that a
trailing slash denotes a folder. A leading slash denotes that only an
entry at the top level should be ignored; here, the top level means
the same location as the <code>.fdignore</code> file itself, the home
folder. This is useful when there are folders with the same name deeper in
the tree that we do not want to ignore; for instance, <code>~/work/go/</code> would
not be ignored, but <code>~/go/</code> will be.</p>
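<p>Since <code>.fdignore</code> uses <code>.gitignore</code> syntax, the leading-slash rule can be
sanity-checked with <code>git check-ignore</code> in a scratch repository; this only
demonstrates the pattern semantics, not <code>fd</code> itself:</p>

```shell
# Scratch repo to show that "/go/" anchors to the top level:
# the top-level go/ is ignored, while work/go/ is not.
cd "$(mktemp -d)"
git init -q .
printf '/go/\n' > .gitignore
mkdir -p go work/go
git check-ignore -q go && echo "go/ is ignored"
! git check-ignore -q work/go && echo "work/go/ is not ignored"
```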
<p>This is a 52nd post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://github.com/junegunn/fzf#respecting-gitignore">https://github.com/junegunn/fzf#respecting-gitignore</a></li>
<li><a href="https://github.com/sharkdp/fd#excluding-specific-files-or-directories">https://github.com/sharkdp/fd#excluding-specific-files-or-directories</a></li>
</ul>
Global search for vim and fzf2021-04-30T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/global-search-fzf-vim/<p>This post is a continuation of both my
<a href="/blog/mnemonics-outside-vim-setup/">previous post about vim mnemonics</a> and
an another my
<a href="/blog/syncthing-can-sync-entire-phone/">post about Syncthing syncing the entire phone</a>.
I have found the combined result of both setups loosely described in those
posts to be an unexpected symbiosis that works very well for editing my
markdown files, especially quick phone notes, TODO lists and shopping
lists. The setup allows me to very quickly edit my notes or lists on the
phone using <a href="https://f-droid.org/en/packages/net.gsantner.markor">Markor</a>
editor, synced it into my laptop via
<a href="https://f-droid.org/en/packages/com.nutomic.syncthingandroid">Syncthing</a>
and then edited there via vim and synced back to the phone automatically.</p>
<p>The setup is decentralized but a little harder to initiate, while
at the same time very convenient once up and running, utilizing only the
tools I use daily. The phone allows me to quickly write some notes on the go
or follow the lists easily, while the laptop on the other hand allows for
more complex text editing, and the actual writing there is much faster,
especially when the fully-fledged physical keyboard is aided with vim.</p>
<p>Even though everything is working nicely and I am satisfied with the
overall workflow, there are some improvements I have found to be increasing
my productivity here even more.</p>
<h2 id="global-home-fzf-search">Global home FZF search</h2>
<p>I currently use <a href="https://sw.kovidgoyal.net/kitty/">kitty</a> as my
terminal emulator, and while it supports some pretty extreme keyboard-fu, it
has a somewhat steep learning curve. I have already noticed it should be able to
<a href="https://paul-nameless.com/mastering-kitty.html">select some terminal output only with the use of the keyboard</a>,
so I can copy the text without touching the mouse, but frankly, I have no
idea how to do that.</p>
<p>What's more, most of the time I do not even want to copy another program's
output; usually I just need to yank something from one file in vim into
another, while the files are located in very different places in
my home folder. Since the files created on the phone by Markor and synced
in via Syncthing are usually especially deep in the folder structure, I
definitely like to use a fuzzy finder to open these files. But the standard
way fzf works is to only search in the current folder.</p>
<p>The way around this is to start vim in the home folder, or use some
<a href="https://vim.fandom.com/wiki/Set_working_directory_to_the_current_file">built-in vim commands to change the directory</a>,
but there is also another way. Make sure to follow the vim setup described
in the post linked at the top - it's just two lines, but it is important,
especially the alias. Next, place
<a href="https://github.com/junegunn/fzf/issues/125#issuecomment-767512970">these three lines</a>
into your <code>.zshrc</code>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#b48ead;">export </span><span style="color:#bf616a;">FZF_DEFAULT_COMMAND</span><span>="</span><span style="color:#a3be8c;">find ~</span><span>"
</span><span style="color:#b48ead;">export </span><span style="color:#bf616a;">FZF_CTRL_T_COMMAND</span><span>="</span><span style="color:#a3be8c;">find ~</span><span>"
</span><span style="color:#b48ead;">export </span><span style="color:#bf616a;">FZF_ALT_C_COMMAND</span><span>="</span><span style="color:#a3be8c;">find ~</span><span>"
</span></code></pre>
<p>With these three lines on top of the previous two, each of the four shortcuts
CTRL+T, ALT+C, <code>fzf</code> and <code>gf</code>, with the last one either in the terminal or
as a key sequence in vim normal mode, will search the whole home folder.
This setup however brings up some new unpleasant problems. More fixes and
more problems coming, stay tuned.</p>
<p>This is a 51st post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Cheatsheet: acme.sh DNS mode2021-04-29T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/cheatsheet-acme-sh-dns/<p>Here's a compilation of useful commands that use a DNS-01 challenge to
issue a certificate using acme.sh client.</p>
<ul>
<li>Issue a certificate using an automatic DNS API mode with GoDaddy:</li>
</ul>
<p><code>acme.sh --issue --dns dns_gd --domain example.com</code></p>
<ul>
<li>Issue a wildcard certificate (denoted by an asterisk) using an automatic
DNS API mode with Namesilo:</li>
</ul>
<p><code>acme.sh --issue --dns dns_namesilo --domain *.example.com</code></p>
<ul>
<li>Issue a certificate using a DNS alias mode with Cloudflare:</li>
</ul>
<p><code>acme.sh --issue --dns dns_cf --domain example.com --challenge-alias alias-for-example-validation.com</code></p>
<ul>
<li>Issue a certificate using Namecheap DNS API while disabling an automatic
Cloudflare or Google DNS polling after the DNS record is added by
specifying a manual wait time (useful when concerned about privacy):</li>
</ul>
<p><code>acme.sh --issue --dns dns_namecheap --domain example.com --dnssleep 300</code></p>
<ul>
<li>Issue a certificate using a custom DNS API script, placed by default at
<code>/root/.acme.sh/dns_custom.sh</code> (useful with an error
<code>Can not find dns api hook</code> when the API is not yet supported upstream),
see also <a href="/blog/wildcard-certificate-acme-sh/">my other post</a>:</li>
</ul>
<p><code>acme.sh --issue --dns dns_custom --domain example.com</code></p>
<ul>
<li>Issue a certificate using a manual DNS mode, but without an automatic
certificate renewal (make sure to research this method before issuing):</li>
</ul>
<p><code>acme.sh --issue --dns --domain example.com --yes-I-know-dns-manual-mode-enough-go-ahead-please</code></p>
<p>This is a 50th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://github.com/acmesh-official/acme.sh/wiki/DNS-alias-mode">https://github.com/acmesh-official/acme.sh/wiki/DNS-alias-mode</a></li>
<li><a href="https://github.com/acmesh-official/acme.sh/wiki/DNS-manual-mode">https://github.com/acmesh-official/acme.sh/wiki/DNS-manual-mode</a></li>
<li><a href="https://github.com/acmesh-official/acme.sh/wiki/dnsapi">https://github.com/acmesh-official/acme.sh/wiki/dnsapi</a></li>
<li><a href="https://github.com/acmesh-official/acme.sh/wiki/dnssleep">https://github.com/acmesh-official/acme.sh/wiki/dnssleep</a></li>
<li><a href="https://letsencrypt.org/docs/challenge-types/">https://letsencrypt.org/docs/challenge-types/</a></li>
</ul>
A story about NFC on my ThinkPad T4702021-04-28T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/story-about-nfc-thinkpad-t470/<p>A day before I have been
<a href="/blog/gnupg-security-token-arrived/">experimenting with the GnuPG keycard</a>
and found out it is working well with the smart card reader on my trusty
T470.
<a href="https://www.floss-shop.de/en/security-privacy/smartcards/4/openpgp-smart-card-v3.4-mifare-desfire?c=40">The card I have ordered</a>
comes with a Mifare DESFire EV1 compatible RFID/NFC chip inside. Very
plainly, NFC is RFID with encrypted communication, increasing the
security of the humble RFID. The sad part is that neither the RFID nor the
NFC is in any way connected to the GnuPG circuitry inside the card. It is
basically a keycard that happens to have an NFC compatible tag included in
its form factor. I am not sure why such a product even exists, as it cannot
be used for OpenPGP purposes with a smartphone (readily equipped with
NFC), only with a computer (some models, including mine, have a
smartcard reader available). Maybe it is a precursor to some further
version of this card, where the functionality will finally be fused
together.</p>
<p>A keycard is a good way to store <strong>a copy</strong> of the OpenPGP keys, including
the private one, as that key cannot be retrieved out of it by any
conventional means. The key's readable backup should still be stored somewhere
safe and touched only on a few rare occasions, but a keycard is meant to not
pose much threat to security when lost or stolen, as the data on the
keycard is completely lost after inputting a wrong password a few times.
Note that this specific product can do much more than just OpenPGP, for
instance generate TOTP tokens, but for the sake of this article, I am
discussing the OpenPGP functionality exclusively here.</p>
<p>My plan was to use the card on both the laptop and the Android phone, where
OpenKeychain app would read the keys from the card over NFC, so they would
not be stored in the phone's filesystem. Nor the laptop's one, for that
matter, which is good as both of these devices could be lost. I know that
there is a Yubikey NEO that has both the USB and the NFC, so it can work on
both my devices, but I simply like the smartcard form-factor far better.</p>
<h2 id="state-of-the-nfc-with-thinkpads-on-linux">State of the NFC with ThinkPads on Linux</h2>
<p>Somehow I have stumbled upon the
<a href="https://www.reddit.com/r/thinkpad/comments/5v7px0/nfc_on_thinkpad_t470/dg4cinv?utm_source=share&utm_medium=web2x&context=3">thread on Reddit</a>
mentioning some T470 models ship with an NFC module as well. Looking around
on the ArchWiki forum, there is not too much information on using NFC.
Almost all I could find is a note, that NFC on Linux on ThinkPad X1 Carbon
is
<a href="https://forums.lenovo.com/t5/Redhat-Fedora-CentOS/X1-Carbon-Gen8-and-other-models-too-coming-with-Fedora-Linux/m-p/5011378?page=1#5042158">not supported as of August 2020</a>,
which is sad, again, as similar stance probably exists against other
ThinkPad models (NFC is
<a href="http://linux-thinkpad.10952.n7.nabble.com/X240-NFC-td21082.html#a21086">not officially supported on Linux</a>
as there is no user demand). There is one very important thread mentioning
NFC for ThinkPad P52s, however:</p>
<p><a href="https://github.com/nfc-tools/libnfc/issues/455">https://github.com/nfc-tools/libnfc/issues/455</a></p>
<p>The thread discusses specifically the same device my ThinkPad possesses,
the <code>058f:9540 Alcor Micro Corp. AU9540 Smartcard Reader</code>. Yeah, it is
handling the smartcard functionality, but at the same time it appears to
also handle the NFC. Lenovo
<a href="https://download.lenovo.com/pccbbs/mobiles_pdf/t470_hmm_en_sp40m11890_03.pdf">T470 Hardware Maintenance Manual</a>
describes the NFC module on page 58. A page above it also states that the
connector for the smartcard module is physically next to the NFC module's
(connectors 14 and 13 respectively). Their close proximity to each
other supports the idea that the same Alcor AU9540 chip is handling both.</p>
<h2 id="related-software">Related software</h2>
<p>The thread further mentions commands to interact with the NFC. Specifically
the command <code>nfctool</code> available from the <code>neard</code> service and <code>nfc-list</code>
coming from <code>libnfc</code> package. There is also a loosely somewhat related
<code>libfreefare-git</code> available from AUR, probably worth mentioning.</p>
<p>On my machine, when the <code>pcscd.service</code> is started, <code>nfc-list</code> gives the
following output:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>nfc-list uses libnfc 1.8.0
</span><span>NFC device: Alcor Micro AU9540 00 00 opened
</span></code></pre>
<p>Sadly, no tag was read. Yet since <code>libnfc</code> found some compatible device and
with the knowledge that on Linux it is probably not supported out of the
box, I have also tested booting Windows 10, installing the
<a href="https://pcsupport.lenovo.com/lu/en/products/laptops-and-netbooks/thinkpad-t-series-laptops/thinkpad-t470/downloads/driver-list/component?name=Software%20and%20Utilities">NXP NFC Driver for Windows 10</a>.
All the programs there reported something along the lines that no NFC
reader could be found.
<a href="https://download.lenovo.com/pccbbs/mobiles_pdf/t470_ug_en.pdf">ThinkPad T470 User Guide</a>
mentions on the page 81 that there is a setting in the BIOS under
<strong>Security</strong> > <strong>I/O Port Access</strong> > NFC device. I had no NFC device in the
list, supporting the idea the module might not be present on my model after
all.</p>
<h2 id="disassembly">Disassembly</h2>
<p>There is no better way of making sure the module is missing than seeing it
with my own eyes. Since I was already in the BIOS, I selected the option to
disconnect the internal battery, as this is a recommended step before
servicing.</p>
<p>After opening, I could see the connector belonging to the NFC module is
empty, so no further proof was needed. With the internal battery already
disconnected, I took that battery out as well, just to examine the
compartment for the whole NFC part, as it is physically located below the
internal battery.</p>
<h2 id="market-availability-and-the-future">Market availability and the future</h2>
<p>Searching around common electronics supply channels led me to the discovery
of a three-piece set under the label <strong>01AX745</strong>, costing around 35 EUR.
The set contains:</p>
<ol>
<li>A flex cable connecting the module to the motherboard</li>
<li>The NFC module itself</li>
<li>An antenna</li>
</ol>
<p>I am considering ordering the whole set, but there are multiple questions worth
discussing before I am sure it is even remotely worth the investment:</p>
<p><strong>Will the module work with Linux?</strong> Some users in the <code>libnfc</code> thread
reported it does. It might require some kernel patching, however. The exact
details are still scarce.</p>
<p><strong>If yes, will it serve me any purpose?</strong> Right now, I can think of some
automation, like performing some task when a tag is placed on the laptop. But this
does not appear terribly useful.</p>
<p><strong>Will it communicate with my phone?</strong> Some use cases on the laptop refer
to using NFC for communication with a phone. This might be
interesting, but I'm still not sure what kind of data exactly such
communication would transfer. For files, I have already set up Syncthing. I
have written about it on this blog extensively already, and I am pretty
happy with the setup. It is fast and does not need physical proximity,
only Wi-Fi. So, using NFC for file transfer would be useful only when
there is no wireless router around. Given the fact that a common charging
cable could be used for this, I would not bother with NFC. The maximum data
transfer rate of NFC is also slower than that of Bluetooth v2, which reaches
around 2.1 Mbit/s. Not worth the hassle.</p>
<p><strong>Could it be used for security purposes?</strong> File transfer dismissed,
communicating securely with a phone could be used as one of the factors in
Multi-Factor Authentication. I have already
<a href="/blog/are-otp-secrets-stored-plaintext/">written about it a little bit</a> as
well. I could imagine just placing my phone on the laptop instead of, for
instance, re-typing the 6 digits the phone is displaying into the
computer, as is still quite the norm these days with Time-based One-Time Passwords
(TOTP). However, given the fact the NFC driver is not even readily
supported on Linux, combined with the slow adoption rate of NFC on the
laptops overall, I suspect that it would require a great deal of hacking to
pull something like this off. As a side note, I did not do any research in
this area yet, maybe someone has solved it elegantly already.</p>
<p><strong>Could it work with NFC based GnuPG security token?</strong> This is the most
important question to me. There is a
<a href="https://shop.fidesmo.com/products/fidesmo-card">Fidesmo Card</a> and
<a href="https://shop.fidesmo.com/products/fidesmo-card-2-0">Fidesmo Card 2.0</a> (not
sure about the difference at this point). But it is in my beloved smartcard
format. Fidesmo reportedly works with OpenKeychain. If not already done,
porting the code to work on the laptop should not be too hard, if the
actual chipset requirements are met. Using an OpenPGP keycard costing
around 15 EUR on both laptop and a phone at the same time appeals to me.
Not to mention other features such devices offer, including, but not
limited to, secure Bitcoin storage, U2F two-factor authentication, PGP email
encryption, secure One-Time Password generation and git commit signing, which
<a href="/blog/automatically-signed-github-commits-puzzling/">I have discussed here</a>.</p>
<p>I will do some more research before deciding about ordering, but at this
point I am very excited.</p>
<p>This is a 49th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
GnuPG security token has arrived2021-04-27T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/gnupg-security-token-arrived/<p>I have started learning more about the GnuPG security tokens fairly
recently in the
<a href="/blog/automatically-signed-github-commits-puzzling/">post about GitHub automatic commit signing</a>.
Links in that post provide for some great read too, go check it out if you
are interested. A GnuPG security token is a device that stores keys under
the condition that they cannot be retrieved. This approach is an
alternative to storing the keys, specifically private keys, in the
filesystem. Theft of a device containing retrievable private keys can have
catastrophic consequences. Losing a security token, on the other hand, is
usually less severe, because the token requires a security PIN. If a wrong
PIN is entered a few times, the data stored on it is lost. There are more
pros and cons overall, but that is the theory in general.</p>
<p>These days, the security tokens are available in many shapes and forms,
from the classic USB dongle to the
<a href="https://fidesmo.com/wearables/">bracelets, pens and watches</a>, with the
smartcard form factor in-between. I have chosen the smartcard. The GnuPG
security token in the smartcard form factor is also referred to as
<strong>keycard</strong>, I am sticking with that term as well.</p>
<p>I have chosen the keycard because of these factors:</p>
<ul>
<li>My current daily driver, ThinkPad T470 has a smartcard interface</li>
<li>They come with a contactless interface, so interaction with a phone is
streamlined</li>
<li>Keycard fits nicely into the wallet among other things in a similar
category, for example credit cards</li>
</ul>
<p>I was not in favor of a token that goes onto my physical keyring. Keys tend
to damage and scratch any plastic gadgets hanging around them. A keycard, on
the other hand, takes up almost no additional visible space in the wallet.
It also does not attract attention the way an electronic device among metal
keys does, because it is not visibly exposed. If someone takes hold of my
wallet, I have a problem anyhow. But just another card inside a wallet
simply does not spark someone else's attention the way shiny physical
keyring items do. Physical keys are also more readily shared with family
members than wallets are. The token could be mistaken for a humble USB key
and inadvertently blocked by a curious family member. But these all might
just be my opinions. Use what suits your preferences best.</p>
<p>I have ordered the
<a href="https://www.floss-shop.de/en/security-privacy/smartcards/4/openpgp-smart-card-v3.4-mifare-desfire">OpenPGP Smart Card V3.4 + MiFare DESFire</a>
keycard from Floss-Shop.de. Not entirely mainstream, but it still looks
quite popular. It also has Mifare, so it would be able to interact with my
phone's <a href="https://www.openkeychain.org/">OpenKeychain</a> via NFC. Or so I thought,
more on that later.</p>
<h2 id="t470-smartcard-interface">T470 smartcard interface</h2>
<p>The ArchWiki's
<a href="https://wiki.archlinux.org/index.php/GnuPG#Smartcards">GnuPG#Smartcards</a>
page recommends the following steps:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> pcsclite ccid
</span></code></pre>
<p>Afterwards, start and enable the service:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> systemctl enable pcscd.service</span><span style="color:#bf616a;"> --now
</span></code></pre>
<p>There are also additional commands described in
<a href="https://wiki.archlinux.org/index.php/Smartcards">Smartcards</a> section.
Check the card is accessible:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">gpg --card-status
</span></code></pre>
<p>I have found no problems here. For the record, my laptop's smartcard device
is <code>058f:9540 Alcor Micro Corp. AU9540 Smartcard Reader</code>.</p>
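<p>For scripting around the card, the reader line can be pulled out of the
status output. A small sketch of mine (the helper and the exact
<code>Reader ...:</code> line format are assumptions based on my gpg version,
not an official GnuPG interface):</p>

```shell
# card_reader: hypothetical helper extracting the reader name from
# `gpg --card-status` output piped on stdin.
card_reader() {
  sed -n 's/^Reader *\.*: *//p'
}

# Intended usage: gpg --card-status | card_reader
```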
<h2 id="openkeychain-test">OpenKeychain test</h2>
<p>My excitement went down with the phone test. Opening the OpenKeychain app
and sliding the card over its back side, the app responded with the
following:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Error: Initialization failed!
</span></code></pre>
<p>OpenKeychain, as of version 5.7.3, lists security tokens from Fidesmo,
YubiKey NEO and Sigilance, followed by an ellipsis (..., the three dots)
suggesting that more similar products are compatible. I could also not
find a place offering Sigilance products anymore, and their domain seems to
be for sale already.</p>
<p>Searching for the error message shown above led me to a
<a href="https://github.com/open-keychain/open-keychain/issues/1471#issuecomment-128115722">thread</a>
specifically discussing the fact that the product I bought (and
probably many more) is not compatible with OpenKeychain.</p>
<p>I could have spent a little more time reading the
<a href="https://www.floss-shop.de/en/security-privacy/smartcards/4/openpgp-smart-card-v3.4-mifare-desfire">product description</a>,
as it clearly states the following:</p>
<blockquote>
<p>The OpenPGP function can not be used via NFC / RFID. For this, a chip
card reader for contact-related cards is necessary in any case.</p>
</blockquote>
<p>Shame on me! I will probably have to buy another security token in the
future. For now, I will at least learn how to use this one with all the
underlying concepts until the absolute necessity for having the keys
accessible on the phone arises. Obviously, I do not want to resort to
storing the keys on the phone directly. These security tokens are made for
a specific reason, after all.</p>
<p>The Mifare interface on the keycard works well. I have tested it with
<a href="https://f-droid.org/en/packages/au.id.micolous.farebot/">Metrodroid</a> app.
However, I have no idea how to utilize it right now. Maybe some useful
ideas will come up later. The next step for me is to learn to utilize
<code>gpg --card-edit</code> to make use of the core keycard's features.</p>
<p>This is a 48th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="http://www.g10code.de/p-card.html">http://www.g10code.de/p-card.html</a></li>
<li><a href="https://github.com/drduh/YubiKey-Guide">https://github.com/drduh/YubiKey-Guide</a></li>
<li><a href="https://wiki.debian.org/Smartcards/OpenPGP">https://wiki.debian.org/Smartcards/OpenPGP</a></li>
<li><a href="https://wiki.gnupg.org/SmartCard">https://wiki.gnupg.org/SmartCard</a></li>
</ul>
Nginx on Arch using Ansible pt.32021-04-26T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/nginx-arch-using-ansible-pt3/<p>See also <a href="/blog/nginx-with-acme-sh-arch/">part 1</a> and
<a href="/blog/nginx-arch-using-ansible-pt2/">part 2</a>.</p>
<p>Digging deeper after successfully making use of the template for
<code>nginx.conf.j2</code> in the previous post, I tried to utilize the virtual hosts
template available in <code>vhost.j2</code> by copying the template and referencing
it locally (the same thing I did with the <code>nginx.conf</code> template
mentioned above), like so:</p>
<pre data-lang="jinja" style="background-color:#2b303b;color:#c0c5ce;" class="language-jinja "><code class="language-jinja" data-lang="jinja"><span>nginx_vhosts:
</span><span> - template: "{{ </span><span style="color:#bf616a;">playbook_dir </span><span>}}/templates/vhost.j2"
</span></code></pre>
<p>This however resulted in the following error when the playbook was run:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>"msg": "AnsibleUndefinedVariable: 'nginx_listen_ipv6' is undefined"
</span></code></pre>
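<p>A workaround I would try first (untested sketch; whether
<code>nginx_listen_ipv6</code> is the only variable the template misses is an
assumption on my part) is defining the variable the template expects next
to the local template reference:</p>

```yaml
# Sketch only: supply the variable the vhost template references, so the
# "nginx_listen_ipv6 is undefined" error cannot trigger.
nginx_listen_ipv6: true
nginx_vhosts:
  - template: "{{ playbook_dir }}/templates/vhost.j2"
```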
<p>At the same time, I stumbled upon this quite elaborate
<a href="https://www.reddit.com/r/selfhosted/comments/btdvny/anyone_using_ansible/eoxz4ri?utm_source=share&utm_medium=web2x&context=3">thread</a>
about Ansible, which quite amusingly states:</p>
<blockquote>
<p><strong>MaxHedrome</strong></p>
<p>I always say the same thing, check out Geeelingguys github. Star and
contribute to his repos</p>
</blockquote>
<p>So, quite naturally,
<a href="https://github.com/geerlingguy/ansible-role-nginx/issues/220">I did</a>.</p>
<h2 id="reasons-for-a-local-template">Reasons for a local template</h2>
<p>I decided to copy and reference the local template again instead of using
the
<a href="https://github.com/geerlingguy/ansible-role-nginx/blob/master/templates/vhost.j2">upstream one</a>
for reasons, some of which are similar to ones I described in the previous
post:</p>
<ul>
<li>The <code>vhost.j2</code> template appears to contain a bug that might not get
patched upstream</li>
<li>It allows for easier customization, when the required variables are not
exposed</li>
<li>Due to some changes in Ansible, the documented template extending seems
to be no longer
<a href="https://github.com/ansible/ansible/issues/20442#issuecomment-444452824">supported</a></li>
</ul>
<p>I felt like sticking to upstream was not even possible, so the custom local
copy with minor customizations was chosen as a path of least resistance.</p>
<p>This is a 47th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Nginx on Arch using Ansible pt.22021-04-25T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/nginx-arch-using-ansible-pt2/<p>This is an update to the <a href="/blog/nginx-arch-using-ansible/">post</a> I made a
week earlier. The issue got no reaction whatsoever on
<a href="https://github.com/geerlingguy/ansible-role-nginx/issues/219">GitHub</a> so
far.</p>
<p>In short, the original issue described the fact that running the
associated Ansible Galaxy
<a href="https://galaxy.ansible.com/geerlingguy/nginx/">role</a> would fail on every
subsequent run on Arch based systems because of a duplicate PID
directive. I was calling for some kind of fix to be adopted upstream.</p>
<p>Although the role has a mature templating mechanism,
<a href="https://github.com/geerlingguy/ansible-role-nginx#overriding-configuration-templates">documented</a>
fairly well, and we are highly encouraged to make proper use of it, I
stated in the post and in the issue thread that it is not sufficient to
overcome this bug, so I discussed in the post roughly how the role
can be forked and the template modified.</p>
<h2 id="types-hash-max-size-option">Types hash max size option</h2>
<p>Over time, I discovered another issue regarding another Nginx
configuration option, named <code>types_hash_max_size</code>. The issue
manifests itself as an Nginx warning, shown via <code>nginx -t</code> or in the
systemd journal:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>[warn] could not build optimal types_hash, you should increase either types_hash_max_size: 1024 or types_hash_bucket_size: 64; ignoring types_hash_bucket_size
</span></code></pre>
<p>Of course the solution, apart from the warning itself being pretty
helpful and verbose already, is also documented on the
<a href="https://wiki.archlinux.org/index.php/Nginx#Warning:_Could_not_build_optimal_types_hash">Arch wiki</a>:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>/etc/nginx/nginx.conf
</span></code></pre>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>http {
</span><span> types_hash_max_size 4096;
</span><span> server_names_hash_bucket_size 128;
</span><span> ...
</span><span>}
</span></code></pre>
<p>As with the original pid directive issue, the role templating variables do
not cover the solution directly, as there is only a
<code>nginx_server_names_hash_bucket_size</code> template variable and no predefined
one for <code>types_hash_max_size</code>. Compared to the problematic pid directive,
there are some important differences:</p>
<ul>
<li>This problem will not prevent subsequent role runs</li>
<li>This problem does not require <em>removal</em> of lines from the template, only
<em>addition</em> (backward compatible)</li>
<li>This problem can be easily solved via <code>nginx_extra_http_options</code> variable</li>
</ul>
<p>Let's explore the third option.</p>
<h2 id="extra-http-options-role-variable">Extra http options role variable</h2>
<p>If you are not yet used to the Jinja2 templating mechanism, or missed the
entry in the role documentation: adjusting both the hash size and the
bucket size via template variables to match the values recommended above is
possible in the following fashion, as a minimal playbook example:</p>
<pre data-lang="jinja" style="background-color:#2b303b;color:#c0c5ce;" class="language-jinja "><code class="language-jinja" data-lang="jinja"><span>---
</span><span>- hosts: my_hosts
</span><span> roles:
</span><span> - { role: geerlingguy.nginx }
</span><span> vars:
</span><span> nginx_conf_template: "{{ </span><span style="color:#bf616a;">playbook_dir </span><span>}}/templates/nginx.conf.j2"
</span><span> nginx_server_names_hash_bucket_size: "128"
</span><span> nginx_extra_http_options: |
</span><span> types_hash_max_size 4096;
</span></code></pre>
<p>The default bucket size here is 64, and the line increasing it to 128 can
even be omitted, as it does not prevent any immediate warning, but it could
prevent runtime warnings such as
<code>client intended to send too large body</code> later.</p>
<p>Also, this setup puts these two options quite far apart in the resulting
configuration file, which is not optimal, but with Ansible one should not
really touch the resulting files anyway. Still, placing them together would
help anyone later reading the file.</p>
<h2 id="why-not-just-edit-already-forked-template">Why not just edit already forked template?</h2>
<p>If I wanted both options placed together in the resulting file, I
would need to edit the template by either removing the
<code>nginx_server_names_hash_bucket_size</code> line entirely or introducing another
variable for <code>types_hash_max_size</code>, because just putting the bucket size
directive inside <code>nginx_extra_http_options</code> obviously results in an emergency:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>[emerg] 6175#6175: "server_names_hash_bucket_size" directive is duplicate in /etc/nginx/nginx.conf:40
</span></code></pre>
<p>However, I decided not to edit the template further. Until there is an
upstream change, I still have to maintain my own forked version of the
<code>templates/nginx.conf.j2</code> referenced in the above playbook example in the
<code>nginx_conf_template</code> variable, because of the pid issue. Forking is always
a double-edged sword. It allows solving the problem at hand immediately, but
at the same time it requires more work pulling in the upstream changes, of
which the most important are those related to bug fixes.</p>
<p>I decided to keep the changes in the forked template to an absolute minimum,
as that increases the chances they get adopted upstream, and instead of adding
or removing variables or blocks from the forked template, I am using the
method described in the above playbook as-is for now.</p>
<p>This is a 46th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
White hat hacker contacted me2021-04-24T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/white-hat-hacker-contacted-me/<p>Today I was contacted by a person claiming to be a white hat hacker,
reporting a vulnerability and hoping for a bounty for his ethical
disclosure.</p>
<p>I am not a business owner (at least not yet) to be able to pay a bounty
from the revenue and the vulnerability reported looks like something that
even big companies struggle with.</p>
<p>At first I thought it was just your common spam. And it basically is, as I
did not request such an email. But something kept me reading that email
longer. Finally, I was dissuaded from the simple act of deleting it, as the
technical details presented in the email fit together, including
the steps to reproduce. In fact, the insights were quite valuable.</p>
<p>A very similar topic has already been discussed on an
<a href="https://security.stackexchange.com/questions/203521/how-to-proceed-with-a-white-hat-hacker-claiming-a-vulnerability">Information Security</a>
StackExchange page, and I would consider the details there an interesting
read, even without being affected. Users there confirmed my findings about
the real added value of receiving such a notice from an ethical hacker.</p>
<p>I think there will be more and more security related <em>stuff</em> happening
in the future, whether for individuals or for businesses. But on the
other hand, it is another thing to keep looking at, scrambling to fit into
our already tight schedules. I do not think it is easy to find the time and
resources to treat patching security issues as one of our primary tasks,
unless we have already established ourselves on the market quite firmly.</p>
<p>It was easier to ignore security related threats in the past, but with the
recent spike in
<a href="https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/the-state-of-ransomware-2020-s-catch-22">ransomware</a>
related attacks, the urge to act is only getting more pressing every
passing day.</p>
<p>This is a 45th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Nlbwmon: per-client bandwidth monitor for OpenWRT2021-04-23T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/nlbwmon-per-client-bandwidth-monitor-openwrt/<p>A quick update about my OpenWRT router. I have installed <code>luci-app-nlbwmon</code>
on it. It neatly helps keep track of who consumed which chunk of
bandwidth when using a metered connection. This usually happens when
traveling as a pair. Sometimes friends pay a visit too. Here's how it looks
in LuCI:</p>
<p><img src="https://peterbabic.dev/blog/nlbwmon-per-client-bandwidth-monitor-openwrt/luci-app-nlbwmon.png" alt="An example view at the Netlink Bandwidth Monitor in LuCI" /></p>
<p>It also comes with some configuration settings, with one of them
interesting enough to note here:</p>
<blockquote>
<p><strong>Bandwidth Monitor</strong> > <strong>Configuration</strong> > <strong>Advanced Settings</strong> >
Commit interval</p>
</blockquote>
<p>The options available are:</p>
<ul>
<li>24h - least flash wear at the expense of data loss risk</li>
<li>12h - compromise between risk of data loss and flash wear</li>
<li>10m - frequent commits at the expense of flash wear</li>
<li>60s - commit minutely, useful for non-flash storage</li>
</ul>
<p>The first option, 24h, is the default, which makes sense for most
consumer-grade routers. These settings imply that when the router is
powered down <em>before</em> data is committed to storage, the data is lost.</p>
<p>The configuration also allows choosing the <em>Database directory</em> in the
same configuration tab. I can imagine that by mounting some NAS over the
network and modifying this path, flash wear on the router could be
prevented while simultaneously mitigating data loss, but I have not tried
this idea yet.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://github.com/openwrt/luci/tree/master/applications/luci-app-nlbwmon">https://github.com/openwrt/luci/tree/master/applications/luci-app-nlbwmon</a></li>
<li><a href="https://openwrt.org/docs/guide-user/network/wan/wwan/bandwith_caps_gb_quota">https://openwrt.org/docs/guide-user/network/wan/wwan/bandwith_caps_gb_quota</a></li>
<li><a href="https://openwrt.org/docs/guide-user/services/network_monitoring/bwmon">https://openwrt.org/docs/guide-user/services/network_monitoring/bwmon</a></li>
<li><a href="https://openwrt.org/packages/pkgdata/luci-app-nlbwmon">https://openwrt.org/packages/pkgdata/luci-app-nlbwmon</a></li>
</ul>
<p>This is a 44th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Comprehensive guide to pkgfile2021-04-22T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/comprehensive-guide-pkgfile/<p>Let's consider some examples of how to get package file names and paths
on an Arch based system. Let's start with the bread-and-butter tool, the
pacman package manager.</p>
<h2 id="determine-which-package-s-own-a-file">Determine which package(s) own a file</h2>
<p>To find out which package owns a file, two very similar approaches can be
applied. The first one assumes the presence of the up-to-date database via
<code>pacman -Fy</code>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pacman -F</span><span> path/to/package/file </span><span style="color:#65737e;"># pacman --files
</span></code></pre>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>path/to/package/file is owned by repository/package-name 1.2.3-1
</span><span>path/to/package/file is owned by repository/another-package 2.3.4-2
</span></code></pre>
<p>The second one, as with other <code>-Q</code> related parameters, requires the package
and the queried file to be installed on the system:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pacman -Qo</span><span> /path/to/package/file </span><span style="color:#65737e;"># pacman --query --owns
</span></code></pre>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>/path/to/package/file is owned by package-name 1.2.3-1
</span></code></pre>
<p>The output difference concerning the leading slash, denoting a fully
qualified absolute path with the <code>-Qo</code> option, is present here as well.
Additionally, the <code>-Qo</code> option only accepts the full path to the queried
file and only returns a single matching locally installed package, if
present.</p>
<p>The <code>-F</code> option on the other hand accepts either a file name or the full
path to the file. It returns all the matching packages, also denoting which
package repository the file in question belongs to, like core, extra or
community. Additionally, it informs the user whether the particular package
is already installed on the system and resorts to prettier output
formatting when only the file name is supplied.</p>
<h2 id="listing-files-in-a-package">Listing files in a package</h2>
<p>Again, two similar approaches can be applied utilizing pacman. The first
one assumes the database is already up-to-date via <code>pacman -Fy</code>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pacman -Fl</span><span> package </span><span style="color:#65737e;"># pacman --files --list
</span></code></pre>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>package path/to/package/file1
</span><span>package path/to/package/file2
</span></code></pre>
<p>And the second one assumes the package is installed on the system:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pacman -Ql</span><span> package </span><span style="color:#65737e;"># pacman --query --list
</span></code></pre>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>package /path/to/package/file1
</span><span>package /path/to/package/file2
</span></code></pre>
<p>Both outputs are similar, with a subtle difference: the leading slash.
The <code>-Ql</code> option outputs it, while the <code>-Fl</code> doesn't. Don't
ask me why; I have yet to find out.</p>
<h3 id="my-aliases">My aliases</h3>
<p>I felt so confident with the pacman options and used them so often that
they gradually found their place among my aliases:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#96b5b4;">alias </span><span style="color:#8fa1b3;">pF</span><span>='</span><span style="color:#a3be8c;">pacman -F</span><span>'
</span><span style="color:#96b5b4;">alias </span><span style="color:#8fa1b3;">Fl</span><span>='</span><span style="color:#a3be8c;">pacman -Fl</span><span>'
</span><span style="color:#96b5b4;">alias </span><span style="color:#8fa1b3;">Ql</span><span>='</span><span style="color:#a3be8c;">pacman -Ql</span><span>'
</span><span style="color:#96b5b4;">alias </span><span style="color:#8fa1b3;">Qo</span><span>='</span><span style="color:#a3be8c;">pacman -Qo</span><span>'
</span></code></pre>
<p>I think you have some pacman, or even yay related aliases defined as well,
but it is mostly a matter of personal preference.</p>
<h2 id="listing-executables-of-a-package">Listing executables of a package</h2>
<p>On some occasions, even more useful than listing all the files the package
provides is to list its executable files only. Utilizing the above, there
are again at least two options that I find myself using regularly,
depending on the conditions already described. Using either files database:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pacman -Fl</span><span> package | </span><span style="color:#bf616a;">grep</span><span> bin
</span></code></pre>
<p>Or querying the local packages:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pacman -Ql</span><span> package | </span><span style="color:#bf616a;">grep</span><span> bin
</span></code></pre>
<p>However, these two approaches are not ideal, as they also list directories
or non-executable files containing the <code>bin</code> substring anywhere in the path.
This could be mitigated with some more bash-fu hacking, but for quick
probing, simple grepping has been sufficient for all my use cases so far.</p>
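<p>One way to tighten the grep (my own sketch, not a pacman feature) is to
keep only entries whose path points at a file directly inside a
<code>bin/</code> or <code>sbin/</code> directory:</p>

```shell
# list_bin_entries: hypothetical filter for `pacman -Ql package` output.
# Keeps lines ending with a filename directly under a bin/ or sbin/
# directory, dropping directories (trailing slash) and paths that merely
# contain "bin" somewhere, such as /usr/share/bindings/.
list_bin_entries() {
  grep -E '/s?bin/[^/]+$'
}

# Intended usage: pacman -Ql package | list_bin_entries
```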
<h2 id="there-s-a-another-way">There's another way</h2>
<p>The above examples felt robust enough that I thought I had pretty much
nailed them, until I stumbled across a
<a href="https://bbs.archlinux.org/viewtopic.php?pid=1074282#p1074282">comment</a> on
the Arch Linux forum that changed my perspective. The humble post states:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pkgfile</span><span> netstat
</span></code></pre>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>core/net-tools
</span></code></pre>
<p>For me this was a revelation, as I was not aware that such a tool was
available. I instantly tried to use the <strong>pkgfile</strong> command, but it was not
present on the system. Armed with the knowledge already presented in the
previous sections, it was not a problem to find out how to obtain it:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pacman -F</span><span> pkgfile
</span></code></pre>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>extra/pkgfile 21-2
</span><span> usr/bin/pkgfile
</span><span> usr/share/bash-completion/completions/pkgfile
</span></code></pre>
<p>Yeah, the package name is the name of the executable. Very common, but not
always the case. I could have just tried to install it straight away:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pacman -S</span><span> pkgfile
</span></code></pre>
<p>Let's explore its options.</p>
<h2 id="enter-pkgfile">Enter pkgfile</h2>
<p>The manual, now available via <code>man pkgfile</code> since the package is
present, offers alternatives to the mentioned pacman options:</p>
<ul>
<li><code>pkgfile -u</code> or <code>pkgfile --update</code> is equivalent to <code>pacman -Fy</code></li>
<li><code>pkgfile -s</code> or <code>pkgfile --search</code> or simply <code>pkgfile</code> is equivalent to
<code>pacman -F</code></li>
<li><code>pkgfile -l</code> or <code>pkgfile --list</code> is equivalent to <code>pacman -Fl</code></li>
</ul>
<p>The difference is that the standard output of pkgfile is far less verbose,
mostly omitting any natural-language words, which is not a problem, as the
relevant data is printed straight to the point. Some details, such as
package versions, can be included using the <code>--verbose</code> option, as is
usually the norm.</p>
<p>What's more, pkgfile provides some neat options I did not previously know
to be readily available, greatly mitigating the problems described in the
previous section discussing listing binaries. To list the files a package
provides located only within <code>bin</code> and <code>sbin</code> folders, run:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pkgfile -lb</span><span> package </span><span style="color:#65737e;"># pkgfile --list --binaries
</span></code></pre>
<p>Directories are omitted by default. To list them instead, run:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pkgfile -ld</span><span> package </span><span style="color:#65737e;"># pkgfile --list --directories
</span></code></pre>
<p>I consider learning pkgfile a great addition to my system management
toolbelt. I have also found that pkgfile is a little bit faster than pacman
in this respect, almost instant. Feel free to experiment yourself.</p>
<p>This is a 43rd post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://man.archlinux.org/man/core/pacman/pacman.8.en">https://man.archlinux.org/man/core/pacman/pacman.8.en</a></li>
<li><a href="https://man.archlinux.org/man/pkgfile.1">https://man.archlinux.org/man/pkgfile.1</a></li>
<li><a href="https://wiki.archlinux.org/index.php/Pkgfile">https://wiki.archlinux.org/index.php/Pkgfile</a></li>
</ul>
Wildcard certificate with acme.sh2021-04-21T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/wildcard-certificate-acme-sh/<p>This post is a sequel to my
<a href="/blog/nginx-with-acme-sh-arch/">previous post</a>, which demonstrated how
to set up HTTPS for Nginx by obtaining a certificate via a 3rd party client
called acme.sh. There is also some basic underlying theory about these
terms there. Consider reading it if you feel uncertain.</p>
<p>Start by creating a <em>wildcard</em> DNS type A record by entering an asterisk
(*) in place of a subdomain. Considering the domain <strong>example.com</strong>
again, the record should hold the <strong>*.example.com</strong> value.</p>
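<p>If the zone file is managed directly rather than through a registrar's web
UI, the record could look like this (the IP address is a placeholder from
the documentation range, and the TTL is arbitrary):</p>

```
*.example.com.   3600   IN   A   203.0.113.10
```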
<p>Trying a wildcard with ALPN mode:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">acme.sh --issue --alpn -d </span><span>"</span><span style="color:#a3be8c;">*.example.com</span><span>"
</span></code></pre>
<p>Ends up with the error message:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>The supported validation types are: dns-01, but you specified: tls-alpn-01
</span></code></pre>
<p>We know that <strong>tls-alpn-01</strong> is the ALPN mode. What's the meaning behind
the <strong>dns-01</strong> mode?</p>
<h2 id="dns-01-challenge">DNS-01 challenge</h2>
<p>There's a reason why acme.sh complains about an unsupported validation type. A
validation type is defined as a <em>challenge</em> in the ACME standard; the
acme.sh documentation refers to it as a <em>mode</em>. The ALPN mode (like the
standalone, webroot, or even Nginx/Apache modes) works by proving we control
the host through temporary changes on it that can be securely verified from
the outside. Outside in this scenario means LetsEncrypt: by performing this
verification, LetsEncrypt has proof that we are in fact in control of the
domain the certificate is issued to.</p>
<p>Now the DNS-01 challenge does this slightly differently. It requires adding
a TXT record to the domain. During the challenge, the TXT record is read by
LetsEncrypt (if it had enough time to propagate) and if correct, the
certificate is issued.</p>
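<p>As an illustration, the record LetsEncrypt queries lives under the <code>_acme-challenge</code> label; the value below is a placeholder for the token acme.sh prints during the challenge:</p>

```
_acme-challenge.example.com.    300    IN    TXT    "TOKEN-PRINTED-BY-ACME-SH"
```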
<h2 id="using-dns-api-can-be-dangerous">Using DNS API can be dangerous</h2>
<p>The above step can be automated, as most domain registrars today provide
API access to manipulate the domain records on their nameservers
programmatically. Before continuing further, make sure you understand the
risks involved.</p>
<p><strong>Warning:</strong> depending on your DNS provider, it can be incredibly dangerous
to automate LetsEncrypt renewal via DNS-01 challenges, as the API keys must
be available in plaintext and most providers offer too much control via
their APIs. A compromised machine could result in all host records being
changed, or (with some providers) a change in domain registrant details or
even an outright domain transfer.</p>
<p>Ways to mitigate this are:</p>
<ul>
<li>Do not store the auth token, and trigger the renewal manually.</li>
<li>Run the renewal on a machine that is not on the public Internet, and
SFTP/SCP the certificates onto your server.</li>
<li>Run an instance of acme-dns, delegate your _acme-challenge to it, and
automate the process with that.</li>
</ul>
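<p>The acme-dns mitigation boils down to delegating only the challenge label to a dedicated, narrowly scoped DNS server, so a leaked key can touch nothing but that one TXT record. The target hostname below is a placeholder for the name generated during acme-dns registration:</p>

```
_acme-challenge.example.com.    IN    CNAME    RANDOM-UUID.auth.acme-dns.example.
```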
<p>Consider yourself warned and avoid keeping this mode automated and
unmitigated on business-critical services.</p>
<h2 id="wildcard-dns-api-mode">Wildcard DNS API mode</h2>
<p>We use <a href="https://porkbun.com/api/json/v3/documentation">porkbun.com API</a> for
this example:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#b48ead;">export </span><span style="color:#bf616a;">PORKBUN_API_KEY</span><span>="</span><span style="color:#a3be8c;">...</span><span>"
</span><span style="color:#b48ead;">export </span><span style="color:#bf616a;">PORKBUN_SECRET_API_KEY</span><span>="</span><span style="color:#a3be8c;">...</span><span>"
</span><span style="color:#bf616a;">acme.sh --issue --dns</span><span> dns_porkbun</span><span style="color:#bf616a;"> -d </span><span>"</span><span style="color:#a3be8c;">*.example.com</span><span>"
</span></code></pre>
<p>If there is an error stating that the hook is not available (because it was
not included in the package for instance):</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Can not find dns api hook for: dns_porkbun
</span></code></pre>
<p>Try downloading the required hook from the master branch into
<code>/root/.acme.sh</code>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">wget -P</span><span> /root/.acme.sh/ https://raw.githubusercontent.com/acmesh-official/acme.sh/master/dnsapi/dns_porkbun.sh
</span></code></pre>
<p><strong>Tip:</strong> the API keys are stored in <code>.acme.sh/account.conf</code>, should the
need to delete them arise. Consider also revoking the keys and disabling
the API access as safer options; once the keys are exposed, there is
very little guarantee that deleting them solves the problem.</p>
<p>The remaining steps of the setup are identical to the setup described in the
post linked at the beginning.</p>
<p>This is the 42nd post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://community.letsencrypt.org/t/wildcard-domain-step-by-step/58250/5">https://community.letsencrypt.org/t/wildcard-domain-step-by-step/58250/5</a></li>
<li><a href="https://github.com/acmesh-official/acme.sh/issues/1571#issuecomment-384999814">https://github.com/acmesh-official/acme.sh/issues/1571#issuecomment-384999814</a></li>
<li><a href="https://github.com/acmesh-official/acme.sh/wiki/dnsapi#130-using-the-porkbun-api">https://github.com/acmesh-official/acme.sh/wiki/dnsapi#130-using-the-porkbun-api</a></li>
<li><a href="https://kb.porkbun.com/article/94-what-is-a-wildcard-dns-record">https://kb.porkbun.com/article/94-what-is-a-wildcard-dns-record</a></li>
<li><a href="https://kb.virtubox.net/knowledgebase/how-to-issue-wildcard-ssl-certificate-with-acme-sh-nginx/">https://kb.virtubox.net/knowledgebase/how-to-issue-wildcard-ssl-certificate-with-acme-sh-nginx/</a></li>
<li><a href="https://letsencrypt.org/docs/challenge-types/#tls-alpn-01">https://letsencrypt.org/docs/challenge-types/#tls-alpn-01</a></li>
</ul>
Nginx with acme.sh on Arch2021-04-20T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/nginx-with-acme-sh-arch/<p>The modern Internet is full of encryption. In many ways, using encryption is
still optional, although non-encrypted communication of any form is getting
rarer every day. Several factors contribute to this trend. As a
specific example, mainstream browsers ship with forced HTTPS mode
hard-coded for selected top-level domains, like <code>.app</code> or <code>.dev</code>.</p>
<p>To make HTTPS work, a browser and a webserver first need to perform a key
exchange. This exchange is referred to as a TLS handshake. After a
successful handshake, the browser and the webserver can communicate
securely, meaning anyone eavesdropping on the communication can only see
garbage, unless they can actually decrypt the communication.</p>
<p>For a webserver to be able to perform the TLS handshake, it needs a
certificate, which is used for public key encryption. The certificate was
traditionally bought from a Certificate Authority (CA), until the
non-profit CA called LetsEncrypt started providing certificates for free,
so we can all have nice things (secure communication with the webserver).</p>
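<p>A minimal local sketch of what such a certificate carries, using a throwaway self-signed certificate instead of a CA-issued one (the subject name is just an example):</p>

```bash
#!/usr/bin/env bash
# Generate a throwaway self-signed certificate and inspect the fields
# a browser checks during the handshake. Purely local, no CA involved.
set -e
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=sub.example.com" \
    -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null
# Print who the certificate is issued to and its validity window
openssl x509 -in "$tmp/cert.pem" -noout -subject -dates
```

<p>A real certificate differs mainly in being signed by a trusted CA such as LetsEncrypt instead of by itself, which is what the browser ultimately verifies.</p>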
<h2 id="acme-and-certbot">ACME and Certbot</h2>
<p>ACME stands for Automated Certificate Management Environment and provides a
protocol enabling any webserver sitting under an actual domain name to
obtain the certificate from LetsEncrypt at no cost.</p>
<p>The official client implementing the ACME protocol is called Certbot and is
written in Python. It's a powerful client, but it has its share of issues
as well. Because it is a sort of swiss-army knife, it tries to handle many
tasks, and by its nature it is a little heavy on dependencies. I
specifically do not like that it adds lines into Nginx configuration files by
default. Another problem I had was on an Ubuntu machine: when 20.04 came out,
the repositories were slower to catch up and I had to patch
certbot's code manually, which was not a pleasant experience. This is also the
reason I am experimenting with Arch as a server.</p>
<p>Certbot is not the only available client speaking the ACME protocol. Heck,
the ACME protocol is available as
<a href="https://tools.ietf.org/html/rfc8555">RFC8555</a> and anyone can even obtain
a certificate from LetsEncrypt manually by following it. There are also
many
<a href="https://letsencrypt.org/docs/client-options/#other-client-options">3rd party clients</a>
available already that automate the process.</p>
<h2 id="enter-acme-sh">Enter acme.sh</h2>
<p>One such client is called acme.sh and, as its name suggests, it is a shell
script with (almost) no dependencies. This fact alleviates the problem of
slow repository updates almost entirely, because one can always just use git
to obtain the latest version, regardless of what the host operating system
repositories do. The acme.sh page cites:</p>
<blockquote>
<p>It's probably the easiest & smartest shell script to automatically issue
& renew the free certificates from Let's Encrypt.</p>
</blockquote>
<p>Let's see if this statement holds up.</p>
<h2 id="setup">Setup</h2>
<p>Start by setting up a DNS record of type A or CNAME for <code>sub.example.com</code>
pointing to the public IP address of the host where these steps are going
to be applied. DNS records can be set at any time, but it can take a while
for nameservers to propagate the changes, so it is better to do this first.</p>
<p>This guide <strong>assumes becoming a superuser</strong>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">su</span><span> -
</span></code></pre>
<p>Install acme.sh and Nginx, or alternatively <code>nginx-mainline</code>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pacman -S --needed</span><span> acme.sh nginx
</span></code></pre>
<p>Make sure there is nothing listening on port 443 used for HTTPS:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">ss -tuna </span><span>| </span><span style="color:#bf616a;">grep</span><span> :443
</span></code></pre>
<p>If there is something running there already, stop it.</p>
<h3 id="issue-the-certificate">Issue the certificate</h3>
<p>The next step makes use of the Application-Layer Protocol Negotiation
(ALPN), which is the initial part of the TLS handshake mentioned above.
Acme.sh is capable of issuing a certificate using ALPN mode. The
certificates are installed into <code>/root/.acme.sh/sub.example.com/</code>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">acme.sh --issue --alpn -d</span><span> sub.example.com
</span></code></pre>
<p>Now choose a location to put your certificates; examples:</p>
<ul>
<li><code>/etc/ssl/certs</code> is used by OpenSSL</li>
<li><code>/etc/letsencrypt/live</code> is used by Certbot</li>
<li><code>/etc/nginx/ssl</code> preferred by some users</li>
</ul>
<p>If the certs are used solely by Nginx, <code>/etc/nginx/ssl</code> is a good
choice:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">mkdir</span><span> /etc/nginx/ssl
</span></code></pre>
<p>It is important to set the right permissions for this folder to protect the
private key:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">chmod</span><span> 700 /etc/nginx/ssl
</span></code></pre>
<p>The folder has to be owned by the root user.</p>
<h3 id="configure-nginx">Configure Nginx</h3>
<p>Now copy the generated certificates there, paying attention to <code>reloadcmd</code>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">acme.sh --install-cert -d</span><span> sub.example.com \
</span><span style="color:#bf616a;"> --key-file </span><span>'</span><span style="color:#a3be8c;">/etc/nginx/ssl/sub.example.com.key</span><span>' \
</span><span style="color:#bf616a;"> --fullchain-file </span><span>'</span><span style="color:#a3be8c;">/etc/nginx/ssl/sub.example.com.cer</span><span>' \
</span><span style="color:#bf616a;"> --reloadcmd </span><span>"</span><span style="color:#a3be8c;">systemctl force-reload nginx</span><span>"
</span></code></pre>
<p><strong>Important:</strong> make sure to check the permissions. The <code>/etc/nginx/ssl</code>
folder should have 700, <code>.cer</code> files should have 644 and <code>.key</code> file should
have 600. Everything should be owned by root. Note that the ownership and
permissions are preserved automatically when the certificates are renewed.</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">find</span><span> /etc/nginx/ssl</span><span style="color:#bf616a;"> -printf </span><span>"</span><span style="color:#a3be8c;">%m %f\n</span><span>"
</span></code></pre>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>700 ssl
</span><span>600 sub.example.com.key
</span><span>644 sub.example.com.cer
</span></code></pre>
<p>Add the relevant data under the <code>server</code> block in the Nginx config. Not all
configuration directives are offered in the example below, just the most
relevant ones. Consider consulting the Nginx
<a href="http://nginx.org/en/docs/http/configuring_https_servers.html">documentation</a>
on HTTPS.</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>server {
</span><span> listen 443 ssl;
</span><span> server_name sub.example.com;
</span><span> ssl_certificate /etc/nginx/ssl/sub.example.com.cer;
</span><span> ssl_certificate_key /etc/nginx/ssl/sub.example.com.key;
</span><span> ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
</span><span> ssl_ciphers HIGH:!aNULL:!MD5;
</span><span>}
</span></code></pre>
<p>Lastly, start and enable Nginx service:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">systemctl</span><span> enable nginx.service</span><span style="color:#bf616a;"> --now
</span></code></pre>
<p>Now access the page via the browser to check if HTTPS is working.</p>
<p><strong>Note:</strong> when HTTPS served via Nginx works, consider switching to
obtaining the certificate via Nginx mode, because certificate renewal via
ALPN will not work anymore once Nginx is listening on port 443.
The reason ALPN was used in the first place is that it does not require any
dependencies (as opposed to standalone mode, which requires socat). The second
reason is that the Nginx validation mode may require a working Nginx
configuration in the first place, so it is better to start safe.</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">acme.sh --issue --nginx -d</span><span> sub.example.com
</span></code></pre>
<p>This is the 41st post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://en.wikipedia.org/wiki/Application-Layer_Protocol_Negotiation">https://en.wikipedia.org/wiki/Application-Layer_Protocol_Negotiation</a></li>
<li><a href="https://github.com/acmesh-official/acme.sh">https://github.com/acmesh-official/acme.sh</a></li>
<li><a href="https://serverfault.com/a/216480/505241">https://serverfault.com/a/216480/505241</a></li>
<li><a href="https://serverfault.com/a/259307/505241">https://serverfault.com/a/259307/505241</a></li>
<li><a href="https://stackoverflow.com/a/1796163/1972509">https://stackoverflow.com/a/1796163/1972509</a></li>
<li><a href="https://wiki.archlinux.org/index.php/Acme.sh">https://wiki.archlinux.org/index.php/Acme.sh</a></li>
<li><a href="https://www.howtoforge.com/getting-started-with-acmesh-lets-encrypt-client/">https://www.howtoforge.com/getting-started-with-acmesh-lets-encrypt-client/</a></li>
</ul>
Keep Git fork in sync2021-04-19T00:00:00+00:002021-06-26T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/keep-git-fork-sync/<p>The steps below explain how to keep a fork up to date with the upstream
branch of the original repository. I know this has already been documented many
times, but I was struggling with it for some time, until I found the
workflow that suits me best, so I documented it.</p>
<p>Create a fork in the UI, clone the forked repository and change directory:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> clone</span><span style="color:#bf616a;"> --recurse-submodules</span><span> git@github.com:peterbabic/a-forked-repository.git
</span><span style="color:#96b5b4;">cd</span><span> a-forked-repository
</span></code></pre>
<p>Add the upstream remote (only if it is not already present):</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> remote add upstream git@github.com:ORIGINAL-ACCOUNT/repository.git
</span></code></pre>
<p>Get back to the main branch, if not already there:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> checkout main
</span></code></pre>
<p>Fetch latest changes and add them to the repository:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> pull</span><span style="color:#bf616a;"> --rebase</span><span> upstream main
</span></code></pre>
<p>This command can however fail with the following error:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>CONFLICT (add/add): Merge conflict in ...
</span><span>error: could not apply 11fa9d20...
</span><span>Resolve all conflicts manually, mark them as resolved with
</span><span>"git add/rm &lt;conflicted_files&gt;", then run "git rebase --continue".
</span><span>You can instead skip this commit: run "git rebase --skip".
</span><span>To abort and get back to the state before "git rebase", run "git rebase --abort".
</span><span>Could not apply 11fa9d20...
</span></code></pre>
<p>The above means that there are local unsynchronized changes, introduced
for instance via the GitHub GUI. To get past them, first abort the
rebase:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> rebase</span><span style="color:#bf616a;"> --abort
</span></code></pre>
<p>Then run the rebase again, specifying a merge strategy option:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> pull</span><span style="color:#bf616a;"> --rebase -s</span><span> recursive</span><span style="color:#bf616a;"> -X</span><span> ours upstream main
</span></code></pre>
<p>This command replaces local conflicting files with the files from the
upstream <strong>without asking</strong>, meaning it is potentially dangerous. All that is
left is to push the changes back into the origin (the fork):</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> push</span><span style="color:#bf616a;"> --force-with-lease</span><span> origin main
</span></code></pre>
<p>Repeat <strong>last three commands</strong> to keep the forked repository updated.</p>
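<p>The effect of <code>-X ours</code> can be reproduced in a throwaway repository. Note the rebase twist: during a rebase, "ours" is the branch being rebased <em>onto</em>, i.e. the upstream side. All names below are illustrative:</p>

```bash
#!/usr/bin/env bash
# Reproduce an add/add conflict and resolve it with -X ours during a rebase.
set -e
tmp=$(mktemp -d)
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

git init -q -b main "$tmp/upstream"
git -C "$tmp/upstream" commit -q --allow-empty -m "base"
git clone -q "$tmp/upstream" "$tmp/fork"

# The fork and the upstream both add file.txt with different content
echo "fork version" > "$tmp/fork/file.txt"
echo "fork extra" > "$tmp/fork/extra.txt"
git -C "$tmp/fork" add file.txt extra.txt
git -C "$tmp/fork" commit -q -m "fork change"
echo "upstream version" > "$tmp/upstream/file.txt"
git -C "$tmp/upstream" add file.txt
git -C "$tmp/upstream" commit -q -m "upstream change"

# The add/add conflict is resolved in favour of the upstream side
git -C "$tmp/fork" pull -q --rebase -X ours origin main
cat "$tmp/fork/file.txt"
```

<p>After the pull, <code>file.txt</code> holds the upstream content while the non-conflicting <code>extra.txt</code> survives the rebase, matching the behaviour described above.</p>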
<p><strong>Tip:</strong> It is possible to skip writing <code>--rebase</code> with this setting:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> config</span><span style="color:#bf616a;"> --global</span><span> pull.rebase true
</span></code></pre>
<p>For completeness, here's how the remotes should look:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> remote</span><span style="color:#bf616a;"> --verbose
</span></code></pre>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>origin git@github.com:peterbabic/a-forked-repository.git (fetch)
</span><span>origin git@github.com:peterbabic/a-forked-repository.git (push)
</span><span>upstream git@github.com:ORIGINAL-ACCOUNT/repository.git (fetch)
</span><span>upstream git@github.com:ORIGINAL-ACCOUNT/repository.git (push)
</span></code></pre>
<p>This is the 40th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://github.com/tldr-pages/tldr/pull/5526#issuecomment-808965434">https://github.com/tldr-pages/tldr/pull/5526#issuecomment-808965434</a></li>
<li><a href="https://stackoverflow.com/a/44491614/1972509">https://stackoverflow.com/a/44491614/1972509</a></li>
<li><a href="https://stefanbauer.me/articles/how-to-keep-your-git-fork-up-to-date">https://stefanbauer.me/articles/how-to-keep-your-git-fork-up-to-date</a></li>
<li><a href="https://git-scm.com/docs/git-pull#Documentation/git-pull.txt---rebasefalsetruemergespreserveinteractive">https://git-scm.com/docs/git-pull#Documentation/git-pull.txt---rebasefalsetruemergespreserveinteractive</a></li>
<li><a href="https://sdqweb.ipd.kit.edu/wiki/Git_pull_--rebase_vs._--merge">https://sdqweb.ipd.kit.edu/wiki/Git_pull_--rebase_vs._--merge</a></li>
<li><a href="https://stackoverflow.com/a/3443225/1972509">https://stackoverflow.com/a/3443225/1972509</a></li>
</ul>
Nginx on Arch using Ansible2021-04-18T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/nginx-arch-using-ansible/<p>Since Arch, Nginx and Ansible are all pretty mature tools by
themselves, I thought that installing Nginx on an Arch system using an Ansible
playbook would be a matter of seconds. But judging from actual
experience, it is a little harder.</p>
<p>The official
<a href="https://github.com/nginxinc/ansible-role-nginx">ansible nginx role</a> does
not support Arch. Then there is a
<a href="https://github.com/geerlingguy/ansible-role-nginx">role</a> made by Jeff
Geerling, which does support Arch, but with a quirk: the role runs
sometimes (I was not able to isolate the conditions 100%, usually after completely
removing the Nginx package), but then won't run again.</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>TASK [geerlingguy.nginx : Ensure nginx service is running as configured.] ***************
</span><span>fatal: [5.189.129.182]: FAILED! => {"changed": false, "msg": "Unable to start service nginx: Job for nginx.service failed because the control process exited with error code.\nSee \"systemctl status nginx.service\" and \"journalctl -xeu nginx.service\" for details.\n"}
</span></code></pre>
<p>The reason is that Nginx won't restart with a provided <code>nginx.conf</code>
template.</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> systemctl restart nginx </span><span style="color:#65737e;"># fail
</span><span style="color:#bf616a;">sudo</span><span> journalctl</span><span style="color:#bf616a;"> -xeu</span><span> nginx
</span></code></pre>
<p>The journal contains:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>[emerg] 870#870: "pid" directive is duplicate in /etc/nginx/nginx.conf:4
</span></code></pre>
<p>When commenting out line 4, Nginx restarts flawlessly (see
<a href="https://github.com/geerlingguy/ansible-role-nginx/blob/1820e90b4cf7248f0914983beeb785bf15bb0571/templates/nginx.conf.j2#L4">templates/nginx.conf.j2#L4</a>).</p>
<p>The reason for this is that the PID file is also set in
<code>/usr/lib/systemd/system/nginx.service</code> via the line:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">ExecStart</span><span>=</span><span style="color:#a3be8c;">/usr/bin/nginx </span><span style="color:#bf616a;">-g </span><span>'</span><span style="color:#a3be8c;">pid /run/nginx.pid; error_log stderr;</span><span>'
</span></code></pre>
<p>If I understand it correctly, there is no need for L4 in <code>nginx.conf</code> when
managing Arch. More relevant information at
<a href="https://bugs.archlinux.org/task/46500">https://bugs.archlinux.org/task/46500</a></p>
<p>The relevant task ordering can be seen in
<a href="https://github.com/geerlingguy/ansible-role-nginx/blob/1820e90b4cf7248f0914983beeb785bf15bb0571/tasks/main.yml#L27-L37">tasks/main.yml#L27-L37</a>.</p>
<p>The <code>setup-Archlinux.yml</code> is run <em>before</em> the <code>nginx.conf</code> template is copied
over. Should there be another Arch-related task after "Copy" but
before "Ensure running", or perhaps an Arch-specific
template for the <code>nginx.conf</code> file?</p>
<p>In
<a href="https://github.com/geerlingguy/ansible-role-nginx/blob/1820e90b4cf7248f0914983beeb785bf15bb0571/templates/nginx.conf.j2#L1-L6">nginx.conf.j2#L1-L6</a>,
templating is impractical, as L3-L4 are the <em>only</em> lines that
are not inside a block, so I propose to put them in a jinja2 block as
well, as a path of least resistance. Not sure about L1 (not in a block
but no issue reported).</p>
<h2 id="solution">Solution</h2>
<ol>
<li>Edit <code>vars/Archlinux.yml</code> and <strong>remove</strong> L6 containing
<code>nginx_pidfile: /run/nginx.pid</code></li>
<li>Edit <code>templates/nginx.conf.j2</code> and put L5 into the conditional block:</li>
</ol>
<pre data-lang="jinja" style="background-color:#2b303b;color:#c0c5ce;" class="language-jinja "><code class="language-jinja" data-lang="jinja"><span>{% </span><span style="color:#b48ead;">if </span><span style="color:#bf616a;">nginx_pidfile </span><span style="color:#b48ead;">is </span><span style="color:#bf616a;">defined </span><span>%}
</span><span>pid {{ </span><span style="color:#bf616a;">nginx_pidfile </span><span>}};
</span><span>{% </span><span style="color:#b48ead;">endif </span><span>%}
</span></code></pre>
<p>I am not sure if this is the best solution, but it serves as a proof of concept
until upstream provides a fix.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://github.com/geerlingguy/ansible-role-nginx/issues/219">https://github.com/geerlingguy/ansible-role-nginx/issues/219</a></li>
</ul>
<p>This is the 39th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Automatically signed GitHub commits are puzzling2021-04-17T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/automatically-signed-github-commits-puzzling/<p>I wanted to finally start signing my commits, mainly because,
among other reasons, it increases the overall confidence in my work.
With GitHub's decision to display a yellow warning stating <em>Unverified</em>
next to the commit list, the trend towards signing will almost without a doubt
only continue.</p>
<p>This situation can, and maybe even will, evolve to a point where only
signed commits will be generally accepted wherever more than one person is
working with a repository, and that is a good thing. Properly signed work
greatly decreases the chances that malicious code will slip in, if nothing
else.</p>
<p>The problem lies in the word <em>properly</em>. There is a clear way to
sign commits <em>properly</em>. There is, however, no clear way once we move one
step further: how to handle the signing keys <em>properly</em>?</p>
<h2 id="handling-gnupg-keys">Handling GnuPG keys</h2>
<p>There is a ton of material available and I have read quite a good deal of
it recently. All this has led me to one conclusion: the whole PGP scene
(including OpenPGP and GnuPG) is a mess we probably cannot live without.</p>
<p>At first I thought I did not adopt the <code>gpg</code> workflow sooner because there
simply was no pressing need to. But there are actually two scenarios for not
learning how to use a tool or workflow that is otherwise used regularly and
considered important by other people doing the same thing:</p>
<ol>
<li>Not learning because of no pressing need, as mentioned</li>
<li>Not learning because the tool does not need learning, unless doing
something specific</li>
</ol>
<p>As an example of the second scenario I would mention the web bundler category:
tools like Webpack, Rollup and Parcel, or some of the newer players like
Snowpack, Vite and esbuild.</p>
<p>Too many people use these tools without ever learning how to use them, as
they come configured by someone else already, and for the purposes at hand
they usually just work. This argument can be applied to many other things,
such as the claim that using an IDE prevents learning git commands, or even the
argument's simpler form stating that using a GUI prevents learning the
underlying commands.</p>
<h2 id="github-signing-commits-automatically">GitHub signing commits automatically</h2>
<p>I was left somewhat puzzled when I found out that commits added via the
GitHub web interface get marked <em>Verified</em> automatically.
It caught me by surprise, because I knew I had not touched any PGP related
fields in the GitHub interface, so I did not know how they could become
signed. As a side note, I primarily use my self-hosted Gitea server and I
am pretty satisfied with it, which is why I do not keep up with all the
changes GitHub is rolling out.</p>
<p>I am not entirely sure how GitHub does the automatic commit signing, as I
have not read anything about that mechanism yet. <del>It is probably based on
the assumption that I am logged in securely, my email address is verified
and other similar pieces of information.</del> This <em>feature</em> reveals itself in
a strange light when working, for instance, on a collaborative pull request
and pushing separate commits via both channels: the GitHub web interface and
<code>git push</code> to my forked repository branch, tracked by that pull request.</p>
<p>Looking at commits made like this promotes chaos. The commits
come from the same person, but some are verified and some are unverified. I
believe that such a picture, with the words Verified and Unverified
alternating furiously, reduces confidence in the code I push. This can
also happen when someone does all the signing entirely manually and
sometimes forgets to sign, but in that scenario it is the user's choice to
adopt a workflow where they need to keep track of certain things (not
forgetting to sign each commit manually). In the former case, GitHub does the
decision making instead.</p>
<h2 id="should-everything-be-signed">Should everything be signed?</h2>
<p>Consider a hypothetical scenario, where GitHub forces a policy that every
single commit accepted into its ecosystem has to be signed and there is no
way around it.</p>
<p>Let's put all the political, economic and social implications aside,
all important by their nature nonetheless, and focus purely on some
technical implications. Would such a strong policy ensure that all the
commits would in fact be signed? The answer is yes. No signature, no push.
Problem solved.</p>
<p>But would it mean that all the commits are signed <em>properly</em>? What if
some user mistakenly pushed the master key used to sign the commits into the
same repository? And what if some user did
so deliberately, as an act of rebellion against the forced policy?</p>
<p>Would this mean that all the commits are signed? Yes, this requirement
would still hold. But would it mean that all the commits are signed
properly? Definitely not, though I have yet to understand all, or at least the
main, implications of such a scenario.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Even after all this research and years of telling myself "I have to
start using GnuPG soon", especially for signing commits, not even getting
into encrypted/signed emails whatsoever, I still did not start.</p>
<p>The reason is that it looks like there are too many ways to do it wrong. It
also looks like it pays off in the long run to do it right, but that
obviously requires some research and very likely some investment into
dedicated hardware. Either it all becomes more streamlined or I push myself
to finally learn and set it up. Everything is impossible until you do it.</p>
<p>This is the 38th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h3 id="update">Update</h3>
<p>Just a few days after publishing this post I found out that there is
only a single signature involved, and it is also published on a keyserver;
the information can be queried as follows:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">gpg --search-keys</span><span> 4AEE18F83AFDEB23
</span><span>
</span></code></pre>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>gpg: data source: https://hkps.pool.sks-keyservers.net:443
</span><span>(1) GitHub (web-flow commit signing) <noreply@github.com>
</span><span> 2048 bit RSA key 4AEE18F83AFDEB23, created: 2017-08-16
</span></code></pre>
<p>I would still like to understand it deeper; hopefully I will stumble upon
some post explaining the problem in more digestible pieces.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://lwn.net/Articles/734767/">https://lwn.net/Articles/734767/</a></li>
<li><a href="https://riseup.net/en/security/message-security/openpgp/gpg-best-practices">https://riseup.net/en/security/message-security/openpgp/gpg-best-practices</a></li>
<li><a href="https://wiki.archlinux.org/index.php/GnuPG">https://wiki.archlinux.org/index.php/GnuPG</a></li>
<li><a href="https://www.gnupg.org/gph/en/manual/c481.html">https://www.gnupg.org/gph/en/manual/c481.html</a></li>
<li><a href="https://www.reddit.com/r/linuxquestions/comments/bd87pt/what_is_a_reasonably_secure_way_to_store_pgp_keys/">https://www.reddit.com/r/linuxquestions/comments/bd87pt/what_is_a_reasonably_secure_way_to_store_pgp_keys/</a></li>
</ul>
On warning fatigue or why not paying attention2021-04-16T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/warning-fatigue-not-paying-attention/<p>Do you ever actually <em>feel</em> fatigue? How do you think about it?</p>
<h2 id="what-is-a-fatigue">What is fatigue?</h2>
<p>The definition of the term fatigue that best fits the discussed context,
found on <a href="https://www.lexico.com/definition/Fatigue">lexico.com</a>:</p>
<blockquote>
<p>A lessening in one's response to or enthusiasm for something, caused by
overexposure.</p>
</blockquote>
<p>If I had to interpret the above, I would say that when something bothers
me but I could not do absolutely anything about it for an extended period
of time, I would just stop giving a f... I would simply stop caring at all.</p>
<p>This might seem like a conscious process, but it very well may be rooted in
the unconscious part of the brain. If you knew all your life that a
particular traffic light is always red, would you ever look at it instead
of just checking the street?</p>
<p>That may not have been the best example of unconscious behavior, but
this story revolves around situations that are very similar to the traffic
light scenario. I picked traffic lights because they are easy to imagine.</p>
<h2 id="can-fatigue-be-felt">Can fatigue be felt?</h2>
<p>I am no expert on medical conditions, and I did not do much research
either. But it seems to me that fatigue is commonly connected to
tiredness, cynicism, hopelessness, exhaustion, trauma and burnout.</p>
<p>All these terms have some meaning, and there is a slight difference in
definition and actual meaning between one and another. Covering them all
is not the point of this article.</p>
<p>What's more, digging deeper, there are some more specialized terms or
medical conditions that are connected to fatigue, namely a <em>compassion</em>
fatigue, a <em>vicarious</em> trauma or even emotional exhaustion. I would argue
that some of these are quite far from what I am conveying here, so let's
get back to basics.</p>
<p>I think fatigue can be felt on some levels, otherwise there would not be
that many terms describing <em>some</em> of its symptoms. What I also think is
that, when there is no punishment for ignoring something, for instance that
ever-glowing red traffic light, the brain starts to ignore it actively.</p>
<h2 id="something-doesn-t-feel-right">Something doesn't feel right</h2>
<p>And surely, if that red light for once were green, I would unconsciously
know something is different, and that would in turn make me consciously
look around for cues, making it highly likely that I would find out that
the traffic light has finally changed color.</p>
<p>Precisely because the brain sends a signal telling us that something
doesn't feel right, or that something around here is <em>different</em> than
it used to be, we start ignoring things that <em>don't</em> change.</p>
<p>For instance, would you start thinking about the shape of the tree you
always walk around in the park, until it gets cut? Probably not, because
only after the tree goes missing from the landscape does the brain start
nudging you to look around for more details, as with the green light. It is
also connected to another, somewhat more famous example - what color is
every letter in the logo of the search engine (you know which one)?</p>
<p>The reason I am writing about all this is not because I experienced
burnout at my job and had to quit because I did not see absolutely any
way of continuing, and now I am mistakenly calling it fatigue. No, even
though burnout happened to me, my motives here are more profound.</p>
<h2 id="different-forms-of-fatigue">Different forms of fatigue</h2>
<p>Unlike compassion fatigue, a medical condition which leads to a
diminished ability to feel <em>for someone</em>, a <em>decision</em> fatigue for
instance appears to be something different. Apparently putting a different
noun in front of another one can change its meaning profoundly.</p>
<p>Decision fatigue can occur to us on any given day. After we make some
individual number of decisions on a given day, we simply do not feel like
making any more. We simply want to rest. I would not describe decision
fatigue as a medical condition, as the effects are more tied to the
person's energy levels at any given time. The next morning, after a dose of
good sleep, we are fresh and ready to make some good (and some bad)
decisions during that day.</p>
<p>Once I was talking to my dad about decision fatigue, which I had first
read about in the book Atomic Habits by James Clear, and he surprised me by
saying he had already read about it before, although in a different book.
That book described why one man, who went on and built a billion dollar
company with a fruit logo almost everyone around the world knows, wore a
black turtleneck every day. He did not want to make mundane decisions every
day, so he would not feel fatigued when making decisions that mattered.</p>
<h2 id="a-warning-fatigue">A warning fatigue</h2>
<p>Warning fatigue is similar to decision fatigue in that it is not a
medical condition, as far as I can tell. Where the two differ, however, is
in the replenishment. While decision fatigue needs actual rest and a
brief disconnection from the cause of the fatigue (the decision making),
warning fatigue replenishes only on a cue change.</p>
<p>What might be even more surprising is that the term warning fatigue appears
in connection with the design of User Interfaces (UI). I learned about
this term while reading an interesting post by Vincent Breitmoser. While
I am not entirely sure who coined the term, many hints point to the
researcher and author Jens Grossklags.</p>
<p>In simple terms, we tend to pay very little attention to warning messages
appearing on the screen of a device we actively use. More specifically,
if the message can be dismissed, or even has to be dismissed to continue
further, every time we do that, the warning becomes less important to us.
Eventually we dismiss the warning without thinking about its message at
all.</p>
<p>This is, loosely put, why we don't pay attention to warnings on the
screens.</p>
<p>This is the 37th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://www.in.tum.de/fileadmin/w00bws/cybertrust/papers/2019-DPH-Billmann.pdf">https://www.in.tum.de/fileadmin/w00bws/cybertrust/papers/2019-DPH-Billmann.pdf</a></li>
<li><a href="https://k9mail.app/2017/01/30/OpenPGP-Considerations-Part-II.html">https://k9mail.app/2017/01/30/OpenPGP-Considerations-Part-II.html</a></li>
</ul>
Install F-Droid on Arch via Anbox2021-04-15T00:00:00+00:002021-07-07T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/install-fdroid-arch-via-anbox/<p>F-Droid is an installable catalogue of FOSS applications for the Android
platform. Apps I like:</p>
<ul>
<li><a href="https://f-droid.org/en/packages/eu.faircode.email">FairEmail</a> as
feature-packed email client</li>
<li><a href="https://f-droid.org/en/packages/com.nutomic.syncthingandroid">Syncthing</a>
for file synchronization</li>
<li><a href="https://f-droid.org/en/packages/com.kunzisoft.keepass.libre">KeePassDX</a>
as a password manager</li>
<li><a href="https://f-droid.org/en/packages/org.mian.gitnex">GitNex</a> as a Gitea
client</li>
<li><a href="https://f-droid.org/en/packages/com.keylesspalace.tusky">Tusky</a> as
Pleroma client</li>
</ul>
<p>Sometimes I like to document their features quickly, so I run them on a
laptop.</p>
<h2 id="zen-linux">Zen Linux</h2>
<p>Zen is a result of a collaborative effort of kernel hackers to provide the
best Linux kernel possible for everyday systems.</p>
<p>Install <code>linux-zen</code> kernel
<a href="https://github.com/archlinux/svntogit-packages/commit/ec47edcc45f73b4946015e9f28a419e27db2fda3#diff-3e341d2d9c67be01819b25b25d5e53ea3cdf3a38d28846cda85a195eb9b7203a">compiled with</a>
<code>ashmem</code> and <code>binderfs</code>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> linux-zen linux-zen-headers
</span></code></pre>
<p><strong>Disclaimer:</strong> Consider doing research before using non-default
<a href="https://archlinux.org/packages/?name=linux">linux</a> kernel.</p>
<h3 id="grub-bootloader">GRUB bootloader</h3>
<p>Re-generate GRUB entries including <code>linux-zen</code>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> grub-mkconfig</span><span style="color:#bf616a;"> -o</span><span> /boot/grub/grub.cfg
</span></code></pre>
<p>Reboot and choose the <code>linux-zen</code> kernel in the GRUB menu. Verify the zen
kernel is booted:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> uname</span><span style="color:#bf616a;"> -r
</span><span style="color:#bf616a;">5.11.13-zen1-1-zen
</span></code></pre>
<p>The version will be different, depending on the actual release.</p>
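<p>For scripting, the same check can be reduced to a substring test (a
sketch; it only tells a zen build apart from a non-zen one, nothing more):</p>

```shell
# The zen kernel release string always contains "zen"
if uname -r | grep -q zen; then
    echo "zen kernel running"
else
    echo "not a zen kernel"
fi
```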
<h3 id="binderfs">binderfs</h3>
<p>Also verify that the kernel was configured properly:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> zgrep</span><span style="color:#bf616a;"> -e</span><span> ASHMEM</span><span style="color:#bf616a;"> -e</span><span> BINDER /proc/config.gz
</span><span style="color:#bf616a;">CONFIG_ASHMEM</span><span>=</span><span style="color:#a3be8c;">y
</span><span style="color:#bf616a;">CONFIG_ANDROID_BINDER_IPC</span><span>=</span><span style="color:#a3be8c;">y
</span><span style="color:#bf616a;">CONFIG_ANDROID_BINDERFS</span><span>=</span><span style="color:#a3be8c;">y
</span><span style="color:#bf616a;">CONFIG_ANDROID_BINDER_DEVICES</span><span>=""
</span></code></pre>
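<p>This check can be scripted as well. The sketch below counts the required
options in a captured config dump (a sample string here, so on a real system
substitute the output of <code>zgrep -e ASHMEM -e BINDER /proc/config.gz</code>):</p>

```shell
# Sample of the zgrep output above; replace with the real command's output
config='CONFIG_ASHMEM=y
CONFIG_ANDROID_BINDER_IPC=y
CONFIG_ANDROID_BINDERFS=y'

# Count options compiled in ('=y'); Anbox needs all three
count=$(printf '%s\n' "$config" | grep -c '=y$')
if [ "$count" -eq 3 ]; then
    echo "binder support OK"
fi
```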
<p>Anbox requires a binderfs directory; create it at boot by setting up
<code>/etc/tmpfiles.d/anbox.conf</code>:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>d! /dev/binderfs 0755 root root
</span></code></pre>
<p>Mount the binderfs mountpoint created above at boot as well via
<code>/etc/fstab</code>:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>none /dev/binderfs binder nofail 0 0
</span></code></pre>
<p>Including the <code>nofail</code> option makes for a smooth boot, as the host system does
not depend on Anbox.</p>
<h2 id="anbox-with-houdini">Anbox with Houdini</h2>
<p>Anbox is a container-based software for running Android on GNU/Linux
distributions.</p>
<p>Install Anbox image with Houdini (used for x86_64 hosts, so most laptops):</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">yay -S</span><span> anbox-git anbox-image-houdini
</span></code></pre>
<p>Enable Anbox service:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> systemctl enable</span><span style="color:#bf616a;"> --now</span><span> anbox-container-manager.service
</span></code></pre>
<p>Create a bridged network:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">nmcli</span><span> con add type bridge ifname anbox0 -- connection.id anbox-net ipv4.method shared ipv4.addresses 192.168.250.1/24
</span></code></pre>
<p>Open the Anbox window.</p>
<h2 id="f-droid-inside-anbox">F-Droid inside Anbox</h2>
<p>Download F-Droid files:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">wget</span><span> https://f-droid.org/F-Droid.apk https://f-droid.org/F-Droid.apk.asc
</span></code></pre>
<p>Verify the package's integrity:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">gpg --auto-key-retrieve --verify</span><span> F-Droid.apk.asc
</span></code></pre>
<p>Make sure <strong>Good signature</strong> is displayed and that the fingerprint matches the
<a href="https://f-droid.org/en/docs/Release_Channels_and_Signing_Keys/">F-Droid releases</a>.</p>
<h3 id="adb">adb</h3>
<p>Install adb from android-tools:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> android-tools
</span></code></pre>
<p>Now install F-Droid into Anbox:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">adb</span><span> install F-Droid.apk
</span></code></pre>
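<p>To confirm the install from the host, the container's package list can be
checked. The sketch below greps a captured dump (sample data here; on a real
setup the dump would come from <code>adb shell pm list packages</code>):</p>

```shell
# Sample 'pm list packages' output; F-Droid's application ID is org.fdroid.fdroid
packages='package:com.android.settings
package:org.fdroid.fdroid'

if printf '%s\n' "$packages" | grep -q '^package:org.fdroid.fdroid$'; then
    echo "F-Droid installed"
fi
```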
<p>Launch F-Droid in Anbox, wait till the repositories synchronize, install
favorite apps and run them on Arch!</p>
<p>This is the 36th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://wiki.archlinux.org/index.php/Anbox">https://wiki.archlinux.org/index.php/Anbox</a></li>
<li><a href="https://www.reddit.com/r/archlinux/comments/m2dioi/anyone_having_issues_with_enabling_the_modules/">https://www.reddit.com/r/archlinux/comments/m2dioi/anyone_having_issues_with_enabling_the_modules/</a></li>
<li><a href="https://www.reddit.com/r/archlinux/comments/cgqptu/what_is_a_zen/">https://www.reddit.com/r/archlinux/comments/cgqptu/what_is_a_zen/</a></li>
</ul>
Solution to tracepath no reply2021-04-14T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/solution-tracepath-no-reply/<p>Consider tracepath as a network diagnostic tool:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>tracepath peterbabic.dev
</span></code></pre>
<p>The output might be surprising:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span> 1?: [LOCALHOST] pmtu 1500
</span><span> 1: _gateway 0.748ms
</span><span> 1: _gateway 0.676ms
</span><span> 2: 192.168.66.1 1.564ms
</span><span> 3: no reply
</span><span> 4: no reply
</span><span> 5: no reply
</span><span> 6: abc.bcd.def.tld 30.697ms asymm 7
</span><span> 7: no reply
</span><span> 8: no reply
</span><span> 8: no reply
</span><span> 9: no reply
</span><span>10: no reply
</span><span> ...
</span><span>30: no reply
</span><span> Too many hops: pmtu 1500
</span><span> Resume: pmtu 1500
</span></code></pre>
<p>Ideally, the actual path is found. Instead, the maximum number of hops is
reached. The path exists however:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">traceroute</span><span> peterbabic.dev
</span></code></pre>
<p>Traceroute returns the path in under 10 hops (TTL). Matt's traceroute
outputs the equivalent:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">mtr -w</span><span> peterbabic.dev
</span></code></pre>
<p>Both outputs have been omitted here. The question is, why is tracepath
struggling?</p>
<h2 id="initial-port">Initial port</h2>
<p>I have tried all the <a href="https://manned.org/tracepath">options</a> tracepath
offers to no avail. One option is puzzling:</p>
<blockquote>
<p><code>-p</code> Sets the initial destination port to use</p>
</blockquote>
<p>What does <em>initial</em> port mean? There is no mention of tracepath under
<a href="https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=tracepath">well known ports</a>.</p>
<h2 id="well-known-ports">Well known ports</h2>
<p>There do exist, however, well-known ports that contain the string
<a href="https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=trace">*trace*</a>:</p>
<table><thead><tr><th><strong>Service Name</strong></th><th><strong>Port</strong></th><th><strong>Protocol</strong></th><th><strong>Description</strong></th></tr></thead><tbody>
<tr><td>ctf</td><td>84</td><td>tcp</td><td>Common Trace Facility</td></tr>
<tr><td>ctf</td><td>84</td><td>udp</td><td>Common Trace Facility</td></tr>
<tr><td>di-traceware</td><td>3041</td><td>tcp</td><td>di-traceware</td></tr>
<tr><td>di-traceware</td><td>3041</td><td>udp</td><td>di-traceware</td></tr>
<tr><td>rtraceroute</td><td>3765</td><td>tcp</td><td>Remote Traceroute</td></tr>
<tr><td>rtraceroute</td><td>3765</td><td>udp</td><td>Remote Traceroute</td></tr>
<tr><td>clever-ctrace</td><td>6687</td><td>tcp</td><td>CleverView for cTrace Message Service</td></tr>
<tr><td>speedtrace</td><td>33334</td><td>tcp</td><td>SpeedTrace TraceAgent</td></tr>
<tr><td>speedtrace-disc</td><td>33334</td><td>udp</td><td>SpeedTrace TraceAgent Discovery</td></tr>
<tr><td><strong>traceroute</strong></td><td>33434</td><td>tcp</td><td>traceroute use</td></tr>
<tr><td><strong>traceroute</strong></td><td>33434</td><td>udp</td><td>traceroute use</td></tr>
<tr><td>mtrace</td><td>33435</td><td>udp</td><td>IP Multicast Traceroute</td></tr>
<tr><td>dccp-ping</td><td></td><td>dccp</td><td>ping/traceroute using DCCP</td></tr>
</tbody></table>
<p>I did not know any of the mentioned services, apart from traceroute.
Looking around its
<a href="https://man.archlinux.org/man/core/traceroute/traceroute.8.en#default">manual</a>:</p>
<blockquote>
<p><strong>LIST OF AVAILABLE METHODS</strong></p>
<p><strong>default</strong> The traditional, ancient method of tracerouting. Used by
default.</p>
<p>Probe packets are udp datagrams with so-called "unlikely" destination
ports. The "unlikely" port of the first probe is 33434, then for each
next probe it is incremented by one. Since the ports are expected to be
unused, the destination host normally returns "icmp unreach port" as a
final response. (Nobody knows what happens when some application listens
for such ports, though).</p>
<p>This method is allowed for unprivileged users.</p>
</blockquote>
<p>Makes for some interesting reading. Who knew?</p>
<h2 id="the-magic-number-33434">The magic number 33434</h2>
<p>I decided to try the well-known traceroute port 33434 as the initial port
for tracepath:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">tracepath -p</span><span> 33434 peterbabic.dev
</span></code></pre>
<p>The path is found!</p>
<h2 id="a-range-of-options">A range of options</h2>
<p>Since the port number increments starting at 33434, it looks like there
should be a range. A hint at
<a href="https://en.wikipedia.org/w/index.php?title=Traceroute&oldid=1002482834#Implementations">Wikipedia</a>:</p>
<blockquote>
<p>On Unix-like operating systems, traceroute sends, by default, a sequence
of User Datagram Protocol (UDP) packets, with destination port numbers
ranging from 33434 to 33534;</p>
</blockquote>
<p>Testing with a higher initial port:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">tracepath -p</span><span> 33499 peterbabic.dev
</span></code></pre>
<p>The path is still found. In fact, my tests show it works approximately
according to the formula:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>initial port = 33534 - ( required hops + c )
</span></code></pre>
<p>Where <code>c</code> is a constant with a value around 10. I am not sure why.</p>
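<p>The formula can be sketched in shell arithmetic (both the formula and
<code>c ≈ 10</code> are my empirical observations, not documented tracepath
behavior):</p>

```shell
# Highest port of the traditional traceroute range, per the quote above
max_port=33534
c=10              # empirically observed constant

required_hops=30
initial_port=$(( max_port - (required_hops + c) ))
echo "$initial_port"    # 33494, leaving room for 30 incrementing probes
```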
<h2 id="conclusion">Conclusion</h2>
<p>Using tracepath command to find the path to the host over the network:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">tracepath</span><span> host.example.com
</span></code></pre>
<p>If the command ends with the unhelpful response:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>no reply
</span><span>no reply
</span><span>no reply
</span><span>...
</span><span>Too many hops.
</span></code></pre>
<p>Use port 33434 as an initial port:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">tracepath -p</span><span> 33434 host.example.com
</span></code></pre>
<p>This is the 35th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://knowledgebase.paloaltonetworks.com/KCSArticleDetail?id=kA10g000000ClqBCAS">https://knowledgebase.paloaltonetworks.com/KCSArticleDetail?id=kA10g000000ClqBCAS</a></li>
<li><a href="https://serverfault.com/questions/623996/how-to-enable-traceroute-in-linux-machine">https://serverfault.com/questions/623996/how-to-enable-traceroute-in-linux-machine</a></li>
<li><a href="https://www.speedguide.net/port.php?port=33434">https://www.speedguide.net/port.php?port=33434</a></li>
<li><a href="https://www.educba.com/linux-tracepath/">https://www.educba.com/linux-tracepath/</a></li>
</ul>
Restoring Nginx configuration on Arch2021-04-13T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/restoring-nginx-confing-arch/<p>While trying to learn Ansible on an Arch node, I made the decision to purge
the Nginx configuration:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> systemctl stop nginx.service
</span><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -Rnc</span><span> nginx
</span><span style="color:#bf616a;">sudo</span><span> rm</span><span style="color:#bf616a;"> -rf</span><span> /etc/nginx
</span></code></pre>
<p>Hoping that reinstalling <code>nginx</code> (or maybe <code>nginx-mainline</code>) would restore
all files:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> nginx
</span><span style="color:#bf616a;">sudo</span><span> systemctl start nginx.service
</span></code></pre>
<p>Unfortunately, starting <code>nginx.service</code> was no longer possible:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Job for nginx.service failed because the control process exited with error code.
</span></code></pre>
<p>Looking for possible cause in the system journal:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> journalctl</span><span style="color:#bf616a;"> -xeu</span><span> nginx
</span></code></pre>
<p>Proved to be fruitful:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>open() "/etc/nginx/mime.types" failed (2: No such file or directory) in
</span><span>/etc/nginx/nginx.conf:18
</span></code></pre>
<p>Accepted solution on
<a href="https://unix.stackexchange.com/a/606554/109352">Unix Stack Exchange</a>
suggested grabbing <code>mime.types</code> from
<a href="https://raw.githubusercontent.com/nginx/nginx/master/conf/mime.types">upstream</a>
source.</p>
<h2 id="package-file-ownership">Package file ownership</h2>
<p>The file <code>/etc/nginx/mime.types</code> had to come from <em>somewhere</em>;
probing the Nginx package:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pacman -Fl</span><span> nginx | </span><span style="color:#bf616a;">grep</span><span> mime
</span></code></pre>
<p>Showed no such file owned by the package, yet the fresh <code>/etc/nginx/nginx.conf</code>
requires it:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>...
</span><span>http {
</span><span> include mime.types;
</span><span> default_type application/octet-stream;
</span><span>...
</span></code></pre>
<p>Checking which package owns <code>mime.types</code>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pacman -F</span><span> /etc/nginx/mime.types
</span></code></pre>
<p>The file <code>/etc/nginx/mime.types</code> comes from the package
<a href="https://archlinux.org/packages/extra/any/mailcap/">mailcap</a>.</p>
<h2 id="package-relations">Package relations</h2>
<p>Why does a file residing in <code>/etc/nginx</code> come from the <code>mailcap</code>
package and not from <code>nginx</code>? Are the two packages related?</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pacman -Si</span><span> nginx | </span><span style="color:#bf616a;">grep -i</span><span> mailcap
</span></code></pre>
<p>The output shows that Nginx <em>Depends on</em> <code>mailcap</code>, this is a relief.
Reinstalling both:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> nginx mailcap
</span></code></pre>
<p>Pacman confirms that the missing <code>mime.types</code> file confuses it as well,
then restores the file:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>:: Proceed with installation? [Y/n]
</span><span>(2/2) checking keyring...
</span><span>(2/2) checking package integrity...
</span><span>(2/2) loading package files...
</span><span>(2/2) checking for file conflicts...
</span><span>(2/2) checking available disk space...
</span><span>warning: could not get file information for etc/nginx/mime.types
</span></code></pre>
<p>The Nginx service starts now.</p>
<h2 id="cross-verification">Cross verification</h2>
<p>I decided to check whether other files in <code>/etc/nginx</code> are owned by other
packages:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">ls</span><span> /etc/nginx | </span><span style="color:#bf616a;">xargs -I </span><span>% sh</span><span style="color:#bf616a;"> -c </span><span>'</span><span style="color:#a3be8c;">pacman -F /etc/nginx/%; printf "\n"</span><span>'
</span></code></pre>
<p>Looking through the somewhat human-readable output shows only <code>mime.types</code> as the outlier:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>etc/nginx/fastcgi.conf is owned by extra/nginx 1.18.0-2
</span><span>etc/nginx/fastcgi.conf is owned by community/nginx-mainline 1.19.6-2
</span><span>
</span><span>etc/nginx/fastcgi_params is owned by extra/nginx 1.18.0-2
</span><span>etc/nginx/fastcgi_params is owned by community/nginx-mainline 1.19.6-2
</span><span>
</span><span>etc/nginx/koi-utf is owned by extra/nginx 1.18.0-2
</span><span>etc/nginx/koi-utf is owned by community/nginx-mainline 1.19.6-2
</span><span>
</span><span>etc/nginx/koi-win is owned by extra/nginx 1.18.0-2
</span><span>etc/nginx/koi-win is owned by community/nginx-mainline 1.19.6-2
</span><span>
</span><span>etc/nginx/mime.types is owned by extra/mailcap 2.1.49-1
</span><span>
</span><span>etc/nginx/nginx.conf is owned by extra/nginx 1.18.0-2
</span><span>etc/nginx/nginx.conf is owned by community/nginx-mainline 1.19.6-2
</span><span>
</span><span>etc/nginx/scgi_params is owned by extra/nginx 1.18.0-2
</span><span>etc/nginx/scgi_params is owned by community/nginx-mainline 1.19.6-2
</span><span>
</span><span>etc/nginx/uwsgi_params is owned by extra/nginx 1.18.0-2
</span><span>etc/nginx/uwsgi_params is owned by community/nginx-mainline 1.19.6-2
</span><span>
</span><span>etc/nginx/win-utf is owned by extra/nginx 1.18.0-2
</span><span>etc/nginx/win-utf is owned by community/nginx-mainline 1.19.6-2
</span></code></pre>
<p>And to be extra sure that there is <em>just this one</em> alien file:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">ls</span><span> /etc/nginx | </span><span style="color:#bf616a;">xargs -I </span><span>% pacman</span><span style="color:#bf616a;"> -F</span><span> /etc/nginx/% | </span><span style="color:#bf616a;">cut -d</span><span>' '</span><span style="color:#bf616a;"> -f5 </span><span>| </span><span style="color:#bf616a;">grep -v</span><span> nginx
</span></code></pre>
<p>The above returns only the <code>extra/mailcap</code> package.</p>
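<p>The <code>cut -d' ' -f5</code> part works because the owning repository/package
is the fifth space-separated field of each <code>pacman -F</code> line. A sketch
on one captured line:</p>

```shell
# One line of 'pacman -F' output, as captured above
line='etc/nginx/mime.types is owned by extra/mailcap 2.1.49-1'

# Fields: 1=path 2=is 3=owned 4=by 5=repo/package 6=version
owner=$(printf '%s\n' "$line" | cut -d' ' -f5)
echo "$owner"    # extra/mailcap
```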
<h2 id="in-other-packages">In other packages</h2>
<p>One package providing a configuration file for a different package's
config directory happens elsewhere too:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pacman -Fl</span><span> syncthing | </span><span style="color:#bf616a;">grep</span><span> ufw
</span></code></pre>
<p>Package
<a href="https://archlinux.org/packages/community/x86_64/syncthing/">syncthing</a>
provides files into <code>/etc/ufw/</code>:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>syncthing etc/ufw/
</span><span>syncthing etc/ufw/applications.d/
</span><span>syncthing etc/ufw/applications.d/ufw-syncthing
</span></code></pre>
<p>I have already hinted about this behavior in one of the previous
<a href="/blog/install-syncthing-archlinux-arm/">posts</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://bugs.archlinux.org/task/56532">https://bugs.archlinux.org/task/56532</a></li>
<li><a href="https://bbs.archlinux.org/viewtopic.php?id=232313">https://bbs.archlinux.org/viewtopic.php?id=232313</a></li>
<li><a href="https://lists.archlinux.org/pipermail/arch-dev-public/2017-November/029036.html">https://lists.archlinux.org/pipermail/arch-dev-public/2017-November/029036.html</a></li>
</ul>
<p>This is the 34th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Using pacman with Ansible2021-04-12T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/using-pacman-with-ansible/<p>To use Ansible on an Arch-based system, either for local provisioning or on
a VPS such as Linode or Contabo (to name the ones I have tested), these
steps are required:</p>
<ul>
<li>Install Community General Collection:</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">ansible-galaxy</span><span> collection install community.general
</span></code></pre>
<p>Optionally check the <code>community.general.pacman</code> plugin is available:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">ansible-doc -l </span><span>| </span><span style="color:#bf616a;">grep</span><span> pacman
</span></code></pre>
<p>To clean up the above output, consider disabling deprecation warnings in your
<code>ansible.cfg</code>:</p>
<pre data-lang="ini" style="background-color:#2b303b;color:#c0c5ce;" class="language-ini "><code class="language-ini" data-lang="ini"><span style="color:#bf616a;">deprecation_warnings </span><span>= </span><span style="color:#d08770;">False
</span></code></pre>
<ul>
<li>Create an inventory file with an extension you prefer, for example
<code>inventory.cfg</code>:</li>
</ul>
<pre data-lang="ini" style="background-color:#2b303b;color:#c0c5ce;" class="language-ini "><code class="language-ini" data-lang="ini"><span style="color:#b48ead;">[arch]
</span><span style="color:#bf616a;">example</span><span>.com ansible_user=sudouser
</span><span>
</span><span style="color:#b48ead;">[arch:vars]
</span><span style="color:#bf616a;">ansible_python_interpreter</span><span>=/usr/bin/python3
</span></code></pre>
<p>Specifying the Python interpreter on Arch-based distributions reduces
warnings.</p>
<ul>
<li>Create a playbook, the convention is to name it <code>main.yml</code>:</li>
</ul>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>---
</span><span>- hosts: arch
</span><span> tasks:
</span><span> - name: Install a package
</span><span> community.general.pacman:
</span><span> name: neofetch
</span><span> state: present
</span></code></pre>
<ul>
<li>Run the playbook, providing the sudo password for <code>sudouser</code>, to install <code>neofetch</code>:</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">ansible-playbook -i</span><span> inventory.cfg main.yml</span><span style="color:#bf616a;"> --become --ask-become-pass
</span></code></pre>
<p>The playbook is equivalent of running the following on the system:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S --needed</span><span> neofetch
</span></code></pre>
<p>Note that since the <code>--needed</code> argument is passed in, already
installed packages are not re-installed. I could not find this in the
documentation, but it is quite clear
from the
<a href="https://github.com/ansible-collections/community.general/blob/main/plugins/modules/packaging/os/pacman.py#L325-L326">source comments</a>.</p>
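Thanks to that behavior, a playbook can declare a whole list of packages idempotently; a minimal sketch (the package names here are just examples):

```yaml
---
- hosts: arch
  tasks:
    - name: Install a list of packages
      community.general.pacman:
        name:
          - git
          - htop
          - tmux
        state: present
```

Re-running the playbook leaves already installed packages untouched.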
<p>Look at more examples in the official
<a href="https://docs.ansible.com/ansible/latest/collections/community/general/pacman_module.html#examples">docs</a>.</p>
<p>This is the 33rd post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Release: Gitea 1.14.02021-04-11T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/release-gitea-1-14-0/<p>Today marks the release date of Gitea 1.14.0, only a few days after the
release of the patch version 1.13.7. Although 1.14.0 is a minor release
from semver's perspective, it contains a lot of changes. I decided to
add comments on the features I am most excited about.</p>
<h2 id="minimal-openid-connect-implementation">Minimal OpenID Connect implementation</h2>
<p>The issue <a href="https://github.com/go-gitea/gitea/issues/14139">#14139</a> was
merged into this release as a next step towards a full OpenID implementation.
Implementing OpenID would help enable Single Sign-On (SSO)
functionality, either across Gitea instances or even across different
services on the network.</p>
<p>Currently, a similar but far more limited functionality is offered via
OAuth2. OAuth2 allows users to log into Gitea via a 3rd party provider, for
instance using a GitHub or Twitter account. This is very convenient, as the
user just needs a few clicks and their account's credentials and avatar (Gitea can
automatically download an avatar in a privacy-respecting way using a
federated avatar service <a href="https://www.libravatar.org/">libravatar.org</a>,
provided the instance has the feature enabled and the user has the avatar
set up. It is unclear, however, how big the intersection of the two is.) are
transferred into Gitea, but the process promotes centralization.</p>
<p>As Gitea is a self-hosted platform, it inherently promotes
decentralization. A major Gitea provider, <a href="https://codeberg.org/">https://codeberg.org/</a>, has no
OAuth2 authentication source enabled, specifically to protect the privacy of
its users. The downside of this is that every user has to manually create
and verify an account, which usually makes the difference between starring
the repository or filing an issue, and leaving the page.</p>
<p>Implementing Single Sign-On could in theory allow users that register on
one Gitea instance to log in to another Gitea instance without the need to
create an account there. As OpenID is an extension of OAuth2 and also
centralized by design, the pace of adoption at large has yet to be seen,
should SSO become a fully supported Gitea feature. Support for SSO can
however still be very useful for smaller organizations using custom tech
stacks, as far as the current trend goes.</p>
<h2 id="add-support-for-mastodon-oauth2-provider">Add support for Mastodon OAuth2 provider</h2>
<p>Speaking of OAuth2, an interesting feature described in
<a href="https://github.com/go-gitea/gitea/issues/13293">#13293</a> enables Gitea
instances to use Mastodon, a primary Fediverse microblogging
representative, as an OAuth2 provider. This is a great step that plays
nicely with decentralization trends. It's not uncommon these days for
developers to have a self-hosted <a href="https://gitea.io">Gitea</a> along with a
Fediverse <a href="https://babic.dev">microblog account</a>.</p>
<p>Better integration of the two is not something entirely useful for an
individual, but it becomes useful once more users stick around. Such
integration is especially welcome for new users, who can be guided
around with less confusion.</p>
<h2 id="display-svg-files-as-images">Display SVG files as images</h2>
<p>Finally, fans of the SVG image format can properly display it inside Gitea,
which is a great addition, detailed in issue
<a href="https://github.com/go-gitea/gitea/issues/14101">#14101</a>. Up until this
point, SVGs were not rendered but displayed as text, which was frustrating
at best. The feature is enabled by default after an upgrade, so there
should be an easy path towards using it even today. Depending on the
organization, messing up an upgrade during the weekend still leaves some time
for ironing the issues out until Monday comes.</p>
<h2 id="create-rootless-docker-image">Create Rootless Docker image</h2>
<p>The trend of rootless containers, set out by Podman, caught on to
Docker as well. Gitea's issue
<a href="https://github.com/go-gitea/gitea/issues/10154">#10154</a> enables the use of
this still somewhat experimental functionality. For many organizations and
even individuals, the code might be the bread and butter, so focusing on
its security is usually a high priority. Any working step in this
direction is usually welcome.</p>
<p>This is the 32nd post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Hate speech in the Fediverse?2021-04-10T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/hate-speech-in-fediverse/<p>The Fediverse is still an unknown for <em>a lot</em> of people. It is also
regulated by no central entity, and everyone can impose their own rules. Critics say
that such an environment attracts actors that are thrown out of other
communities, and that these actors are actually bad actors.</p>
<p>Why would anyone use the same thing the bad guys are using? Why would
anyone want to even touch it and possibly be exposed to them? Well this
depends on what the bad guys do. The worst thing I can currently imagine is
the act of spreading hate speech. I do see someone posting something that
would fall into this category <em>very occasionally</em>, but it pales in
comparison to the hate available in the comment sections under posts on
the other big centralized networks.</p>
<p>I have some especially bad experiences with a big centralized social
network that rhymes with Lakewood. Most of the time, when I somehow get
tricked into even wandering there, the comment section under the given post
aggregates people that are in hateful agreement, or are otherwise
negative.</p>
<p>While there are far, far fewer users in the Fediverse, the amount of
positive comments and posts I am encountering is huge, and I like it that
way. Sure, I did some blocking, by observing users and instances that I am
comfortable around and imitating their blocking patterns.</p>
<p>But I did so only recently. For approximately the first 9 months of use, there
were no blocks set up on my side. I cannot really tell much of a difference now,
after the blocks, but I wanted to try and see what happens.</p>
<p>This is the 31st post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
SSH prompting KeePassXC unlock2021-04-09T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/make-ssh-prompt-password-keepassxc/<p>Most servers I connect to have the option <code>PasswordAuthentication</code> set to
<code>no</code>, meaning I more often than not see an error:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Permission denied (publickey).
</span></code></pre>
<p>The reasons for this are multiple, but in my scenario, this happens because
there are no identities (keys) present in the SSH agent.</p>
<h3 id="ssh-agent">SSH agent</h3>
<p>An SSH agent holds OpenSSH keys used for public key authentication (the
public key authentication method is vastly superior to plain-text password
authentication and should be preferred whenever possible), and hands the
right one over to the server during connection. <em>OpenSSH keys</em> is in itself
a rather broad term. For the sake of this article, the term OpenSSH key
refers to the context of public key authentication. Public key
authentication actually requires a key pair - obviously the public key and
its matched counterpart, a private key.</p>
<p>The default SSH agent shipped with OpenSSH is <code>ssh-agent</code>. It has a rather
basic feature set, as is expected. There are software packages available
that can act as an SSH agent, and their feature sets differ. There can be
only one active SSH agent running on the system at a given time.</p>
<h3 id="keyrings-and-password-managers">Keyrings and Password managers</h3>
<p>Many distributions ship with
<a href="https://wiki.gnome.org/Projects/GnomeKeyring">Gnome Keyring</a>, a password
manager/keyring (branded also as User Credentials Manager, implying it is
able to handle multiple types of credentials). These two terms are not the
same, but for clarity, this article individually refers to Gnome Keyring as
<em>keyring</em> and to KeePassXC as a <em>password manager</em>. Both software packages
overlap greatly in the features primarily discussed here, thus the terms
appear together.</p>
<p>Both KeePassXC and especially Gnome Keyring can handle multiple types of
<em>secrets</em> or <em>user credentials</em> (some of them overlapping) such as
passwords, security certificates and Freedesktop.org secrets in addition
to the keys such as GnuPG keys, and the core of this topic, OpenSSH keys.
Keyring's/password manager's ability to handle OpenSSH keys means it also
features an SSH agent implementation.</p>
<h3 id="private-key-passphrase">Private key passphrase</h3>
<p>A private key can, and (unless there is a valid reason not to) should, be
additionally protected with a passphrase. If protected, a valid passphrase
is required before the key can be used for actual authentication.</p>
<p>Extending the previous definition, the keyring's/password manager's ability to
handle OpenSSH keys additionally means it can store, and later recall, the
stored private key's passphrase.</p>
<h3 id="passphrase-prompts">Passphrase prompts</h3>
<p>In the most basic sense, <code>ssh-agent</code> prompts for a passphrase when a
passphrase-protected key is added into it. This process is synchronous in the
sense that the terminal expects the user to provide the passphrase before the
next command can be issued. The agent prevents subsequent prompts for the
same key already added into it, until the key is finally removed from it.
Reducing the number of passphrase prompts is in fact the main
responsibility of the agent.</p>
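As an illustration, the synchronous flow with the plain <code>ssh-agent</code> looks roughly like this (the key path is just an example):

```shell
# Adding a passphrase-protected key: ssh-add blocks the terminal
# until the passphrase is typed in.
ssh-add ~/.ssh/id_ed25519

# While the key stays in the agent, no further prompts appear:
ssh-add -l                      # list loaded identities

# Removing the key means the next add will prompt again:
ssh-add -d ~/.ssh/id_ed25519
```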
<p>The keyring and the password manager work differently from <code>ssh-agent</code> with
respect to adding keys into their respective agent implementations. When
configured for this, the user is not required to provide passphrases for
individual keys, as the passphrases are stored within the keyring/password
manager's database. Instead, the user is prompted for a single master password
when unlocking the keyring (with Gnome Keyring there usually is no
<em>visible</em> prompt for the SSH passphrase at all. This is by design, as the
user account password is used as the keyring master password, unlocking the
keyring automatically after the user logs in) or the database.</p>
<p>The problem arises in situations where the prompt is invoked
asynchronously - when there is no terminal associated with it. This is
generally the case with programs employing a graphical user interface
(GUI). Since Gnome Keyring and KeePassXC have their own graphical
interfaces, they are both affected.</p>
<h3 id="ssh-askpass-and-ssh-add">SSH_ASKPASS and ssh-add</h3>
<p>Manipulating keys in the agent is done by <code>ssh-add</code>. While there are
currently multiple agents available, there's just a single widely used
<code>ssh-add</code>. All OpenSSH agents mentioned strive to be compatible with the
<code>ssh-add</code> command, otherwise they would not be very useful.</p>
<p>One particularly problematic scenario with the passphrase prompt and both
discussed GUI programs is the
<a href="https://man.archlinux.org/man/core/openssh/ssh-add.1.en#c"><code>-c</code> parameter</a>
of <code>ssh-add</code> (while technically KeePassXC and Gnome Keyring would work well
enough as agents without implementing this functionality, they did
implement it over time because of community requests for full
compatibility with <code>ssh-agent</code>), flagging the key added to the agent
for <em>confirmation</em> before being used. A key flagged like this is still
added to the keyring/password manager's agent, but a confirmation prompt is
expected to pop up in a floating dialog the moment the key is actually used. This
functionality in turn requires at least a properly configured
<a href="https://man.archlinux.org/man/core/openssh/ssh.1.en#SSH_ASKPASS">SSH_ASKPASS</a>
environment variable, pointing to a working graphical prompt
implementation.</p>
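A minimal sketch of flagging a key for confirmation; the askpass path is an assumption - substitute whichever askpass implementation is installed (<code>SSH_ASKPASS_REQUIRE</code> exists since OpenSSH 8.4):

```shell
# Point SSH_ASKPASS at a graphical prompt implementation
# (example path -- x11-ssh-askpass, ksshaskpass etc. also work):
export SSH_ASKPASS=/usr/bin/ssh-askpass
export SSH_ASKPASS_REQUIRE=prefer   # use it even with a terminal attached

# Add the key with the confirmation flag; every later use of the
# key pops up a dialog instead of being silent:
ssh-add -c ~/.ssh/id_ed25519
```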
<h3 id="keepassxc-and-user-confirmation">KeePassXC and user confirmation</h3>
<p>There is an important and useful setting in a KeePassXC entry under the SSH
tab, removing keys from the agent when the database is locked (because of its
tight coupling with the user login session, Gnome Keyring doesn't offer an option
to remove the keys from the agent, which can theoretically increase the potential
for their misuse. The original <code>ssh-agent</code> and specifically KeePassXC, both
operating separately from the user login session, have options to
automatically remove the keys from the agent). Enable it by checking:</p>
<ul>
<li><input disabled="" type="checkbox" checked=""/>
Remove key from agent when database is closed/locked</li>
</ul>
<p>It is worth noting that, in addition to storing the passphrase,
KeePassXC also offers to store the actual private key in the database as an
<em>attachment</em>. With the above setting enabled, further enabling user
confirmation under the same SSH tab is a great hindrance when both the key
and its passphrase are stored inside the same database. To increase
security, one should always strive to store different security factors in
different locations (it is a safer approach to store the different
<em>factors</em> of security in different places. For instance, when using a
password + 2FA/MFA in the form of TOTP, store the password in the manager and
the other on the phone. This applies to SSH keys as well, for instance storing
the private key in the filesystem <em>something I own</em> and its passphrase
in the brain <em>something I know</em>. In practice, storing <em>every</em> available
factor in one single secure <code>.kdbx</code> database with a strong master password
is still marginally better than omitting some available factors, such as
using just a password without TOTP or using an SSH key without a
passphrase).</p>
<p>Furthermore, when user confirmation is enabled but <code>askpass</code> is
misconfigured, SSH will stop working, displaying an error:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>sign_and_send_pubkey:
</span><span>signing failed for RSA "peter@peterbabic.dev" from agent: agent refused operation
</span></code></pre>
<p>With the above in mind, I would advise considering enabling user
confirmation <strong>only when either:</strong></p>
<ul>
<li>The actual private key and its passphrase are both stored in
<em>different</em> databases</li>
<li>Keys <em>do not</em> get automatically removed from the agent when the database is
locked</li>
</ul>
<h2 id="preparation">Preparation</h2>
<p>Before commands that require <code>ssh</code> can automatically prompt with the
KeePassXC database unlock dialog, there are some more steps required. Since
there can only be one SSH agent active at a time, other agents possibly
running have to be disabled or configured differently before the KeePassXC
agent implementation can be used.</p>
<ul>
<li>Start <code>ssh-agent</code> at login as described in
<a href="https://wiki.archlinux.org/index.php/SSH_keys#Start_ssh-agent_with_systemd_user">SSH_keys#Start_ssh-agent_with_systemd_user</a></li>
<li>When on distribution where Gnome Keyring is present, disable it:</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">chmod -x</span><span> /usr/bin/gnome-keyring-daemon
</span></code></pre>
<ul>
<li>
<p>Further, when on actual Gnome desktop environment, continue with
<a href="https://wiki.archlinux.org/index.php/GNOME/Keyring#Disable_keyring_daemon_components">GNOME/Keyring#Disable_keyring_daemon_components</a></p>
</li>
<li>
<p>Configure KeePassXC Settings accordingly:</p>
</li>
</ul>
<blockquote>
<p><strong>General</strong> > <strong>Startup</strong></p>
<ul>
<li><input disabled="" type="checkbox" checked=""/>
Start only a single instance of KeePassXC</li>
<li><input disabled="" type="checkbox" checked=""/>
Minimize window after unlocking database</li>
<li><input disabled="" type="checkbox" checked=""/>
Remember previously used databases
<ul>
<li><input disabled="" type="checkbox" checked=""/>
Load previously open databases on startup</li>
</ul>
</li>
</ul>
<p><strong>Security</strong> > <strong>Convenience</strong></p>
<ul>
<li><input disabled="" type="checkbox" checked=""/>
Lock databases when session is locked or lid is closed</li>
</ul>
<p><strong>SSH Agent</strong></p>
<ul>
<li><input disabled="" type="checkbox" checked=""/>
Enable SSH Agent integration</li>
</ul>
</blockquote>
<ul>
<li>Configure an actual SSH entry accordingly:</li>
</ul>
<blockquote>
<p><strong>Edit entry</strong> > <strong>Entry</strong></p>
<p>Password: [KEY_PASSPHRASE]</p>
<p><strong>Edit entry</strong> > <strong>SSH Agent</strong></p>
<ul>
<li><input disabled="" type="checkbox" checked=""/>
Add key to agent when database is opened/unlocked</li>
<li><input disabled="" type="checkbox" checked=""/>
Remove key from agent when database is closed/locked</li>
</ul>
<p><strong>Edit entry</strong> > <strong>SSH Agent</strong> > <strong>Private key</strong></p>
<p>Insert either valid [Attachment] OR [External file]</p>
</blockquote>
<h3 id="test-the-setup">Test the setup</h3>
<p>Before continuing, test everything works properly. Start by unlocking the
KeePassXC database and running <code>ssh-add -l</code>, the output should be:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>4096 SHA256:9k/Nfk7fijei+JFj8F7YfyF7fhFHElSmpuFuew9+8f3 email@example.com (RSA)
</span></code></pre>
<p>Locking the database and running <code>ssh-add -l</code> should yield precisely this:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>The agent has no identities.
</span></code></pre>
<p>If this is not the case, consider consulting the Links section at
the bottom. There are useful links from other people using KeePassXC to
their advantage.</p>
<h2 id="prompt-keepassxc-unlock-with-ssh">Prompt KeePassXC unlock with ssh</h2>
<p>When everything is prepared and tested, what remains is to implement
the script that gets called <em>before</em> any command employing <code>ssh</code> is run. To
do so, create the following two files:</p>
<ul>
<li>Paste the following line into <code>~/.ssh/config</code>:</li>
</ul>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>ProxyCommand $HOME/.ssh/keepassxc-prompt %h %p
</span></code></pre>
<ul>
<li>Last thing, create executable script <code>~/.ssh/keepassxc-prompt</code> referenced
above:</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#65737e;">#!/bin/bash
</span><span>
</span><span style="color:#b48ead;">until </span><span style="color:#bf616a;">ssh-add -l </span><span>&> /dev/null
</span><span style="color:#b48ead;">do
</span><span> </span><span style="color:#96b5b4;">echo </span><span>"</span><span style="color:#a3be8c;">Waiting for agent. Please unlock the database.</span><span>"
</span><span> </span><span style="color:#bf616a;">keepassxc </span><span>&> /dev/null
</span><span> </span><span style="color:#bf616a;">sleep</span><span> 1
</span><span style="color:#b48ead;">done
</span><span>
</span><span style="color:#bf616a;">/usr/bin/nc </span><span>"$</span><span style="color:#bf616a;">1</span><span>" "$</span><span style="color:#bf616a;">2</span><span>"
</span></code></pre>
<p>Done! When any command that relies on <code>ssh</code> is run while the database is
still locked, the KeePassXC unlocking dialog is invoked. After unlocking, the
keys are automatically added into the agent and the command succeeds. No
more <code>Permission denied (publickey).</code> for the valid keys!</p>
<p>This is the 30th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="frequently-asked-questions-faq">Frequently Asked Questions (FAQ)</h2>
<p><strong>Why is the <code>keepassxc-prompt</code> script so plain?</strong></p>
<p>It serves as a proof of concept. Modify as needed. One good modification is
to add a timeout.</p>
<p><strong>Isn't <code>ProxyCommand</code> used for SSH forwarding?</strong></p>
<p>Yes, but it appears to be working with this approach without problems.</p>
<p><strong>Why not just alias <code>ssh</code> to the wrapper script?</strong></p>
<p>There are other commands that rely on <code>ssh</code>. With an alias every other
command would not invoke the unlocking prompt, for instance <code>git pull</code>.</p>
<p><strong>Why not replace <code>/usr/bin/ssh</code> with a wrapper script?</strong></p>
<p>Many reasons. It prevents proper upgrades, and with something as security
delicate as OpenSSH, it is advised to use up-to-date software. Next, it would
rely on a single user's file, so it would break <code>ssh</code> for other users on the
system. Besides, it is plainly ugly.</p>
<p><strong>Can I use D-Bus to detect if the database is locked/unlocked?</strong></p>
<p>Sure. If there are also other keys in the agent that are managed
separately from KeePassXC, simply use this instead of <code>ssh-add -l</code> in the
<code>keepassxc-prompt</code> script:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">qdbus</span><span> org.keepassxc.KeePassXC.MainWindow /org/freedesktop/secrets/collection/Passwords org.freedesktop.Secret.Collection.Locked
</span></code></pre>
<p><strong>Could <code>/etc/sshrc</code> or, when enabled, <code>~/.ssh/rc</code> be used for this
purpose?</strong></p>
<p>No. These hook files run on the <em>server</em> when the connection is initiated.
For this to work, the hook must be run on the client. The latter also has
to be enabled via <code>PermitUserRC yes</code> in <code>/etc/ssh/sshd_config</code>.</p>
<p><strong>What are other situations where KeePassXC prompts for unlock
automatically?</strong></p>
<ol>
<li>When global Auto-type keyboard shortcut is used</li>
<li>When KeePassXC Freedesktop.org Secret Service is enabled and a program
needs access</li>
<li>On any
<a href="https://github.com/keepassxreboot/keepassxc-browser">browser plugin</a>
keyboard shortcut, assuming installed and configured properly</li>
</ol>
<blockquote>
<p><strong>Settings</strong> > <strong>Browser Integration</strong></p>
<ul>
<li><input disabled="" type="checkbox" checked=""/>
Enable browser integration
<ul>
<li><input disabled="" type="checkbox" checked=""/>
Request to unlock database if it is locked</li>
</ul>
</li>
</ul>
</blockquote>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://ferrario.me/using-keepassxc-to-manage-ssh-keys/#fn:3">https://ferrario.me/using-keepassxc-to-manage-ssh-keys/#fn:3</a></li>
<li><a href="https://c3pb.de/blog/keepassxc-secrets-service.html">https://c3pb.de/blog/keepassxc-secrets-service.html</a></li>
<li><a href="https://rtfm.co.ua/en/keepass-ssh-keys-passwords-storage-and-decryption-on-linux/">https://rtfm.co.ua/en/keepass-ssh-keys-passwords-storage-and-decryption-on-linux/</a></li>
<li><a href="https://grabski.me/tech,/linux/2020/09/02/automatically-unlock-keepassxc-on-startup-and-after-lock-screen/">https://grabski.me/tech,/linux/2020/09/02/automatically-unlock-keepassxc-on-startup-and-after-lock-screen/</a></li>
<li><a href="https://avaldes.co/2020/01/28/secret-service-keepassxc.html">https://avaldes.co/2020/01/28/secret-service-keepassxc.html</a></li>
<li><a href="https://isamert.net/2018/05/04/automatize-your-logins-with-gnome-keyring-and-optionally-with-keepassxc-.html#keepassxc">https://isamert.net/2018/05/04/automatize-your-logins-with-gnome-keyring-and-optionally-with-keepassxc-.html#keepassxc</a></li>
<li><a href="https://github.com/keepassxreboot/keepassxc/wiki/Using-DBus-with-KeePassXC">https://github.com/keepassxreboot/keepassxc/wiki/Using-DBus-with-KeePassXC</a></li>
<li><a href="https://man.archlinux.org/man/ssh_config.5#ProxyCommand">https://man.archlinux.org/man/ssh_config.5#ProxyCommand</a></li>
<li><a href="https://stackoverflow.com/questions/58187257/how-do-i-run-a-local-command-before-starting-ssh-connection-and-after-ssh-connec">https://stackoverflow.com/questions/58187257/how-do-i-run-a-local-command-before-starting-ssh-connection-and-after-ssh-connec</a></li>
<li><a href="https://unix.stackexchange.com/questions/44307/can-ssh-configs-proxycommand-run-a-local-command-before-connecting-to-a-remote">https://unix.stackexchange.com/questions/44307/can-ssh-configs-proxycommand-run-a-local-command-before-connecting-to-a-remote</a></li>
</ul>
Gnome Shell 40 upgrade2021-04-08T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/gnome-shell-forty-upgrade/<p>Performing my mundane periodic software package upgrade on a laptop I did
not expect anything spectacular to happen. The
<a href="/blog/arch-news-pacman-hook-tip/">Informant</a> did not stopped the upgrade
due to some breaking news. There was no problem with mirrors, key
signatures, cyclic dependencies or incompatible packages. Nothing at all. I
know, I should be grateful that the maintainers did their job pretty well,
so my life wasn't complicated needlessly. And I am.</p>
<p>Nevertheless, not too long ago I was forced to fall back from Gnome Shell
to XFCE due to a bug I have already described
<a href="/blog/solutions-buggy-system-package/">in detail</a> previously. When that
bug got fixed, I got back to Gnome Shell, fixed a
<a href="/blog/hide-blueman-applet-gnome-shell/">small issue</a> and hoped I could use
it again, only to find that now the Wi-Fi dialog could not be closed. Since I did
not have the time nor the nerve to deal with that back then, I went on using
XFCE, since I had it configured already. There were mostly no problems but a
few, though those have been a part of the XFCE design for at least as long as I
can remember, so it was generally pretty usable.</p>
<p>So, back to the upgrading process from the beginning: I shifted my
focus to the packages themselves. Surely something interesting had happened
there. I knew that Gnome Shell 40 should have been released this or the previous
month, but I was not sure when exactly. When I saw it in the list I got
thrilled - maybe it will be usable again. And since it is a major release, maybe
there will be something interesting in it as a bonus.</p>
<h2 id="first-impressions">First impressions</h2>
<p>Before logging in I made sure to look at the official Gnome Shell
"Forty" <a href="https://forty.gnome.org/">introductory page</a>, which from the design
perspective is very playful and quite pleasing, outlining the newest
features. As it turned out, version 40 is a pretty damn major release, so I
went in.</p>
<p>After logging in, there were some problems. It is worth noting that they would
not have been there had I used a freshly created user. Obviously I did not
want to do that. I decided to try to solve whatever popped up.</p>
<h3 id="appindicator">AppIndicator</h3>
<p>The tray notification area had no icons. I am used to seeing
<a href="https://github.com/flameshot-org/flameshot">Flameshot</a> there, but more
importantly, <a href="https://keepassxc.org/">KeePassXC</a>. KeePassXC changes icon
when locked/unlocked, so it is a visual cue. This was solved by installing
<code>gnome-shell-extension-appindicator</code> from the community repository which in
turn removed <code>gnome-shell-extension-appindicator-git</code> from AUR. After
reboot the icons were seated nicely.</p>
<h3 id="keyboard-shortcuts">Keyboard shortcuts</h3>
<p>Next in line, many keyboard shortcuts did not work very well. I decided
to use the dconf editor's <em>revert recursively</em> on all Gnome settings, but
this did not work, as the dconf editor kept crashing this way. I then
resorted to restoring all keyboard shortcuts manually, which worked, but I
obviously had to redefine them back to my preferred configuration. After
the tedious click-through cycle, all the shortcuts worked well.</p>
<p>I was using vertical workspaces in previous Gnome Shell releases and now
they are gone, but this is more in line with the XFCE workflow I had been
sticking to during previous years, so it required only minor adjustments.
The last thing I had to do was to re-assign Flameshot to the PrtSc button.</p>
<h2 id="final-words">Final words</h2>
<p>Even though I have been using Forty for just under two hours now, I do like
it. Personally, I do not mind the new touchpad features, which may or may
not have already started Internet flame wars about <em>stolen</em> or
<em>copycat</em> features.</p>
<p>Recent experiences made me focus on the stability of Gnome Shell more,
and luckily there have been no crashes or obvious bugs so far. With the most
apparent issues affecting my setup neatly ironed out, I can focus on its
visual eye candy and whatnot.</p>
<p>This is a 29th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Feature: task list in Gitea issues2021-04-07T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/feature-task-lists-gitea-issues/<p>There is a feature in Gitea that got me a little puzzled recently.</p>
<p><img src="https://peterbabic.dev/blog/feature-task-lists-gitea-issues/feature-gitea-task-list.png" alt="A task list progress is shown in the issues list" /></p>
<p>At first, I thought it was related to Project Milestones (a Kanban board in
Gitea). After clicking around at everything I could find, and even removing
all the boards and milestones, the icon with a progress bar persisted.</p>
<p>Later I found out that GitHub has a similar functionality called
<strong>task lists</strong> (Gitea tries to be feature compatible with
GitHub), which are the checkboxes in the issue description:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>- [x] Finish my edits
</span><span>- [ ] Push my commits to Gitea
</span><span>- [ ] Open a pull request
</span></code></pre>
<p>Users that have repository write permissions can then edit the issue and
check some of the boxes within the Markdown, adjusting the progress bar
seen in the picture above.</p>
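<p>The progress bar appears to be a simple ratio of checked items to all task list items. As a rough illustration (not Gitea's actual code), the two counts can be pulled out of an issue body with <code>grep</code>:</p>

```shell
# count checked and total task list items in an issue body (illustration only)
body='- [x] Finish my edits
- [ ] Push my commits to Gitea
- [ ] Open a pull request'

checked=$(printf '%s\n' "$body" | grep -c '^- \[x\]')
total=$(printf '%s\n' "$body" | grep -c '^- \[[x ]\]')
echo "$checked of $total tasks done"
```

<p>For the list above this prints <code>1 of 3 tasks done</code>, matching the progress icon in the screenshot.</p>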
<p>The reason I could not find out what it is is that there is no title on
hover. I even tried looking at the HTML source to see if there were any
hints, usually in the form of CSS class names, but I could not find anything
helpful. Maybe I should open an issue upstream to discuss this, but then
again, maybe it is very obvious to everyone except me, I am not sure.</p>
<p>This is a 28th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
How to use flashrom on Archlinux ARM2021-04-06T00:00:00+00:002021-06-19T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/how-use-flashrom-archlinux-arm/<p>By corrupting a bootloader on my router's motherboard I managed to
soft-brick it. A bootloader is usually the first batch of software
instructions the motherboard executes when powered up. Soft-bricking means
that the device won't boot up, but can be brought back to life, or
de-bricked.</p>
<p>The de-bricking procedure depends on the steps that caused the corruption
in the first place. Replacing the corrupted bootloader instructions with the
correct ones can make the device bootable again, provided nothing else got
damaged.</p>
<p>On motherboards without tight space constraints, a category into which
almost all consumer electronics except cell phones fall, the bootloader
program may be stored on an easily identifiable flash chip. Many such flash
chips use the same physical package and follow a common pinout. This is
convenient for the manufacturer, as hooking the chip up to a
<em>programmer</em> requires just a specialized clip tool, and it takes
very little effort to burn the actual firmware into the device. A
programmer in this context is another device that translates a file on a
disk into the low-level electronic impulses the chip can understand - in
this scenario, to store the program in the non-volatile memory.</p>
<p>But this setup is not convenient only for the manufacturer. In fact, it is
also convenient for any user determined to modify or upgrade the
functionality of such a device. The motives for this are many: fixing a
bug, increasing the security of a device, addressing its privacy concerns,
or making the device do
<a href="https://www.jbprojects.net/projects/wifirobot/">something completely different</a>.</p>
<h2 id="raspberry-pi-as-a-programmer">Raspberry Pi as a programmer</h2>
<p>Many people these days have a spare Raspberry Pi lying around. And of
course it can be re-purposed as a programmer in many situations, as it
boasts many peripherals: either high-level ones like Ethernet and Wi-Fi to
connect to a computer, or low-level ones such as UART, I2C and SPI to
connect to the chip on the other end.</p>
<p>The chip I was talking to was a <code>Winbond W25Q64.V</code>, a 64Mb chip that
speaks SPI and a common part on the market; apart from TP-Link routers it
can also be found on ThinkPad notebooks.</p>
<p>Either uncomment
<a href="https://archlinuxarm.org/wiki/Raspberry_Pi">a line in <code>/boot/config.txt</code></a>:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>device_tree_param=spi=on
</span></code></pre>
<p>Or use the
<a href="https://aur.archlinux.org/packages/raspi-config-git/">raspi-config</a> GUI
available from AUR:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> raspi-config
</span></code></pre>
<blockquote>
<p>Interfacing options > SPI > Yes</p>
</blockquote>
<p>Reboot was required on my device:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> systemctl reboot
</span></code></pre>
<p>After reboot, make sure that there is an output (see
<a href="https://www.raspberrypi-spy.co.uk/2014/08/enabling-the-spi-interface-on-the-raspberry-pi/">raspberrypi-spy.co.uk</a>):</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">lsmod </span><span>| </span><span style="color:#bf616a;">grep</span><span> spi_
</span></code></pre>
<p>The devices located at <code>/dev/spidev*</code> should be available:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">ls</span><span> /dev/spidev*
</span></code></pre>
<p>Raspberry Pi 3 exposes two chip selects on the hardware SPI0 port, CE0
as <code>spidev0.0</code> and CE1 as <code>spidev0.1</code>.</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>crw------- 1 root root 153, 0 Mar 15 16:07 /dev/spidev0.0
</span><span>crw------- 1 root root 153, 1 Mar 15 16:07 /dev/spidev0.1
</span></code></pre>
<p>Note that the ports are owned by the <code>root</code> group, so using
them will require elevated permissions.</p>
<h2 id="wiring">Wiring</h2>
<p>Below is a wiring table for the flash chip used. For other SOIC-8 flash
chips, this table can often be reused, as many such chips are pin
compatible. Most guides, including this one, assume SPI0 is used.</p>
<table><thead><tr><th>Flash Pin</th><th>Flash Meaning</th><th>Pi SPI0</th><th>Pi SPI1</th><th>Pi Meaning</th></tr></thead><tbody>
<tr><td>1</td><td>CS</td><td>24</td><td>36</td><td>CS0</td></tr>
<tr><td>2</td><td>DO</td><td>21</td><td>35</td><td>MISO</td></tr>
<tr><td>3</td><td>WP</td><td>17</td><td>17</td><td>3V3</td></tr>
<tr><td>4</td><td>GND</td><td>25</td><td>25</td><td>GND</td></tr>
<tr><td>5</td><td>DI</td><td>19</td><td>38</td><td>MOSI</td></tr>
<tr><td>6</td><td>CLK</td><td>23</td><td>40</td><td>SCLK</td></tr>
<tr><td>7</td><td>HOLD</td><td>17</td><td>17</td><td>3V3</td></tr>
<tr><td>8</td><td>VCC</td><td>17</td><td>17</td><td>3V3</td></tr>
</tbody></table>
<h2 id="flashrom-installation">Flashrom installation</h2>
<blockquote>
<p><strong>flashrom</strong> is a utility for identifying, reading, writing, verifying
and erasing flash chips. It is designed to flash
BIOS/EFI/coreboot/firmware/optionROM images on mainboards,
network/graphics/storage controller cards, and various other programmer
devices. (<a href="https://flashrom.org/Flashrom">flashrom.org/Flashrom</a>)</p>
</blockquote>
<p>flashrom is available in the Arch community repositories:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> flashrom
</span></code></pre>
<p>Verify the installation by running <code>flashrom</code>:</p>
<blockquote>
<p>Please select a programmer with the --programmer parameter. Valid choices
are:<br />dummy, ft2232_spi, serprog, buspirate_spi, dediprog,
developerbox, pony_spi, linux_mtd, <strong>linux_spi</strong>, usbblaster_spi,
pickit2_spi, ch341a_spi, digilent_spi, stlinkv3_spi.</p>
</blockquote>
<p>What will work with Pi is the <code>linux_spi</code> programmer:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">flashrom -p</span><span> linux_spi
</span></code></pre>
<blockquote>
<p>Using clock_gettime for delay loops (clk_id: 1, resolution:
1ns).<br />Using default 2000kHz clock. Use 'spispeed' parameter to
override.<br />No SPI device given. Use <strong>flashrom -p
linux_spi:dev=/dev/spidevX.Y</strong><br />Error: Programmer initialization
failed.</p>
</blockquote>
<p>The tool obviously offers helpful usage tips; the full programmer
parameter looks like so:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">flashrom -p</span><span> linux_spi:dev=/dev/spidev0.0
</span></code></pre>
<blockquote>
<p>Using clock_gettime for delay loops (clk_id: 1, resolution:
1ns).<br />Using default 2000kHz clock. Use 'spispeed' parameter to
override.<br />linux_spi_init: failed to open /dev/spidev0.0:
<strong>Permission denied</strong><br />Error: Programmer initialization failed.</p>
</blockquote>
<h3 id="note-on-permissions">Note on permissions</h3>
<p>On a Debian based Raspberry Pi distribution, adding a user to the <code>spi</code>
group and re-logging in would suffice. On Archlinux ARM there is no such
group by default and the <code>/dev/spidev*</code> are owned by <code>root</code>. To use the SPI
ports, either use <code>sudo flashrom</code> or
<a href="https://archlinuxarm.org/wiki/Raspberry_Pi">alarm udev rules</a>.</p>
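<p>For the curious, such a udev rule could look roughly like this - the file name and group name here are my own choice, not taken from the alarm wiki, so adapt as needed:</p>

```
# /etc/udev/rules.d/90-spi.rules (hypothetical file name)
# make the spidev devices accessible to members of a dedicated group
SUBSYSTEM=="spidev", GROUP="spi", MODE="0660"
```

<p>A matching <code>spi</code> group would have to be created first and the user added to it, followed by a re-login.</p>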
<h2 id="flashrom-usage">Flashrom usage</h2>
<p>Probe the chip, making sure the wiring and the setup is correct:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">flashrom -p</span><span> linux_spi:dev=/dev/spidev0.0
</span></code></pre>
<p>If the setup is right, the chip is recognized:</p>
<blockquote>
<p>Found Winbond flash chip "W25Q64.V" (8192 kB, SPI) on linux_spi.</p>
</blockquote>
<p>Sometimes multiple chip definitions are detected (see the
<a href="https://openwrt.org/toh/tp-link/archer_mr200#debricking">OpenWRT debricking page</a>):</p>
<blockquote>
<p>Found Macronix flash chip "MX25L6405" (8192 kB, SPI) on linux_spi.<br />
Found Macronix flash chip "MX25L6405D" (8192 kB, SPI) on linux_spi.<br />
Found Macronix flash chip "MX25L6406E/MX25L6408E" (8192 kB, SPI) on
linux_spi.<br /> Found Macronix flash chip
"MX25L6436E/MX25L6445E/MX25L6465E/MX25L6473E/MX25L6473F" (8192 kB, SPI)
on linux_spi.</p>
<p>Multiple flash chip definitions match the detected chip(s): "MX25L6405",
"MX25L6405D", "MX25L6406E/MX25L6408E",
"MX25L6436E/MX25L6445E/MX25L6465E/MX25L6473E/MX25L6473F"<br /> Please
specify which chip definition to use with the -c <chipname> option.<br /></p>
</blockquote>
<p>If this is the case, help it pick the right chip type, which might be
written on its physical package:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">flashrom -p</span><span> linux_spi:dev=/dev/spidev0.0</span><span style="color:#bf616a;"> -c</span><span> MX25L6405D
</span></code></pre>
<blockquote>
<p>Found Macronix flash chip "MX25L6405D" (8192 kB, SPI) on linux_spi.</p>
</blockquote>
<p>With the formalities sorted out, store the flash contents into file
(provide <code>-c</code> parameter if needed):</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">flashrom -p</span><span> linux_spi:dev=/dev/spidev0.0</span><span style="color:#bf616a;"> -r</span><span> original.bin
</span></code></pre>
<blockquote>
<p>Found Winbond flash chip "W25Q64.V" (8192 kB, SPI) on linux_spi. Reading
flash... done.</p>
</blockquote>
<p><strong>Warning:</strong> The following command rewrites contents of the flash chip
under consideration, potentially bricking/damaging your device.</p>
<p>Write flash contents from the file into the flash:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">flashrom -p</span><span> linux_spi:dev=/dev/spidev0.0</span><span style="color:#bf616a;"> -w</span><span> new.bin
</span></code></pre>
<blockquote>
<p>Found Winbond flash chip "W25Q64.V" (8192 kB, SPI) on
linux_spi.<br />Reading old flash chip contents... done.<br />Erasing and
writing flash chip... Erase/write done.<br />Verifying flash... VERIFIED.</p>
</blockquote>
<p>Given the new file's contents are a valid instruction set with correct
data, possibly a bootloader, the device should now boot, if that was the goal.</p>
<p>This is a 27th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Arch news pacman hook tip2021-04-05T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/arch-news-pacman-hook-tip/<p>Part of the Arch Linux system maintenance is to actively read
<a href="https://archlinux.org/news/">latest news</a>. Mostly anyone who uses Arch has
come across this piece of advice: read the news before updating! In other
words, before running the dreaded <code>sudo pacman -Syu</code> command, one should be
prepared to act upon any breaking changes by making sure to read about them
first.</p>
<p>The sad reality is that people are lazy. I am lazy. Either I forget to read
the news, or I blatantly ignore them, so I gain a false right to brag about
the damage that has been done to me on my social networks. Well, maybe
network, I only publish posts in the Fediverse right now. I wish the system
would tell me to read the news before the installation.</p>
<h2 id="hook-types">Hook types</h2>
<p>Of course, there is a way to do this automatically. Arch Linux package
manager, <code>pacman</code> offers a piece of functionality called
<a href="https://wiki.archlinux.org/index.php/Pacman#Hooks">hooks</a>. I have written
about hooks previously when
<a href="/blog/keep-gnome-shell-settings-dotfiles-yadm/">streamlining yadm</a> or
<a href="/blog/prevent-push-when-skipping-cypress-tests/">hacking on Cypress</a> and
even when
<a href="/blog/how-update-gooogle-calendar-pre-push-hook/">automating calendar</a> (I
do not use that anymore). All these were git hooks.</p>
<p>Pacman however is using a different kind of hook, an
<a href="https://man.archlinux.org/man/alpm-hooks.5">alpm hook</a>. The abbreviation
stands for Arch Linux Package Management. Although the documentation looks
very solid, I must admit that this is exactly the type of thing I do not
want to study. From my understanding, I could probably use it for only
this one use case - preventing pacman from updating the system before I
have read the fresh news - unless I became an Arch contributor, a
maintainer or something similar. I wish someone had done this already.</p>
<h2 id="enter-informant">Enter Informant</h2>
<p>Fortunately, someone has already made this work: a pacman hook that
prevents updating unless the user confirms they have read the news. It's called
<a href="https://github.com/bradford-smith94/informant">Informant</a>. I am quite
surprised I did not find about it sooner, the
<a href="https://wiki.archlinux.org/index.php/System_maintenance#Read_before_upgrading_the_system">system maintenance</a>
page about <em>Reading before upgrading the system</em> has been mentioning it
since November 2019. It probably shows how little I cared about that
specific paragraph. Better late than never.</p>
<p>Informant is exactly a piece of technology that automates the whole
workflow, saving time and cognitive capacity in the process, which should
somehow be the point of automation I believe. The details about the usage
are described in the project's README. After installing manually or via
<a href="https://aur.archlinux.org/packages/informant/">AUR</a> there is just an
initial read command and after that is basically set it and forget. Very
convenient.</p>
<h2 id="internal-workings">Internal workings</h2>
<p>Hooks are in fact usually quite simple in their nature and maybe I am
scared of them only because of the fear of the unknown. I mean, I have
learned to use git hooks already, and they feel simple now. Curiosity made
me look at the alpm
<a href="https://github.com/bradford-smith94/informant/blob/master/informant.hook">hook</a>
behind Informant.</p>
<pre data-lang="ini" style="background-color:#2b303b;color:#c0c5ce;" class="language-ini "><code class="language-ini" data-lang="ini"><span style="color:#b48ead;">[Trigger]
</span><span style="color:#bf616a;">Operation </span><span>= Install
</span><span style="color:#bf616a;">Operation </span><span>= Upgrade
</span><span style="color:#bf616a;">Type </span><span>= Package
</span><span style="color:#bf616a;">Target </span><span>= *
</span><span>
</span><span style="color:#b48ead;">[Action]
</span><span style="color:#bf616a;">Description </span><span>= Checking Arch News </span><span style="color:#b48ead;">with</span><span> Informant ...
</span><span style="color:#bf616a;">When </span><span>= PreTransaction
</span><span style="color:#bf616a;">Exec </span><span>= /usr/bin/informant check
</span><span style="color:#b48ead;">AbortOnFail
</span></code></pre>
<p>Looking at the hook from this perspective, it does not look so mysterious
after all - just small, digestible, self-explanatory pieces. Of course,
digging deeper, the <code>/usr/bin/informant</code> is actually a Python script doing
the fetching and keeping track of already-read news, among other things.
But now that I see that pacman hooks aren't actually that scary, I might
find myself automating something regarding package management. I would
definitely love to hear some nice ideas!</p>
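<p>To see how little is needed for a hook of one's own, a hypothetical minimal hook that merely prints a reminder before every upgrade could look like this - the file name, location under <code>/etc/pacman.d/hooks/</code>, and the message are all made up for illustration:</p>

```ini
# /etc/pacman.d/hooks/news-reminder.hook (hypothetical)
[Trigger]
Operation = Upgrade
Type = Package
Target = *

[Action]
Description = Reminding about the Arch news ...
When = PreTransaction
Exec = /usr/bin/echo Remember to read https://archlinux.org/news/
```

<p>Without <code>AbortOnFail</code>, such a hook only nags and never blocks the transaction, which is the main difference from Informant's hook.</p>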
<p>This is a 26th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
How to verify integrity of OpenWRT files2021-04-04T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/how-verify-openwrt-integrity-files/<p>The
<a href="https://openwrt.org/docs/guide-user/security/release_signatures?rev=1562202650#verify_download_integrity">OpenWRT page</a>
about download integrity verification describes three steps for a
successful verification of downloaded files:</p>
<ol>
<li>Download the <code>sha256sum</code> and <code>sha256sum.asc</code> files</li>
<li>Check the signature with
<code>gpg --with-fingerprint --verify sha256sum.asc sha256sum</code>, ensure that
the GnuPG command reports a good signature and that the fingerprint
matches the ones listed on our
<a href="https://openwrt.org/docs/guide-user/security/signatures">fingerprints page</a>.</li>
<li>Download the firmware image into the same directory as the sha256sums
file and verify its checksum using the following command:
<code>sha256sum -c --ignore-missing sha256sums</code></li>
</ol>
<p>The page contained a <em>convenience script</em> in the past, but it was
removed due to being unsafe (check the wiki page history). The steps are left
for the user to follow and seem straightforward, but I believe a little
explanation won't hurt. I'll use the TP-Link MR200v1 as an example. The aim
is to write the commands in a way that can be automated using bash:</p>
<h2 id="download-files">Download files</h2>
<p>A simple <code>wget</code> can serve:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">url</span><span>="</span><span style="color:#a3be8c;">https://downloads.openwrt.org/snapshots/targets/ramips/mt7620</span><span>"
</span><span style="color:#bf616a;">sumsFile</span><span>="</span><span style="color:#a3be8c;">sha256sums</span><span>"
</span><span>
</span><span style="color:#bf616a;">wget </span><span>"$</span><span style="color:#bf616a;">url</span><span style="color:#a3be8c;">/</span><span>$</span><span style="color:#bf616a;">sumsFile</span><span>"
</span><span style="color:#bf616a;">wget </span><span>"$</span><span style="color:#bf616a;">url</span><span style="color:#a3be8c;">/</span><span>$</span><span style="color:#bf616a;">sumsFile</span><span style="color:#a3be8c;">.asc</span><span>"
</span></code></pre>
<p>Existing files may need to be deleted before calling <code>wget</code>, because
instead of overwriting it appends an incremental number to the end of a
filename (<code>sha256sums.1</code>, <code>sha256sums.2</code> and so on).</p>
<h2 id="verify-signature">Verify signature</h2>
<p>Verifying signature in an automated manner using bash is where I spent most
of my time. Simply running the offered command is not sufficient:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">gpg --with-fingerprint --verify</span><span> sha256sums.asc sha256sums
</span></code></pre>
<p>Without any previous work, it results into the following error:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>gpg: Can't check signature: No public key
</span></code></pre>
<p>There are numerous ways around the problem, the most automatic is to use
<code>--auto-key-retrieve</code> option:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">gpg --auto-key-retrieve --with-fingerprint --verify</span><span> sha256sums.asc
</span></code></pre>
<p>This option assumes that the keys are published on a key server. Also
note that gpg can assume the right filename if it matches the signature
file's name, so <code>sha256sums</code> can be omitted. The output should look similar
to this:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>gpg: assuming signed data in 'sha256sums'
</span><span>gpg: Signature made Sun 04 Apr 2021 03:07:20 CEST
</span><span>gpg: using RSA key 6D9278A33A9AB3146262DCECF93525A88B699029
</span><span>gpg: Good signature from "LEDE Build System (LEDE GnuPG key for unattended build jobs) <lede-adm@lists.infradead.org>" [unknown]
</span><span>gpg: WARNING: This key is not certified with a trusted signature!
</span><span>gpg: There is no indication that the signature belongs to the owner.
</span><span>Primary key fingerprint: 54CC 7430 7A2C 6DC9 CE61 8269 CD84 BCED 6264 71F1
</span><span> Subkey fingerprint: 6D92 78A3 3A9A B314 6262 DCEC F935 25A8 8B69 9029
</span></code></pre>
<p>Visually, focusing on the word <strong>Good</strong> is the minimal requirement of
the first part of the second step:</p>
<blockquote>
<p>ensure that the GnuPG command reports a good signature</p>
</blockquote>
<p>Manually navigating to the
<a href="https://openwrt.org/docs/guide-user/security/signatures">fingerprints page</a>
and doing a <strong>copy-search-paste</strong> in a browser for the actual
fingerprint - for instance <code>54CC 7430 7A2C 6DC9 CE61 8269 CD84 BCED 6264 71F1</code>
in this example - is required as the second part of the second step.</p>
<blockquote>
<p>that the fingerprint matches the ones listed on our fingerprints page</p>
</blockquote>
<h3 id="test-return-status">Test return status</h3>
<p>With a focus on automation, the fact that <code>gpg --verify</code> returns
0 if the verification was successful and 1 if not (which is a common
convention) means that testing the status of the
<a href="https://stackoverflow.com/a/5550280/1972509">previous command in bash with <code>$?</code> can be used here</a>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">gpg --auto-key-retrieve --with-fingerprint --verify</span><span> sha256sums.asc
</span><span>
</span><span style="color:#b48ead;">if </span><span style="color:#96b5b4;">[ </span><span>$</span><span style="color:#bf616a;">? -eq</span><span> 0 </span><span style="color:#96b5b4;">]</span><span>; </span><span style="color:#b48ead;">then
</span><span> </span><span style="color:#96b5b4;">echo </span><span>"</span><span style="color:#a3be8c;">SIGNATURE VERIFIED</span><span>"
</span><span style="color:#b48ead;">else
</span><span> </span><span style="color:#96b5b4;">echo </span><span>"</span><span style="color:#a3be8c;">SIGNATURE INVALID, the program will terminate</span><span>"
</span><span> </span><span style="color:#96b5b4;">exit</span><span> 1
</span><span style="color:#b48ead;">fi
</span></code></pre>
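<p>As a side note, since <code>if</code> evaluates a command's exit status directly, the <code>$?</code> test can also be folded into the condition. A self-contained sketch, with a stand-in function simulating the gpg call:</p>

```shell
# verify is a stand-in for the real gpg --verify invocation;
# here it simulates a successful check by returning exit status 0
verify() { return 0; }

if verify; then
  status="SIGNATURE VERIFIED"
else
  status="SIGNATURE INVALID, the program will terminate"
fi
echo "$status"
```

<p>With the real gpg command in place of <code>verify</code>, the behaviour is identical to the <code>$?</code> version.</p>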
<h3 id="fingerprint-parsing">Fingerprint parsing</h3>
<p>Depending on the GnuPG version, <code>gpg --verify</code> <strong>only prints</strong> the
output to the screen. To parse the output and do something useful with it
in a script, the
<a href="https://superuser.com/a/497971/440086">option <code>--status-fd 1</code> has to be added as well</a>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">gpg --status-fd</span><span> 1</span><span style="color:#bf616a;"> --auto-key-retrieve --with-fingerprint --verify</span><span> sha256sums.asc
</span></code></pre>
<p>The output becomes formatted differently now:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>[GNUPG:] NEWSIG
</span><span>[GNUPG:] KEY_CONSIDERED 54CC74307A2C6DC9CE618269CD84BCED626471F1 0
</span><span>[GNUPG:] SIG_ID 5XkMbTUScxodW5F+uzE34LmUBk8 2021-04-04 1617498440
</span><span>[GNUPG:] KEY_CONSIDERED 54CC74307A2C6DC9CE618269CD84BCED626471F1 0
</span><span>[GNUPG:] GOODSIG F93525A88B699029 LEDE Build System (LEDE GnuPG key for unattended build jobs) <lede-adm@lists.infradead.org>
</span><span>[GNUPG:] VALIDSIG 6D9278A33A9AB3146262DCECF93525A88B699029 2021-04-04 1617498440 0 4 0 1 10 00 54CC74307A2C6DC9CE618269CD84BCED626471F1
</span><span>[GNUPG:] KEY_CONSIDERED 54CC74307A2C6DC9CE618269CD84BCED626471F1 0
</span><span>[GNUPG:] KEY_CONSIDERED 54CC74307A2C6DC9CE618269CD84BCED626471F1 0
</span><span>[GNUPG:] TRUST_UNDEFINED 0 pgp
</span><span>[GNUPG:] VERIFICATION_COMPLIANCE_MODE 23
</span></code></pre>
<p>An example extracting the
<a href="https://stackoverflow.com/a/59979385/1972509">fingerprint of the primary key</a>
from the <code>KEY_CONSIDERED</code> line, where it is the third
space-separated field:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash">gpgOutput=$(gpg --status-fd 1 --auto-key-retrieve --with-fingerprint --verify sha256sums.asc)
fingerprint=$(echo "$gpgOutput" | grep -m1 KEY_CONSIDERED | cut -d ' ' -f 3)
echo "$fingerprint"
</code></pre>
<p>Checking that the fingerprint matches the published one is the trickiest
part, because it is only printed on the web page with custom formatting.
Also, it is not foolproof: should attackers get hold of the webserver where
the fingerprints are published, they could have substituted different ones
there. This requires some cognitive effort on the user's part to research
which sources can be trusted, but that is a topic for another day.</p>
<h3 id="fingerprint-formatting">Fingerprint formatting</h3>
<p>Note that some formatting of the fingerprint usually has to be done here
to match the published shape. For instance,
<a href="https://unix.stackexchange.com/a/5981/109352">adding a space every four characters</a>
and adding one more space every 25 characters, to match the user-readable
output of <code>gpg --verify</code>, can be
<a href="https://stackoverflow.com/a/12973694/1972509">done like this</a>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#65737e;"># formats fingerprint 54CC74307A2C6DC9CE618269CD84BCED626471F1
</span><span style="color:#65737e;"># to prettier form of 54CC 7430 7A2C 6DC9 CE61 8269 CD84 BCED 6264 71F1
</span><span style="color:#bf616a;">formatted</span><span>=$</span><span style="color:#a3be8c;">(</span><span style="color:#96b5b4;">echo </span><span>"$</span><span style="color:#bf616a;">fingerprint</span><span>" | </span><span style="color:#bf616a;">sed </span><span>'</span><span style="color:#a3be8c;">s/.\{4\}/& /g</span><span>' | </span><span style="color:#bf616a;">xargs </span><span>| </span><span style="color:#bf616a;">sed </span><span>'</span><span style="color:#a3be8c;">s/.\{25\}/& /g</span><span>'</span><span style="color:#a3be8c;">)
</span></code></pre>
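<p>Running that pipeline on the example fingerprint shows the resulting shape; note that because the 25th character is already a space, an extra space appears after the fifth group, mirroring gpg's own display:</p>

```shell
# format a raw 40-character fingerprint into gpg's human-readable shape
fingerprint="54CC74307A2C6DC9CE618269CD84BCED626471F1"
formatted=$(echo "$fingerprint" | sed 's/.\{4\}/& /g' | xargs | sed 's/.\{25\}/& /g')
echo "$formatted"
```
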
<p>An example checking the presence of a fingerprint on a web page,
<a href="https://askubuntu.com/a/537428/350681">assuming the fingerprints there are legit</a>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">curl -s </span><span>"$</span><span style="color:#bf616a;">fingerprintUrl</span><span>" | </span><span style="color:#bf616a;">grep -o </span><span>"$</span><span style="color:#bf616a;">formatted</span><span>"
</span></code></pre>
<p>The <code>grep</code> command also follows the return status convention, so the same
test with <code>$?</code> can be applied here, providing an automated solution.</p>
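<p>A minimal sketch of that status test, using a local string in place of the <code>curl</code> output so it runs offline (variable contents are illustrative):</p>

```shell
# grep exits with status 0 when the pattern is found, non-zero otherwise
formatted="54CC 7430 7A2C 6DC9 CE61 8269 CD84 BCED 6264 71F1"
page="Signing key fingerprint: $formatted"

printf '%s\n' "$page" | grep -q "$formatted"
if [ "$?" -eq 0 ]; then
    echo "fingerprint found"
else
    echo "fingerprint NOT found" >&2
    exit 1
fi
```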
<h2 id="checksum-verification">Checksum verification</h2>
<p>The third step. Now that the validity of the checksums is verified, they
can be used for the actual image file verification, meaning that all the
previous steps were just preparation. But with everything in place, the
steps are straightforward:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sumsFile</span><span>="</span><span style="color:#a3be8c;">sha256sums</span><span>"
</span><span style="color:#bf616a;">url</span><span>="</span><span style="color:#a3be8c;">https://downloads.openwrt.org/snapshots/targets/ramips/mt7620</span><span>"
</span><span style="color:#bf616a;">imageFile</span><span>="</span><span style="color:#a3be8c;">openwrt-imagebuilder-ramips-mt7620.Linux-x86_64.tar.xz</span><span>"
</span><span>
</span><span style="color:#bf616a;">wget </span><span>"$</span><span style="color:#bf616a;">url</span><span style="color:#a3be8c;">/</span><span>$</span><span style="color:#bf616a;">imageFile</span><span>"
</span><span style="color:#bf616a;">sha256sum -c --ignore-missing </span><span>"$</span><span style="color:#bf616a;">sumsFile</span><span>"
</span></code></pre>
<p>The output of <code>sha256sum</code> tells that the image file (or any other file
whose sha256 sum is stored in the sha256sums file) was downloaded intact.</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>openwrt-imagebuilder-ramips-mt7620.Linux-x86_64.tar.xz: OK
</span></code></pre>
<p>The same status test can be employed here as well to automate the script
further. Source files performing the steps are also available in the
<a href="https://github.com/peterbabic/openwrt-mr200/blob/master/make.sh">repository</a>.</p>
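<p>The automated variant of that status test can be sketched like this; the demo file stands in for the real downloads, so the snippet runs on its own:</p>

```shell
# demo setup: a file and its checksum list, standing in for the real downloads
echo "demo image contents" > image.bin
sha256sum image.bin > sha256sums

# sha256sum -c exits non-zero when any checked file fails verification
if ! sha256sum -c --ignore-missing sha256sums; then
    echo "checksum verification FAILED" >&2
    exit 1
fi
echo "all checksums OK"
```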
<p>This is the 25th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Things to do after installing ansible on Arch2021-04-03T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/things-do-installing-ansible-arch/<p>After installing the <code>ansible</code> package, I start by listing all the
available roles:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">ansible-galaxy</span><span> list
</span></code></pre>
<p>I am immediately greeted with two warnings:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span># /home/peterbabic/.ansible/roles
</span><span>[WARNING]: - the configured path /usr/share/ansible/roles does not exist.
</span><span>[WARNING]: - the configured path /etc/ansible/roles does not exist.
</span></code></pre>
<h2 id="roles-path">Roles path</h2>
<p>The reason for this is the default <code>roles_path</code> setting in ansible. It
is commented out in <code>/etc/ansible/ansible.cfg</code>, but it hints at the
default values:</p>
<pre data-lang="ini" style="background-color:#2b303b;color:#c0c5ce;" class="language-ini "><code class="language-ini" data-lang="ini"><span style="color:#b48ead;">[defaults]
</span><span style="color:#65737e;">#roles_path = ~/.ansible/roles:/usr/share/ansible/roles:/etc/ansible/roles
</span></code></pre>
<p>The easiest way to get rid of the errors would be either to uncomment
and change the value of <code>roles_path</code> to contain only <code>~/.ansible/roles</code>
or to create the missing directories, but there is a better solution. I
like to keep my
<a href="/blog/keep-gnome-shell-settings-dotfiles-yadm/">dotfiles versioned using yadm</a>
as a git repository. Fortunately, ansible follows the usual administrator
conventions and is well suited for this. It looks for configuration files
in the following order and uses the first one it encounters:</p>
<ol>
<li><code>ansible.cfg</code> in the current directory</li>
<li><code>.ansible.cfg</code> in the home directory</li>
<li><code>/etc/ansible/ansible.cfg</code> in the filesystem</li>
</ol>
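<p>The lookup order above can be sketched in shell, purely for illustration (the function name is made up):</p>

```shell
# print the first existing config file, mimicking ansible's lookup order
find_ansible_cfg() {
    for cfg in "./ansible.cfg" "$HOME/.ansible.cfg" "/etc/ansible/ansible.cfg"; do
        if [ -f "$cfg" ]; then
            printf '%s\n' "$cfg"
            return 0
        fi
    done
    return 1
}

find_ansible_cfg || echo "no config file found, built-in defaults apply"
```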
<p>Naturally, the 2nd option is the most suitable for dotfiles management.
To remove the warnings, create the file <code>~/.ansible.cfg</code>:</p>
<pre data-lang="ini" style="background-color:#2b303b;color:#c0c5ce;" class="language-ini "><code class="language-ini" data-lang="ini"><span style="color:#b48ead;">[defaults]
</span><span style="color:#bf616a;">roles_path </span><span>= ~/.ansible/roles
</span></code></pre>
<p>Now the warnings complaining that the paths do not exist will disappear.</p>
<h2 id="default-inventory-file">Default inventory file</h2>
<p>The next thing is to customize the default inventory file. Peeking back
into the <code>/etc/ansible/ansible.cfg</code> one can find the following:</p>
<pre data-lang="ini" style="background-color:#2b303b;color:#c0c5ce;" class="language-ini "><code class="language-ini" data-lang="ini"><span style="color:#b48ead;">[defaults]
</span><span style="color:#65737e;">#inventory = /etc/ansible/hosts
</span></code></pre>
<p>Similarly, the file <code>/etc/ansible/hosts</code> is also not very suitable for
version control. This can be changed to a local file with a setting
inside <code>~/.ansible.cfg</code>:</p>
<pre data-lang="ini" style="background-color:#2b303b;color:#c0c5ce;" class="language-ini "><code class="language-ini" data-lang="ini"><span style="color:#b48ead;">[defaults]
</span><span style="color:#bf616a;">inventory </span><span>= ~/.ansible_hosts
</span></code></pre>
<p>Depending on the preference, it is possible to use any arbitrary path
for the inventory file inside the home directory, for instance
<code>~/.ansible/hosts</code>. I decided against locating the inventory file inside
the <code>~/.ansible/</code> directory for the following reasons:</p>
<ol>
<li>There are currently only auto-generated files inside the <code>~/.ansible/</code>
directory</li>
<li>The <code>ansible.cfg</code> and <code>hosts</code> file reside side by side in
<code>/etc/ansible/</code> by default</li>
</ol>
<p>Putting <code>~/.ansible.cfg</code> and <code>~/.ansible_hosts</code> side by side into a home
folder currently feels the most natural to me.</p>
<h2 id="vim-settings">Vim settings</h2>
<p>I am using vim for my editing; ansible is no exception. Installing the
<a href="https://github.com/pearofducks/ansible-vim">ansible-vim</a> plugin by any
preferred method enables syntax highlighting, among other things. The
highlighting also works for ansible inventory files, but only files named
<code>hosts</code> are considered. Our file is called <code>.ansible_hosts</code>, not
<code>hosts</code>, so the right filetype will not be picked up and highlighting
will not work by default.</p>
<p>There are at least two ways to fix this:</p>
<ol>
<li>Add an <code>autocmd</code> setting into the <code>.vimrc</code> file</li>
<li>Add a <code>modeline</code> into the inventory file</li>
</ol>
<p>The first option is documented elsewhere, for instance in the plugin
README. I find the <code>vimrc</code> solution inferior because it adds unnecessary
lines to my vim config for one single file. I would also need to make
sure I update <code>.vimrc</code> should I change the <code>~/.ansible_hosts</code> location,
in addition to updating <code>~/.ansible.cfg</code>. I see the process as prone to
errors.</p>
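<p>For reference, the <code>autocmd</code> variant would be a single appended line, something like this (the exact rule is an assumption; the filetype name matches the one the plugin uses):</p>

```shell
# append a filetype rule for the inventory file to ~/.vimrc (illustrative)
echo 'autocmd BufRead,BufNewFile ~/.ansible_hosts set filetype=ansible_hosts' >> ~/.vimrc
```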
<p>The better approach, I believe, is to include the information about the
filetype in the inventory file itself for vim to pick up. This
information is called a
<a href="https://vim.fandom.com/wiki/Modeline_magic"><code>modeline</code></a>. It is usually
used to set the right indentation style for a file, but it helps here
equally well. Here's how the <code>modeline</code> at the end of the file looks with
the plugin installed:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span># vim: set ft=ansible_hosts:
</span></code></pre>
<p>Or this, if YAML is preferred; both work:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span># vim: set ft=yaml.ansible:
</span></code></pre>
<p>Equally, even without <code>ansible-vim</code> installed, this <code>modeline</code> will
make syntax highlighting work as if it were an <code>.ini</code> file:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span># vim: set ft=dosini:
</span></code></pre>
<p>As a note of interest, I am used to the notion that a <code>hosts</code> file
means <code>/etc/hosts</code>, which is something different entirely. The plugin is
smart enough not to highlight that file, but still, I find it confusing
that ansible names its inventory files the same way. Omitting the file
extension, because they tried to make sure multiple formats are
supported, is also not that common to me.</p>
<h2 id="note-on-dotfiles-management">Note on dotfiles management</h2>
<p>Do not forget to add both files to the dotfiles manager. I have seen
fellow users promoting <code>chezmoi</code> lately, but I use <code>yadm</code> and I am happy
with it so far.</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">yadm</span><span> add </span><span style="color:#bf616a;">~</span><span>/.ansible.cfg
</span><span style="color:#bf616a;">yadm</span><span> add </span><span style="color:#bf616a;">~</span><span>/.ansible_hosts
</span><span style="color:#bf616a;">yadm</span><span> commit
</span><span style="color:#bf616a;">yadm</span><span> push
</span></code></pre>
<p>As a <strong>security note</strong>, I think an inventory file should <strong>not be public</strong>
in an exposed dotfiles repository and a different approach should be
considered, for example keeping the <code>hosts</code> file in a private
repository.</p>
<h2 id="summary">Summary</h2>
<p>After following these steps, it is possible to use ansible roles
without unnecessary warnings, with syntax highlighting enabled and
without losing track of changes in the files. The steps in order:</p>
<ol>
<li>Configure <code>ansible-galaxy</code> roles location</li>
<li>Use an inventory file stored in a home directory</li>
<li>Set-up syntax highlighting with <code>ansible-vim</code></li>
<li>Track the files in the dotfiles repository</li>
</ol>
<p>For completeness, here is an example <code>~/.ansible_hosts</code> with a vim
<code>modeline</code>:</p>
<pre data-lang="ini" style="background-color:#2b303b;color:#c0c5ce;" class="language-ini "><code class="language-ini" data-lang="ini"><span style="color:#b48ead;">[webserver]
</span><span style="color:#bf616a;">example</span><span>.com
</span><span>
</span><span style="color:#65737e;"># vim: set ft=ansible_hosts:
</span></code></pre>
<p>And the local ansible configuration file <code>~/.ansible.cfg</code>:</p>
<pre data-lang="ini" style="background-color:#2b303b;color:#c0c5ce;" class="language-ini "><code class="language-ini" data-lang="ini"><span style="color:#b48ead;">[defaults]
</span><span style="color:#bf616a;">roles_path </span><span>= ~/.ansible/roles
</span><span style="color:#bf616a;">inventory </span><span>= ~/.ansible_hosts
</span></code></pre>
<p>Note that the <code>roles_path</code> represents a <em>directory</em> while the <code>inventory</code>
represents a <em>file</em>.</p>
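<p>In shell terms, creating both ahead of time amounts to nothing more than:</p>

```shell
mkdir -p ~/.ansible/roles   # roles_path points to a directory
touch ~/.ansible_hosts      # the inventory setting points to a file
```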
<p>This is the 24th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://spacelift.io/blog/ansible-roles">https://spacelift.io/blog/ansible-roles</a></li>
</ul>
Status update April 20212021-04-02T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/status-update-april-2021/<p>Feeling a little overwhelmed lately. There are many projects and
activities I have started and none seem to be anywhere near finished. I
pushed myself to finish <a href="https://github.com/peterbabic/triangles.fun">triangles.fun</a>
so I have one less thing to worry about, but too many people experienced
the top row cutoff. This is happening because of the hacky approach I
used, combining Tailwind's breakpoints with a CSS scale transform. I will
need to fix this once more.</p>
<p>There are other problems and tasks demanding my time and attention:</p>
<ul>
<li>Freelance work, I am thankful for this</li>
<li>Trying to set up a recently purchased Contabo VPS by learning <code>ansible</code></li>
<li>Because of the previous point, I still have not installed Peertube there</li>
<li>There is no official way to set up Gitea with Drone on the same VPS</li>
<li>Sapper, currently powering this blog, doesn't go well with TailwindCSS</li>
<li>SvelteKit, which could replace Sapper and enable TailwindCSS, is still
in beta</li>
<li>Pleroma updates have been very painful for me since the beginning</li>
<li>My industrial automation course with beautiful SVG images is completely
on hold again</li>
</ul>
<p>Hopefully there will be some major advancements in at least some of
these areas soon. I believe once a single one of them starts to give,
others will follow, but maybe I am too optimistic.</p>
<p>This is the 23rd post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
The most useful computer mouse2021-04-01T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/the-most-useful-computer-mouse/<p>It looks like I use this as a daily log and I think I like it. Today's
contribution is the Logitech MX Master 3.</p>
<p>Before I purchased this mouse, I had been using a Dell WM524. It was my
first Bluetooth mouse and I have been using it far longer than the
manufacturer intended. What I mean is, I replaced its mechanical button
by de-soldering it and soldering in a new one, because the click became
unreliable. I tend to do this until the encoder in the scroll wheel
becomes unreliable. That is the signal to throw the mouse away, because I
have not found a way to replace the encoder yet.</p>
<p>With the WM524, the encoder is still working. I really enjoyed its
Bluetooth nature, as I do not really like mouse dongles. They block a USB
port and also prevent the notebook from being placed in the bag
gracefully. A Bluetooth mouse uses no dongle. But the mouse has other
problems. It is quite portable, but this means it is far less ergonomic.
What is worse, if you need to dual-boot for any reason, it becomes a
nightmare with a pure Bluetooth mouse. There are ways to make the
Bluetooth pairing work with both operating systems, but they are all
quite hacky, at least as far as I can tell. The problem is, once the
mouse is paired, the OS stores data about the mouse and the mouse stores
pairing data about the OS. If you want to use the mouse on multiple
systems, you have to overcome this by unifying either the data stored in
the mouse or, what is usually simpler, the OS data.</p>
<p>The WM524 is special in the sense that it has a <em>control</em> button on both
its sides. This is quite different from most other mice on the market,
which have two buttons around the thumb area that, unless remapped, serve
as <em>forward</em> and <em>backward</em> navigation buttons (their placement is also
not standardized, so some vendors switch the two, causing havoc).</p>
<h2 id="a-hybrid-approach">A hybrid approach</h2>
<p>The Logitech MX Master 3 is offering a different, hybrid approach to
connectivity. It offers not a single, but <em>two</em> Bluetooth channels. This
makes dual-booting a breeze. It requires just pushing a small button on the
bottom of the mouse, no hacks required.</p>
<p>What's more, it offers a third channel, that is mapped to the USB dongle.
This is a special USB dongle called <em>Unifying Receiver</em> as it can handle
multiple Logitech devices, for instance a keyboard and a mouse, so no need
for multiple dongles for multiple wireless gadgets.</p>
<p>I know I have already stated that I am not a fan of dongles, but this
is a different scenario entirely. I am not a fan of mice that rely
<em>only</em> on the dongle or <em>only</em> on Bluetooth. The MX Master 3 (and also
previous models) can be connected by <em>either</em> of the two, also providing
the possibility to pair not one but two different computers (or operating
systems on the same computer, for that matter) in the same package.</p>
<p>The dongle here shines in two scenarios. First, I own a notebook dock.
It is a basic dock for ThinkPad notebooks. But no matter the dock type,
its purpose is to stay on the desk. The dock is a great place for the
dongle. Every time I undock the notebook, I place it in the bag.
Naturally, I take the mouse with it, so a little bit of button pushing is
just a habit, and the dongle does not pose a hindrance for the bag,
because it stays plugged into the dock. I use the Bluetooth mode on the
go. The reason to use the dongle on the dock, as opposed to using
Bluetooth all the time, is that it takes a few milliseconds for the mouse
to wake from sleep. It is not a big deal, but it is not present in the
dongle mode.</p>
<p>The second scenario where the dongle shines, and this is what inspired
me to write this post, is on devices that have no easy way to set up
Bluetooth. These are usually Raspberry Pi type devices, where unless you
are really a command line ninja, you need a USB mouse to configure the
Bluetooth (provided the device even has Bluetooth and that a desktop
environment with Bluetooth functionality is enabled). Today, I had to use
a mouse for a few minutes on the industrial version of the Raspberry Pi
I use, the Revolution Pi (or RevPi for short), and all it took was to
move the dongle from the dock to the RevPi. The notebook has a touchpad
for emergencies, so I am really glad I do not need any other mouse for
situations like these. I remember I used to carry a USB mouse (either a
dongle or a cable version) around. Had I only had the WM524, I would
have been almost screwed.</p>
<p>There are many other nice features of the MX Master 3 that I like, which
I will not go into today, as I really wanted to focus on the connectivity
side of things. It is a little bit pricey, but I do not regret a single
penny. It is simply a great product that makes my life easier daily.</p>
<p>This is the 22nd post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
VPS opinion: Contabo2021-03-31T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/vps-opinion-contabo/<p>Linode has been my VPS of choice for at least 8 years now. Back when
I started purchasing virtual server instances there weren't as many
choices as there are today. Today, it looks like everyone who does
anything remotely associated with computers offers Virtual Private
Servers (VPS), or just <em>cloud</em> if they do not want to bother the customer
with technical mumbo-jumbo too much. Yeah, just call everything <em>cloud</em>,
it will definitely cause <em>less</em> confusion, whatever.</p>
<p>I mean, 8 years ago is not that long, but the industry moves ever
faster, and the rapid acceleration was started by social distancing and
the massive demand for digital content it brought. I have also had an
opportunity to manage some instances on Azure. Sure, it is a giant backed
by a huge company. But I did not like it. There were simply too many
switches and icons for my taste. Maybe there already was, or at least now
is, a command line interface for Azure that would allow me to spin up
storage, an instance, a container or anything else related, but I used
Azure three years ago and I remember I had to do many clicks for
everything, the classic Microsoft-esque way. Also, the Windows Subsystem
for Linux (WSL) was not considered a stable release back then, so there
weren't too many opportunities for Microsoft to promote CLI tools. As a
side note, the stable release was announced in 2019 as WSL 2.</p>
<p>The other aspect of Azure is the price. It is not cheap at all, quite
the opposite. Sure, there are customers that are fixated on the biggest
vendors on the market, be it software or hardware. I have had a natural
aversion to Microsoft for a very long time, so my view on the subject is
definitely skewed really badly. Take this with a (rather big) grain of
salt. My opinion of Azure persists as a product that many use but not
many enjoy using. Still, if you are in the game for the money, it is
definitely one of the worthwhile investments.</p>
<h2 id="enter-contabo">Enter Contabo</h2>
<p>I stumbled upon the Contabo VPS provider on the
<a href="https://wiki.archlinux.org/index.php/Arch_Linux_on_a_VPS#Providers_that_offer_Arch_Linux">Arch wiki VPS page</a>,
which mentions it as the <strong>only place to get 400GB for 6 EUR</strong>, a
statement that immediately caught my attention. The reasons were twofold.
First, I am running Linode for personal projects and the 5 EUR Linode
offers 25GB of storage. It is not too little, but also nothing to brag
about. Also, storage is the limit I keep hitting there the most, not the
RAM or CPU power. Secondly, I am considering running a
<a href="https://github.com/Chocobozzz/PeerTube">Peertube</a> instance, and videos
take a considerable amount of storage.</p>
<p>My experience with Contabo so far is quite mixed. From the technical
point of view, spinning up an Arch instance (I use Arch, btw) really took
just a few clicks during the ordering phase. The admin interface that
greets me is a little bit rusty and does not provide many features, but
everything important is present, including multi-factor authentication
(2FA). It reminds me of Namesilo, the domain registrar.</p>
<p>The pricing has a quirk, clearly presented on the ordering page, but
present nonetheless. I am required to pay a setup fee roughly equivalent
to the monthly running fee. This makes the experience quite different
from Azure or Linode, where you can spin up new instances on demand
without an initialization cost, which promotes experimenting. On Contabo,
one should either have a plan in mind or a deeper pocket.</p>
<p>The monthly prices are very welcome on the other hand. I am not sure how
there can be so much more RAM, CPU and storage available, and users on
<a href="https://www.reddit.com/r/webhosting/comments/i0x0sz/what_is_your_experience_with_contabo_theyre/">Reddit</a>
speculate that they could be overselling, because the math does not quite
add up. But the service is definitely working for hobby and non-critical
projects for many people.</p>
<p>The affiliate program they offer is something I cannot wrap my head
around. I have tried to join multiple times, but there is simply too much
work just to get enrolled. It requires filling in multiple forms, each
consisting of a plethora of steps and questions. For now, I have given
up. But the rewards they claim to offer for promoting them successfully
are still tingling somewhere in the back of my head, so I might give it a
try once more.</p>
<p>The last thing is customer service. I had an invoicing question: can I
fill in my VAT ID somewhere? As it turned out, it is not readily
possible. The option they offer is to create a separate business account
(it is possible to choose either a personal or a business account during
registration). This requires using a different email address. Also, every
email in the support thread is answered by a different person. In their
defense, the responses come back quite fast, even during evenings or
weekends, so not too many problems here.</p>
<p>Yet the fact that there is no possibility to add, change or remove a VAT
ID in the administration interface really bothers me, as it is quite
commonplace these days. For instance, Linode and Porkbun (another domain
registrar) have this option nicely integrated. Contabo replied that they
see this as a "change of contact", yet I do not really understand what
that means, so keep this detail in mind.</p>
<p>To defend them further, they offered me a way to switch to the business
account, one time only, by signing a paper. Not too convenient, but I
must say they really tried to be as helpful as possible. I would not try
to ask them for this option too often. The same probably holds for asking
for a refund. It is not an automated process there.</p>
<h2 id="conclusion">Conclusion</h2>
<p>I would recommend Contabo for non-critical projects. Saves some bucks every
month.</p>
<p>This is the 21st post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Why I voted for support rms letter2021-03-30T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/why-voted-for-dupport-rms/<p>Richard Matthew Stallman (rms) is an iconic figure. He returned to the
Free Software Foundation (FSF) board of directors this month. The act
created quite a stir in the community revolving around the free software
movement.</p>
<p>As with all my recent articles, this one is also subjective. For now, I
have found that producing an article a day is easier if they are somewhat
personal. Here's why I voted and why I voted the way I did.</p>
<h2 id="the-vote">The vote</h2>
<p>Very soon after rms returned to the board, a vote demanding his
departure was started on GitHub under the name <code>rms-open-letter</code>, which
was signed not only by individuals, but also by organizations such as the
Open Source Collective (the umbrella organization for Open Collective),
the GNOME Foundation, Creative Commons or Framasoft (the company
supporting PeerTube), to name a few. Clearly, people were concerned.</p>
<p>Soon after, the <code>rms-support-letter</code> was published on GitHub as well. At
the time of writing, supporters outnumber the individuals who are against
by a ratio of 3:2. But there are no small organizations signed under the
support letter, which means there are definitely no big organizations
signed under it either. This makes it hard to do a somewhat objective
comparison of the two, but that is not the point at all.</p>
<h2 id="mixed-reactions">Mixed reactions</h2>
<p>It would be hard for me to do a fair summary of Stallman's overall
contributions to society (or subtractions, for that matter), and
<a href="https://www.oreilly.com/openbook/freedom/">other people</a> did a better
job already. Asking people
<a href="https://babic.dev/notice/A5e2fNmFfZHFvTqB6G">around the Fediverse</a>
provided mixed reactions.</p>
<p>User matrixsasuke simply claimed he voted in support. Contrary to that,
user dmoonfire was against, due to previous personal interactions
regarding emacs maintenance that did not end in agreement on either side.
Although I do not use emacs, I do use tools developed under GNU daily,
yet I have had no personal interaction with rms so far. Maybe I have to
try harder. Personal interactions with him cannot be that hard, because
user neil claimed them to be the reason he signed the support letter.</p>
<h2 id="my-stance">My stance</h2>
<p>History shows that it is not that uncommon to leave an organization
while being at its core and later return. I did not pay too much
attention to what's happening around Stallman's persona, but I have used
(spelled properly) GNU/Linux since I was 8 or so, without giving much of
anything in return.</p>
<p>I too have decided to sign the support letter, probably out of
compassion. I am sure by now that I am not the most empathetic person on
the planet, but I do feel some pain when I believe someone is deeply
misunderstood, which is what I believe is happening here.</p>
<p>Maybe some time in the future someone digs this signature up on me, as
if I had left up some old Facebook post that should not have been
published in the first place. Yeah, people Google others when they apply
for work, for instance. I wanted to do a good thing, and although I do
not believe it will have any measurable impact on Stallman's life or the
movement or the community whatsoever, I wanted to feel like a hacker for
a bit. Yeah, that reminds me that I am looking forward to Steven Levy's
book.</p>
<p>This is the 20th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://sterling-archermedes.github.io/">https://sterling-archermedes.github.io/</a></li>
<li><a href="https://selamjie.medium.com/remove-richard-stallman-appendix-a-a7e41e784f88">https://selamjie.medium.com/remove-richard-stallman-appendix-a-a7e41e784f88</a></li>
<li><a href="https://r0ml.medium.com/free-software-an-idea-whose-time-has-passed-6570c1d8218a">https://r0ml.medium.com/free-software-an-idea-whose-time-has-passed-6570c1d8218a</a></li>
</ul>
Automotive chip famine events2021-03-29T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/automotive-chip-disruption-events/<p>For the country where I live, Slovakia, the automotive industry is a
pretty significant sector. Since 2007, Slovakia has been
<a href="https://en.wikipedia.org/wiki/List_of_countries_by_motor_vehicle_production">the world's largest producer of cars per capita</a>,
<a href="https://automagazin.sk/2014/01/27/priemysel-ktory-obisla-kriza/">amounting to 12% of the Slovakia's GDP</a>.</p>
<p>Before the COVID-19 pandemic started, my professional background was in
industrial automation. The company that employed me operated exclusively
on the local market. Although I am currently not focusing primarily on
automotive, I still keep an eye on the topic, because disruptions in the
industry have serious consequences for the economy of the country where
I live, which means they could also affect my future as a web developer /
DevOps engineer on the local market. Furthermore, I keep electronics as a
hobby, thus reading about chip producers is also part of what I do.</p>
<p>The events that unfolded recently had a profound effect on the global
supply chain. Specifically, automakers have difficulties sourcing
electronic chips for the cars they produce. My compilation of events that
affected the automotive chip shortage, or <em>chip famine</em>, so far:</p>
<ul>
<li><a href="https://europe.autonews.com/automakers/vw-blames-suppliers-microchip-shortages">Reduced chip supply</a>
due to demand for consumer electronics amid COVID-19 in January</li>
<li><a href="https://edition.cnn.com/2021/02/16/business/germany-border-checks-manufacturing/index.html">Border checks imposed on drivers</a>
to reduce COVID-19 spread in early February</li>
<li><a href="https://www.nbcnews.com/business/autos/chips-seating-foam-plastics-parts-shortages-continue-cripple-auto-industry-n1261773">Freezing weather in Texas</a>
in late February - early March</li>
<li><a href="https://asia.nikkei.com/Business/Tech/Semiconductors/Renesas-expects-bigger-damage-from-fire-at-its-chip-factory">Fire in Renesas factory</a>
damaging 17 machines in March</li>
<li><a href="https://www.ft.com/content/37d9dc66-e4ee-4629-b791-af4d043ff0ff">Suez canal obstruction in late March </a></li>
<li>Taiwan
<a href="https://www.abc.net.au/news/science/2021-03-26/computer-chips-what-the-global-shortage-means-for-you/100027500">water rationing in April</a></li>
</ul>
<p>I have tried to make sure that all of the above are relevant and somehow
affected the supply chain in Slovakia, although this is not extensive
research, and without detailed data it is next to impossible. The
references I have included point mostly to news channels, and some could
be behind a paywall, so not the best option. Hopefully things start to
get back to normal sooner rather than later.</p>
<h2 id="update-consumer-electronics">Update: consumer electronics</h2>
<p>As a side note, one of the major consumer electronics retail chains in
Slovakia, Nay.sk, announced that they suffer from a
<a href="https://e.dennikn.sk/2334716/nedostatok-notebookov-este-bude-pokracovat-nie-je-to-len-o-chybajucich-cipoch-hovori-sef-nakupu-v-nay/">lack of personal notebooks on their shelves</a>.
It looks like this all will have further implications.</p>
<p>This is a 19th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Hide blueman-applet in Gnome Shell2021-03-28T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/hide-blueman-applet-gnome-shell/<p>In my previous <a href="/blog/solutions-buggy-system-package/">post</a>, I have
described why I have installed XFCE alongside my daily driver, which
currently is Gnome Shell. XFCE required me to install and run <code>nm-applet</code>
and <code>blueman-applet</code> during startup. Gnome Shell does not need these
packages, as network and Bluetooth functionality is included.</p>
<p>The problem is that when <code>blueman-applet</code> is present within Gnome Shell and
<code>KStatusNotifierItem/AppIndicator Support</code> is enabled, Bluetooth icon
appears in the tray, which is undesirable.</p>
<p>To hide the Bluetooth icon from the tray, first
<a href="https://bbs.archlinux.org/viewtopic.php?id=210844">copy the config file</a>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">cp</span><span> /etc/xdg/autostart/blueman.desktop </span><span style="color:#bf616a;">~</span><span>/.config/autostart
</span></code></pre>
<p>Then insert <code>NotShowIn=GNOME;</code>
<a href="https://bugzilla.redhat.com/show_bug.cgi?id=1432555">into that file</a>. Note
that the setting appears to be case-sensitive, so make sure that the
spelling is right.</p>
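For reference, a sketch of what the user's copy of the autostart file might look like after the edit. Only the relevant keys are shown here; the stock <code>blueman.desktop</code> contains more entries, so treat this as an illustration rather than the full file:

```ini
# ~/.config/autostart/blueman.desktop (abridged)
[Desktop Entry]
Type=Application
Name=Blueman Applet
Exec=blueman-applet
# The added key: hide this autostart entry in GNOME sessions
NotShowIn=GNOME;
```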
<p>This is an 18th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Solutions to a buggy system package2021-03-27T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/solutions-buggy-system-package/<p>There was a bug in <code>mutter</code>, the default Gnome Shell compositor for
Wayland, that made Gnome Shell crash on certain click events (like every
~5 minutes of usage or so). The bug was fixed in <code>v3.38.4</code> via
<a href="https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1748">!1784</a>.</p>
<p>When I found out I could not use my system like this anymore, I basically
had these options until a fix was released:</p>
<ol>
<li>Use the proposed
<a href="https://gitlab.gnome.org/GNOME/mutter/-/issues/1668#note_1046826">workaround</a></li>
<li>Downgrade <code>mutter</code> and related packages</li>
<li>Restoring a snapshot</li>
<li>Switch to different desktop environment</li>
</ol>
<p>Let's break down all the steps to smaller pieces.</p>
<h2 id="using-a-workaround">Using a workaround</h2>
<p>The proposed workaround was to edit a CSS file I could not immediately find
on my system. If I dug deep enough, I could probably make it work. The
real problem with workarounds is documenting them.</p>
<p>Suppose I just did it and then forgot about it. There could be a package in
the future that would interfere with it and it would be very hard to debug.
The most basic solution is to at least put the file into version control.</p>
<p>This file would be located in the <code>$HOME</code> repository, so it would be
included among the dotfiles. I have already written a
<a href="/blog/keep-gnome-shell-settings-dotfiles-yadm/">post about my</a> dotfiles
management system. This would at least document the change, but I would
still need to remember that the change was made and revert it during
a system upgrade. It would be possible to automate this with a system
similar to <a href="https://github.com/bradford-smith94/informant">informant</a>,
utilizing pacman hooks to prevent an action (a system upgrade) unless another
action (removing the workaround) was performed.</p>
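Such a pacman hook could be sketched roughly as below. The file name, flag path and description are made up for illustration; the idea is that with <code>AbortOnFail</code>, a non-zero exit from <code>Exec</code> aborts the transaction while the workaround flag is still present:

```ini
# /etc/pacman.d/hooks/mutter-workaround.hook (hypothetical name)
[Trigger]
Operation = Upgrade
Type = Package
Target = mutter

[Action]
Description = Abort the upgrade while the CSS workaround is still applied
When = PreTransaction
Exec = /usr/bin/sh -c 'test ! -e /etc/mutter-css-workaround.flag'
AbortOnFail
```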
<h2 id="downgrading-related-packages">Downgrading related packages</h2>
<p>Downgrading is usually easy; for instance, in Arch Linux there is a
<a href="https://aur.archlinux.org/packages/downgrader-git/">downgrader-git</a> AUR
package for this purpose. Things like a window manager and a compositor
library, which <code>mutter</code> is, are usually more tightly coupled to the system
and require precise version matching. I did not test this, but
downgrading could certainly have worked well here.</p>
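For reference, a manual downgrade from pacman's local package cache might look like this. The version string is only an example; the exact cached file name will differ, and <code>IgnorePkg</code> keeps the package from being upgraded right back:

```bash
# Reinstall a previously cached version of mutter (example file name):
sudo pacman -U /var/cache/pacman/pkg/mutter-3.38.3-1-x86_64.pkg.tar.zst

# Then pin it in /etc/pacman.conf until the fix is released:
#   IgnorePkg = mutter
```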
<h2 id="using-a-snapshot">Using a snapshot</h2>
<p>Snapshotting is a technique where all files are "frozen" in time and can be
collectively recalled, should the demand arise. It is worth creating a
snapshot before a system upgrade. If there are packages that contain bugs,
restoring the system to that snapshot makes it stable again. A time
machine of sorts. After restoring, all system packages can be upgraded
again, with the exception of the buggy package.</p>
<p>I make an <code>rsync</code> snapshot of my system weekly to a Raspberry Pi, via a
cron job that snapshots all my systems. Unfortunately, or
rather fortunately, I have not been forced to restore one yet. I should also
consider setting up a local snapshotting mechanism. But anything is
better than nothing.</p>
<h2 id="switching-to-a-different-package">Switching to a different package</h2>
<p>The last solution I came up with is switching to a different software
package altogether. It is usually not the best option, as there are no
identical software packages with a different name - otherwise, what would be
the point? This means the feature sets of two distinct packages are
different, even when the task they perform is identical.</p>
<p>Due to the open-source nature of Linux on the desktop, there is a huge pile of
desktop environments. The phenomenon is called
<a href="https://en.wikipedia.org/wiki/Criticism_of_desktop_Linux#Choice_and_fragmentation">fragmentation</a>.
Getting into the advantages and disadvantages of this outcome could fill an
entire book, so I have left that discussion out for now. Focusing on the
positive side, I could choose a different package (or a package
<a href="https://wiki.archlinux.org/index.php/Meta_package_and_package_group#Groups">group</a>,
to be more precise) and get a similar result.</p>
<p>This is what I did. I have installed XFCE alongside Gnome Shell. It
requires more packages to be present on the system, so the updates are also
larger. I have used XFCE on my previous laptops, for instance on a
ThinkPad T400, which I still keep despite its age (it is hard to
destroy). As a side note, it is worth having two working desktop
environments set up, with their configuration dotfiles version controlled. If
there is another bug of a similar nature, it should not be hard to fall
back on the other one.</p>
<p>This is a 17th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Inverting colors helps Tesseract2021-03-26T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/inverting-colors-helps-tesseract/<p>I have been doing some work with OCR automation using Tesseract and
discovered that it is really helpful to invert the image before doing the
character recognition, especially on black surfaces with laser-engraved
characters. This makes sense, since the inversion turns the black
background white, the kind of input Tesseract handles best. Thought it
might be worth sharing.</p>
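The inversion step can be sketched with Pillow (assumed to be installed); the actual Tesseract invocation is only indicated in a comment, and the file names are illustrative:

```python
# Invert an image before OCR. Tesseract generally performs better with
# dark text on a light background, so light engraving on a black surface
# benefits from inversion. Pillow is assumed to be installed.
from PIL import Image, ImageOps

def prepare_for_ocr(path: str) -> Image.Image:
    img = Image.open(path).convert("L")   # to grayscale
    return ImageOps.invert(img)           # black background -> white

# inverted = prepare_for_ocr("engraved_label.png")
# inverted.save("inverted.png")  # then e.g.: tesseract inverted.png out
```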
<p>As a side note, this is my 16th post in a row. I did not feel like writing
too much today, so I decided to at least show up. Keep up!</p>
<p>This is a 16th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Rules in the Fediverse2021-03-25T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/rules-in-the-fediverse/<p>There are many ways people communicate and share ideas. Or memes. Or anime
girls. No matter what you are a fan of, there is some place in the Internet
for it. I am almost sure.</p>
<p>What is more interesting is the fact that, due to the abundance of software
contributions, it is becoming easier to start a place on the Internet for
people to talk about a given niche or, even more importantly, to communicate
under a clear set of rules <em>they</em> all agree on.</p>
<p>Why are clear rules important here? Isn't the point of creating a custom
space to avoid rules? That is the point - avoiding the rules enforced
by the <em>other</em> guys. But rules for communication are still important; they
keep people adhering to a given standard, generally gluing them together.</p>
<h2 id="fediverse-instance-rules">Fediverse instance rules</h2>
<p>Many Fediverse instances publish moderation rules for their members to
follow, like <strong>cat posts only</strong> or, more realistically, <strong>no extremist
content</strong>. This is similar to Reddit's subreddits and, speaking strictly
about the microblogging Fediverse platforms like Mastodon or Pleroma, it is
dissimilar to the mainstream player in this area, Twitter.</p>
<p>Wait, isn't Twitter's Terms of Service (ToS) the rules there? Well, no.
These are not moderation rules describing what Twitter users should or
should not post about. It is a binding legal contract. For Twitter, data
retention is the norm; it is what happens with the data afterwards that
makes this document so long. Many Fediverse instances also publish their
ToS, and they tend to be easier to read and understand, usually concerning
the data-retention policy. At least for now, describing whether your data
are even kept on the server for a period of time and not deleted is what is
discussed, and it stops there.</p>
<h2 id="unpredictable-rules">Unpredictable rules</h2>
<p>So the moderation rules and the ToS are the <em>predictable</em> rules. Since
Twitter allows everyone to publish everything, does it have any moderation
rules? Yes, it does. But they are <em>unpredictable</em>. You never know what
Twitter displays to whom, or even when, and it is not publicly described
anywhere. Managing that many users has to work in some way, and for
many, it is a way they enjoy, or at least accept.</p>
<p>Then there are others who like the predictability of the outcomes of their
posting habits. One of the solutions these people choose is to either join
a Fediverse instance or create their own.</p>
<h2 id="single-user-instances">Single-user instances</h2>
<p>It is less common with Mastodon due to its higher memory footprint, but
more common with lighter implementations, to be run as a single-user
instance. It means there could be no moderation rules nor Terms of
Service and it could still run just fine, due to the fact that instances
<em>federate</em> together, which means they talk to each other. This is where the
Fediverse got its name.</p>
<p>The ToS only apply to the members of the instance the user has joined.
However, what happens when a user from another instance regularly posts
content that does not follow the local moderation rules? Well, they might
not even be aware of the fact that they are posting something that breaks
someone else's rules. They would have to check every single instance's
rules and find the intersection.</p>
<p>This is not a problem; the administration of the instance can simply block
users or entire instances and the content won't be shown locally. It
requires manual work, but it is completely predictable.</p>
<p>It fascinates me that the existence of single-user instances proves
that communication can be effective even without enforcing ToS or
moderation rules on anyone - without even writing them. It feels very
simple to me.</p>
<p>The handle on my single-user instance is <a href="https://babic.dev/peter">@peter</a>,
feel free to federate with me.</p>
<p>This is a 15th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Accessing Gitea Postgres inside Docker2021-03-24T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/accessing-gitea-postgres-inside-docker/<p>The Gitea issue <a href="https://github.com/go-gitea/gitea/issues/5917">#5917</a>
discusses how to make multiple users <em>unwatch</em> a repository. It
inspired me to write the steps down, as the process was not entirely obvious to me.</p>
<p>There was a change introduced via PR
<a href="https://github.com/go-gitea/gitea/pull/5852">#5852</a> that added an
option <code>AUTO_WATCH_NEW_REPOS</code> to the Gitea config file, but its default
is <code>true</code>.</p>
<p>The consequence of this behavior is that unless you are running Gitea at
least at version <strong>1.8.0</strong>, where this config option was introduced,
and you have set <code>AUTO_WATCH_NEW_REPOS</code> to false beforehand,
creating a repository (presumably in an Organization) and assigning a team
to it makes all the users in that team watch the repository, which creates
a lot of noise for them.</p>
<p>This guide shows how to reduce this noise; it can be adapted for other
purposes that require raw SQL commands to be run on PostgreSQL inside
Docker.</p>
<ul>
<li>The guide assumes the <code>docker-compose.yml</code> file is identical to the
<a href="https://docs.gitea.io/en-us/install-with-docker/#postgresql-database">Gitea docs</a></li>
</ul>
<pre data-lang="diff" style="background-color:#2b303b;color:#c0c5ce;" class="language-diff "><code class="language-diff" data-lang="diff"><span>version: "3"
</span><span>
</span><span>networks:
</span><span> gitea:
</span><span> external: false
</span><span>
</span><span>services:
</span><span> server:
</span><span> image: gitea/gitea:1.13.3
</span><span> container_name: gitea
</span><span> environment:
</span><span> - USER_UID=1000
</span><span> - USER_GID=1000
</span><span style="color:#a3be8c;">+ - DB_TYPE=postgres
</span><span style="color:#a3be8c;">+ - DB_HOST=db:5432
</span><span style="color:#a3be8c;">+ - DB_NAME=gitea
</span><span style="color:#a3be8c;">+ - DB_USER=gitea
</span><span style="color:#a3be8c;">+ - DB_PASSWD=gitea
</span><span> restart: always
</span><span> networks:
</span><span> - gitea
</span><span> volumes:
</span><span> - ./gitea:/data
</span><span> - /etc/timezone:/etc/timezone:ro
</span><span> - /etc/localtime:/etc/localtime:ro
</span><span> ports:
</span><span> - "3000:3000"
</span><span> - "222:22"
</span><span style="color:#a3be8c;">+ depends_on:
</span><span style="color:#a3be8c;">+ - db
</span><span style="color:#a3be8c;">+
</span><span style="color:#a3be8c;">+ db:
</span><span style="color:#a3be8c;">+ image: postgres:9.6
</span><span style="color:#a3be8c;">+ restart: always
</span><span style="color:#a3be8c;">+ environment:
</span><span style="color:#a3be8c;">+ - POSTGRES_USER=gitea
</span><span style="color:#a3be8c;">+ - POSTGRES_PASSWORD=gitea
</span><span style="color:#a3be8c;">+ - POSTGRES_DB=gitea
</span><span style="color:#a3be8c;">+ networks:
</span><span style="color:#a3be8c;">+ - gitea
</span><span style="color:#a3be8c;">+ volumes:
</span><span style="color:#a3be8c;">+ - ./postgres:/var/lib/postgresql/data
</span></code></pre>
<blockquote>
<p>The following steps have to be modified if changes were made to the
highlighted lines, specifically the <em>host</em>, <em>user</em>, <em>password</em>
and <em>DB</em> values.</p>
</blockquote>
<ul>
<li>If the instance is not running already, start it (assuming all the other
configuration is done according to docs)</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">docker-compose</span><span> up</span><span style="color:#bf616a;"> -d
</span></code></pre>
<ul>
<li>Connect to <code>psql</code> inside a container and prompt postgres password from
above</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">docker-compose</span><span> run</span><span style="color:#bf616a;"> --rm</span><span> db psql</span><span style="color:#bf616a;"> -h</span><span> db</span><span style="color:#bf616a;"> -U</span><span> gitea gitea
</span></code></pre>
<p>The <a href="https://docs.docker.com/compose/reference/run/">command</a> could be a
little confusing, so here are placeholders</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">docker-compose</span><span> run</span><span style="color:#bf616a;"> --rm</span><span> SERVICE psql</span><span style="color:#bf616a;"> -h</span><span> HOST</span><span style="color:#bf616a;"> -U</span><span> USER DB
</span></code></pre>
<ul>
<li>Now you can use standard <code>psql</code> commands, use database <code>gitea</code></li>
</ul>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>\c gitea
</span></code></pre>
<p>Then for instance list all tables</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>\dt
</span></code></pre>
<ul>
<li>Look up repository id</li>
</ul>
<pre data-lang="sql" style="background-color:#2b303b;color:#c0c5ce;" class="language-sql "><code class="language-sql" data-lang="sql"><span style="color:#b48ead;">SELECT</span><span> id,name </span><span style="color:#b48ead;">FROM</span><span> repository </span><span style="color:#b48ead;">ORDER BY</span><span> name;
</span></code></pre>
<blockquote>
<p><strong>Disclaimer:</strong> following commands can lead to a LOSS OF DATA! Before
proceeding further, please make proper backup(s).</p>
</blockquote>
<ul>
<li>Remove all the watchers of the given repository, replacing <code>ID</code>
with the required value</li>
</ul>
<pre data-lang="sql" style="background-color:#2b303b;color:#c0c5ce;" class="language-sql "><code class="language-sql" data-lang="sql"><span style="color:#b48ead;">DELETE FROM</span><span> watch </span><span style="color:#b48ead;">WHERE</span><span> repo_id=ID;
</span></code></pre>
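As a sanity check, the number of affected rows can be counted with the same placeholder <code>ID</code> before and after the delete; after it, the count should be zero:

```sql
SELECT COUNT(*) FROM watch WHERE repo_id=ID;
```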
<p>Done!</p>
<p>This is a 14th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
I have published my first game2021-03-23T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/have-published-first-game/<p>Go play it at <a href="https://triangles.fun">https://triangles.fun</a></p>
<p>It's a relaxing 2D puzzle game, made with Svelte and TailwindCSS, as a PWA,
with the Nord theme.</p>
<h2 id="irl-inspiration">IRL inspiration</h2>
<p>The real-world version of this game is called <strong>Jumpy!</strong> or something
similar, and the picture I found in my archive is below.</p>
<p><img src="https://peterbabic.dev/blog/have-published-first-game/triangles-fun-irl.png" alt="The Jumpy! game I took an inspiration from" /></p>
<h2 id="reception">Reception</h2>
<p>I have made some announcements around the Internet on Reddit, Hackernews,
itch.io and Svelte's Discord channel. I have made some Toots on
<a href="https://babic.dev">https://babic.dev</a> server.</p>
<p>Apart from Reddit, most other channels provided almost no
responses, but I did not expect any. A few people in the Reddit thread asked
for the code, so I have polished the repository and made it public here:
<a href="https://github.com/peterbabic/triangles.fun">https://github.com/peterbabic/triangles.fun</a></p>
<p>Enjoy!</p>
<p>This is a 13th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Using Kanban board in Gitea2021-03-22T00:00:00+00:002023-08-07T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/using-kanban-board-gitea/<p>A <a href="https://www.kumospace.com/blog/kanban-board">Kanban board</a> is a tool to
manage workflow. In essence, the work is divided into columns that
represent teams with their resources. The goal is to move tasks across the
columns from one side to the other. When this happens to a task,
it usually means the task is finished.</p>
<p>The catch is that all columns have a limited height, meaning there is a
limited number of tasks that can occupy a given column or, in other words, a
given resource. For instance, if there are too many tasks in the column
<em>started</em> but not a single one in the column <em>testing</em>, a new task (for
instance, new feature development) cannot be started before at least one of
the tasks in the <em>started</em> column moves further. This process ensures
that continual progress is made.</p>
<p>Gitea has supported Kanban board functionality since version 1.13.0. It was
implemented with the PR
<a href="https://github.com/go-gitea/gitea/pull/8346">#8346</a>. Although a Kanban board
usually connects with DevOps and Agile development, it can be effectively
used to manage less esoteric cases, for instance a team of woodworkers, or
myself. I could possibly benefit greatly from such a board, because in
reality, I have a gazillion projects in a column <em>started</em> or <em>bought a
domain</em>, but almost none in <em>project is finished</em>.</p>
<h2 id="first-impressions">First impressions</h2>
<p>I have tested the Kanban board functionality in Gitea, which at the time of
writing sits at version 1.13.5. After a little bit of shy clicking
around, I got more comfortable with the way it is implemented. I made a
board that made me proud, with all the tasks nicely ordered in columns by their
category.</p>
<p>After taking some screenshots, I got embarrassed. Do not make the same
mistake as me: ordering tasks by grouping similar ones together. That is what tags
(or Labels, as they are called in Gitea) are for. Kanban board columns are
specifically designed for the stages a task is currently in.</p>
<p>If you are a little bit confused by all the terms used here, fear not,
you are not alone. In fact, there is quite a discrepancy among most of the
services developers use for this purpose. GitHub, GitLab, Trello and Gitea
all have different names for the visual components of their Kanban board
implementations. Yes, in Gitea it is called <em>Projects</em>.</p>
<p>User <a href="https://github.com/remram44">remram44</a> created a quite detailed issue,
<a href="https://github.com/go-gitea/gitea/issues/13802">#13802</a>. The issue
proposes some renaming to bring things more on par with the rest
of the established industry players.</p>
<p>I love Gitea, and I even kind of started to like the way the board is
implemented there the more I use it, but I also believe that the naming
should be more consistent. Please, Gitea, do not get back into the Arduino
shields, Raspberry Pi HATs, BeagleBone capes naming mess scenario again.</p>
<p>This is a 12th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Digital privacy as a new currency2021-03-21T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/digital-privacy-new-currency/<p>This morning I was briefly
<a href="https://babic.dev/notice/A5QoI8kMcMWKKgPSgC">a part of an interesting conversation</a>
started on the Fediverse microblogging server
<a href="https://fosstodon.org/">fosstodon.org</a> by the user kip. The thread started
with a privacy-oriented resolution kip made, stating that they were about to
take steps aimed at increasing their online privacy, particularly
<em>de-googling</em> (removing Google-related software from their devices, up to
the point of removing their user account entirely).</p>
<p>This is a bold statement that requires a huge amount of effort, especially
when using Google services for daily tasks. If you have been following
closely, you might be aware of the fact that I am also on the track of
reducing my dependency on Google services, or centralized third-party
services in general, and replacing them with decentralized open-source
services where possible or convenient enough.</p>
<p>But my pace towards reaching this goal, if I even achieve it in the
foreseeable future, is much slower. The reason is that there is a
trade-off between privacy and convenience. Centralized services provided by
tech giants usually do the exact opposite - trade the privacy of their
users for convenience. The more your phone "knows" about you, the
better it can handle questions with a context; for example, instructing the
smart device to <em>call the doctor</em> could be resolved properly if the device
knows about a particular disease the user suffers from.</p>
<p>The problems could surface in different ways, for instance by the device
advertising medicine that other people, like co-workers,
could be exposed to. This specific scenario is just an example, and the
privacy-related problems could be lesser or worse. The problem I see is the
unpredictability of the services, or the fact that users cannot know
beforehand what is happening with their data. The motives of the service
provider might not even be harmful, yet the unforeseen consequences could be
dire for individuals.</p>
<h2 id="trackers">Trackers</h2>
<p>User <a href="https://fosstodon.org/@yyp">@yyp</a> suggested a less direct
approach than removing all the Google-<em>related software</em>: making a
conscious effort and just blocking all the communication said software does
with its service provider instead. In other words, stopping the
<em>trackers</em>. For this purpose, they suggested
<a href="https://f-droid.org/en/packages/net.kollnig.missioncontrol.fdroid/">TrackerControl</a>.</p>
<blockquote>
<p>Trackers, in the context of mobile phones, are apps that monitor and
collect data about user behavior, in a process that is usually hidden and
ongoing.</p>
</blockquote>
<p>This solution made sense to me. What use are the trackers to the companies
that employ them if they cannot send the data from my device back? The
solution also seems easier to employ. Doing de-googling the <em>right</em> way is
not just removing some apps and deleting the account, but also replacing the
Android operating system on the phone with something not owned by a
for-profit company, for instance LineageOS. The reasoning behind this
might be that the user agreement allowing data collection might be forced
on the device user in order to even turn it on, with the actual account or
without it.</p>
<h2 id="consequences-of-using-trackercontrol">Consequences of using TrackerControl</h2>
<p>Installing the TrackerControl app was in fact much simpler than replacing
the whole operating system on the phone. I have installed it out of
curiosity. I have learned that it creates a local VPN to intercept the data
communication, limiting other apps' access to the outside network. This
solution was novel to me and I do not understand all the implications yet.</p>
<p>I had assumed that TrackerControl would ask me every time an app wanted
access, making me either allow or block it, simulating the
behavior of a firewall in interactive mode. I tried making some file
changes for Syncthing to pick up, but nothing happened. The files were
not synced and no notification appeared. What happened, however, was
that it was in <em>deny all</em> mode by default; I was expected to enable all the
apps I trust manually.</p>
<p>Probing the discussion thread from the beginning revealed another
hindrance. In order to prevent any data from leaking from the phone, it is
suggested to turn off the network before rebooting the device, as
TrackerControl starts later than the Google services.</p>
<p>Yet I became a bit skeptical. I could turn off the Wi-Fi network every time I
needed to reboot the phone, but turning off data access reliably requires
taking the SIM card out. With physical SIM cards it is still possible,
albeit quite impractical. With the industry shifting towards eSIM, this
trick might become harder to pull off in the future.</p>
<p>Another possible drawback of the TrackerControl app could be increased
battery usage. I did not come across any data yet, so this assumption needs
confirmation, but it is something to consider.</p>
<h2 id="is-privacy-so-important">Is privacy so important?</h2>
<p>I should be asking this question every time I want to increase convenience.
I believe that increasing privacy in the online space is harder than simply
maintaining it, by the same logic that keeping one's body weight is easier than
changing it in a desired way. To change something requires effort; to keep
something might just require a habit. A habit, properly developed, might
feel effortless.</p>
<p>Yet it is not easy to spot these convenience-for-privacy exchanges in real
life. Solutions like TrackerControl are more like patches than a
full-scale solution. Although I believe they can work well enough for the
purpose they are advertised for, a full solution requires a behavior change
in individuals as well as in corporations.</p>
<p>Could we come to a future where individuals with more privacy would be
living a significantly better life than the ones possessing less of it? Given
that we are already trading privacy for something else, could
digital privacy become a form of <em>currency</em>?</p>
<p>This is an 11th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Comparing my domain registrars2021-03-20T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/comparing-domain-registrars/<p>Over my life as a mostly hobby web developer, I have used multiple domain
registrars. I did not see this as a problem. Domains are usually paid
once a year, sometimes even less often, and many can be set to auto-renew,
so it is mostly set and forget.</p>
<p>The problems start to appear when accounting comes into play. The
financial side is simpler to manage if everything comes as one invoice for
a package of services. This usually has a drawback - many service
providers are good at their core thing and add other services on top of
that, usually at a premium.</p>
<p>So while getting most services, like domain registration, hosting and email,
from a single provider is easier on accounting, it is usually more
expensive than having separate providers for each respective service. There
is also manageability at play. Keeping track of different services across
different user interfaces and different support channels usually takes
more time than having it all "under one roof".</p>
<p>So having <em>the same</em> service (domain registration) split further across
multiple providers takes even more effort to manage. When email
forwarding comes into play, it can become a complete disaster, because
today it makes almost no sense to have a web application without some kind
of email contact available, leaving out contact forms (I really hate them,
but that is a different story). Email forwarding can be among the
first steps during web app idea validation, and unless you use yet another
separate provider like <a href="https://improvmx.com/">ImprovMX</a> for this purpose,
the domain registrar is usually the first choice, because most of them
forward emails for a domain registered with them for free.</p>
<p>Having multiple domain registrars usually boils down to the question
of price. Prices for Top Level Domains (TLDs) vary among the many
registrars available on the market, and it is possible to save a few bucks
by fishing for the cheapest one. Low prices are usually offered by
registrars to attract customers, who are then upsold other services.</p>
<p>As I have already noted, I believe it is currently better to have a single
provider for each service, the one that does it best. By this reasoning, I
have decided to transfer all my domains to one registrar to stay consistent
with this strategy. Here are the four registrars I have used recently and my
subjective opinions on them.</p>
<h2 id="websupport-sk"><a href="https://websupport.sk">Websupport.sk</a></h2>
<p>My oldest partner for web services; I have been using it since I was
literally a child. Their core service is web hosting, and they provide a
comprehensive range of services, comparable with other major web hosting
providers on the market, domain registration included.</p>
<p>Before I understood how it all works, it did not occur to me that web
hosting could be detached from the domain and, subsequently, that the email
service could be detached from the domain as well, so naturally I
ordered everything here.</p>
<p>Their support is top quality; I had every case resolved, or even technically
explained, over the years in a matter of minutes. The domain registration
prices are a little higher, and the selection of available TLDs is quite
limited. The unavailability of some domains was precisely what made me
start looking elsewhere.</p>
<h2 id="namesilo-com"><a href="https://namesilo.com">Namesilo.com</a></h2>
<p>Namesilo is a domain registrar with great prices. It is trusted and
recommended by many and it is not that mainstream. Their UI could be
better; it is quite dated. That does not mean it is buggy or anything, but
it definitely feels unloved. I only had a single domain registered with them
and everything worked flawlessly. I had no need to use any support channel,
so I cannot comment on it. Worth checking out when aiming for price and
stability.</p>
<h2 id="namecheap-com"><a href="https://namecheap.com">Namecheap.com</a></h2>
<p>A mainstream player in domain registration business. Probably does not need
to be described too much. I had a bunch of domains registered here. They
provide most TLDs that are available to buy and the price is very
reasonable. Their UI is quite modern, but at the same time quite heavy and
sometimes a little sluggish. I did not need any support channel here
either, so again, I cannot comment (which might be a good sign, you know).
Some people complain about the sheer amount of upsells offered by
Namecheap, so this bit might be a little discouraging.</p>
<h2 id="porkbun-com"><a href="https://porkbun.com">Porkbun.com</a></h2>
<p>This is the winner of this match, I am transferring everything here. It
took a few months of testing, but I have found they offer everything I
needed. The support by chat or by email is great. I had an issue with the
SMS gateway they were using: they are located in Canada and had to enable
sending SMS to Slovakia. Maybe I was their first customer from Slovakia who
decided to verify a phone number. It got resolved, however.</p>
<p>The UI is minimalistic and clean. The prices are so good that there is
almost no point in fishing. I hope they won't undercut themselves out of
the business, though. All the imaginable TLDs I looked for are available
here. They offer great knowledge base pages with advanced topics, such as
setting up an ANAME (sometimes called ALIAS) record, which is also a feature
I plan to use. There is a clean switch for a domain lock (preventing an
unauthorized transfer); with a different provider, I once had to ask support
for an unlock, which felt overly protective. Most or all of the advantages
mentioned in this paragraph are also offered by Namecheap, so keep that in
mind.</p>
<p>On top of that, they offer an API. <del>I did not have an opportunity to use
it yet</del>, but it definitely feels good from a developer's standpoint. It
kind of sends the message that "we know what you need and we are here for
you". Hopefully my life will be a little bit easier after all the transfers
to Porkbun, so wish me luck.</p>
<p><strong>Update:</strong> I have written bits of my experience using Porkbun API in
another <a href="/blog/wildcard-certificate-acme-sh/">post</a>.</p>
<p>This is a 10th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Netlify email forwarding problem2021-03-19T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/netlify-mail-forwardning-problem/<p>I have created a static page application and hosted it on Netlify. I did it
mostly for fun and as a learning experience. I was poking around and
tinkering with it, doing some bugfixing and polishing. With the day of the
release coming near, I started thinking about adding a contact option for
people to reach me.</p>
<p>Netlify does not host its own email service. It is perfectly alright not
to do exactly what a lot of other people are already doing, unless you can
improve it significantly. Since email is a delicate interwoven network at
best and a tangled mess at worst, I do not blame them in the slightest for
not doing so.</p>
<p>For this app I have bought a domain and my current registrar of choice
offers a service called mail forwarding. Mail forwarding basically allows
me to set up a one-way email address. One-way here means it allows
receiving emails (or rather relaying them to the proper mailbox), but does
not allow sending from the given address. There are exceptions, but for
most use cases, this explanation should suffice.</p>
<p>Email forwarding should be enough, because I do not expect huge traffic
there. People who come to see the app and play with it are even less likely
to send me an email about their experience.</p>
<p>Up until this point, I thought there were no problems in my planned setup.
But when I started preparing the mail forwarding address, the domain
registrar UI objected that I would have to switch back to their DNS, not
the one provided by Netlify.</p>
<p>Now I believe this is a common way to do things (deploy a static page and
add mail forwarding until there is a need for something better). There is
even a
<a href="https://answers.netlify.com/t/support-guide-how-can-i-receive-emails-on-my-domain/178">question</a>
on Netlify's forum stating that this is a <em>common question</em>. Currently, I
do not understand how the user would be able to access the app when the DNS
is pointed at the domain registrar. Hopefully the solution I come up with
will be the right one, not overcomplicating things needlessly. The second
static app deployment will be easier. It is always hardest the first time.</p>
<h2 id="solution">Solution</h2>
<p><strong>Update:</strong> after a little bit of experimenting, I found that the
solution to this problem is
<a href="https://docs.netlify.com/domains-https/custom-domains/configure-external-dns/#configure-an-apex-domain">thoroughly documented</a>
in the Netlify docs.</p>
<p>In short, instead of the domain using Netlify's DNS directly, the
registrar's DNS is used. Then a special type of DNS record is used to
simulate CNAME-style domain resolution at the apex. This special record is
usually labeled an ANAME record, CNAME flattening or, from my short
experience on the topic, most commonly an ALIAS record.</p>
<p>Setting an ALIAS record from the domain I wanted to use to the project's
Netlify subdomain, for example <code>eager-fermat-cdfd7a.netlify.app</code>,
made it possible for me to use the registrar's email forwarding service
without additional cost.</p>
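<p>As an illustration, the relevant records at the registrar could look roughly
like the following zone-file sketch. The domain, TTLs and mail server names
are placeholders, and the exact spelling of the ALIAS/ANAME record type
varies by provider:</p>
<pre data-lang="txt" style="background-color:#2b303b;color:#c0c5ce;" class="language-txt "><code class="language-txt" data-lang="txt">; the apex cannot hold a regular CNAME, so ALIAS/ANAME is flattened to A records
example.com.        300  IN  ALIAS  eager-fermat-cdfd7a.netlify.app.
; a www subdomain can use a plain CNAME
www.example.com.    300  IN  CNAME  eager-fermat-cdfd7a.netlify.app.
; MX records stay with the registrar, so email forwarding keeps working
example.com.        300  IN  MX     10 mx1.registrar-example.com.
</code></pre>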
<p>This is a 9th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Using mnemonics outside of my vim2021-03-18T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/mnemonics-outside-vim-setup/<p>Vim. Many swear by it, many hate it. It has a steep learning curve, but
once I got proficient, I really saw the benefits. As with everything muscle
memory related, it is now even harder for me to use a <em>normal</em> writing
mode, for instance on web sites. :wq</p>
<p>In vim I build a memory map of key presses needed to do the given task.
Some default key sequences are based on <em>mnemonics</em>. Mnemonics help
remember larger amounts of information based on short patterns. One of the
many common key sequences in vim is <code>diw</code>, which would <strong>d</strong>elete <strong>i</strong>nner
<strong>w</strong>ord. Since vim is very configurable, its users are encouraged to
build their own key sequences for the task they face repeatedly.</p>
<p>When creating a new combination, I also try to use mnemonics as much as
possible, usually by taking the starting letters of the words describing
the task. Most default vim key sequences start with a verb. I agree with
this approach, because it feels more natural. Examples:</p>
<blockquote>
<p><strong>delete</strong> inner word, <strong>goto</strong> type definition, <strong>list</strong> commands</p>
</blockquote>
<p>There are exceptions, of course. Sometimes, usually when the key sequence I
integrate into vim is centered around another external command, the
sequence starts with a noun, and there are also commands that have no verbs
in them:</p>
<blockquote>
<p><strong>git</strong> chunk undo, <strong>npm</strong> run dev, <strong>quick fix</strong></p>
</blockquote>
<p>I have discovered a nice integration based on the principles above. It is
centered around the FuZzy Finder - <a href="https://github.com/junegunn/fzf">fzf</a>
and its vim plugin, <a href="https://github.com/junegunn/fzf.vim">fzf-vim</a>. With
the plugin installed, I insert this line into my <code>.vimrc</code> file:</p>
<pre data-lang="vim" style="background-color:#2b303b;color:#c0c5ce;" class="language-vim "><code class="language-vim" data-lang="vim"><span style="color:#96b5b4;">nmap </span><span><silent> gf :<C-u>Files<CR>
</span></code></pre>
<blockquote>
<p><strong>Disclaimer:</strong> I am not a vimscript specialist, so there might be better
approaches. Use only when you know what you are doing.</p>
</blockquote>
<p>What the vimscript line above does is open a dialog in vim, where I can
choose a file in the current directory tree by matching its name fuzzily
(meaning <em>not exactly</em>). For me, it is one of the more effective ways
of navigating a project.</p>
<p>Next, I insert this line into my <code>.zshrc</code> file:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#96b5b4;">alias </span><span style="color:#8fa1b3;">gf</span><span>='</span><span style="color:#a3be8c;">nvim `fzf -i`</span><span>'
</span></code></pre>
<p>Together, these two lines create a mnemonic <code>gf</code> that I remember as
<strong>g</strong>oto <strong>f</strong>ile, that uses fuzzy filename matching and is ingrained in my
muscle memory. No matter if I am in the terminal or already inside vim,
typing <code>gf</code> opens the file I need very quickly. A very powerful combo!</p>
<p>This is a 8th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Syncthing can sync my entire phone2021-03-17T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/syncthing-can-sync-entire-phone/<p>Syncthing has totally won me over as a piece of software. Every day I find new ways it
helps increase my productivity. It started with a synchronized folder with
password files made by KeePassXC between
<a href="/blog/sync-keepass-passwords-between-computer-phone/">my phone and the notebook</a>.
Then I went on by synchronizing my photos folder. Apart from the
possibility to not use Google Photos for this purpose, it helped me to
finally kickstart my habit of
<a href="/blog/syncthing-helps-selling-used-stuff/">selling stuff</a> I do not need.</p>
<p>This setup worked for me for some time, until I realized it would be a good
idea to have some other always-on device to make sure I do not lose my data
in an accident. So I went on and
<a href="/blog/install-syncthing-archlinux-arm/">set up a Raspberry Pi</a> with file
versioning enabled to harden the whole thing. It is worth noting that my
current settings required every single synchronized folder to be manually
accepted on the Pi. This Pi is used for other backup tasks as well, so I can
definitely say it was worth the effort.</p>
<p>A few days ago, while doing a phone factory reset, I
<a href="/blog/lockdown-travel-sms-sync-phone-reset/">lost my SMS</a>. I decided this
should not happen to me again. So I went on and added another sync folder
for the SMS files.</p>
<p>The static site generator (SSG) behind this blog does not have static
assets entirely solved yet, but it is
<a href="/blog/svelte-kit-almost-beta/">getting there</a>. Until then, I decided to
insert at least some pictures into the blog posts. Writing the post
yesterday, I needed to insert a screenshot into it. Another sync folder. By
now, it might be becoming obvious where I am heading.</p>
<h2 id="synchronizing-markdown-files">Synchronizing Markdown files</h2>
<p>The breaking point came today, when I wanted to write down some stuff to
buy that I did not want to forget. Previously, I used Google Keep for this
purpose, mainly for two reasons. Firstly, I could access it swiftly either
from the computer (writing) or from the phone, when on the go (writing) or
when in the actual shop (reading, marking off). Secondly, Keep notes are
collaborative documents in real time, meaning my girlfriend could do
everything on the shopping list that I could. Very convenient.</p>
<p>But as she now has an iPhone again and we are both trying to reduce our
dependence on Google services, this option has become less accessible. Not
to mention I did not have it installed since the factory reset, as I
realized today.</p>
<p>This made me start looking for a replacement on F-Droid, and I found
<a href="https://f-droid.org/en/packages/net.gsantner.markor/">Markor</a>, a Markdown
editor that stores files locally and even mentions compatibility with
Syncthing in its description.</p>
<p>While setting up yet another folder and clicking <em>Accept</em> on both the
computer and the Pi, an idea struck me. An epiphany. Why not sync the
<em>entire</em> phone's internal storage? It took me just a few minutes to test
that.</p>
<p>As soon as I could confirm that it is in fact possible, I felt relieved.
Now any app that stores its files locally on the phone is my friend. I do
not need to do any additional setup for new folders. I am sure I will find
more unexpected positive scenarios stemming from this configuration in the
future.</p>
<p>This is a 7th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Most useful keyboards for Android2021-03-16T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/most-useful-keyboards-android/<p><img src="https://peterbabic.dev/blog/most-useful-keyboards-android/foss-keyboards-android.png" alt="Tested open source keyboards available on F-Droid" /></p>
<p>I do use my phone daily, and although I try to limit the time I spend using
it to the most productive minimum, the virtual keyboard is the core
experience. It would be very hard to use the phone without it. Being so
important, I decided to try some keyboards available on F-Droid. Here's the
list of keyboards I have tested with some highly opinionated remarks.</p>
<h2 id="simple-keyboard">Simple Keyboard</h2>
<p>As the name suggests, Simple Keyboard is really simple. Its description
specifically states that the keyboard currently does not, and probably never
will, have features such as emojis, GIFs, spellcheck or swipe typing. I do
use emojis when chatting, so this one did not work for me.</p>
<h2 id="hacker-s-keyboard">Hacker's Keyboard</h2>
<p>Hacker's Keyboard matches many other Free and Open Source Software (FOSS)
apps produced by the community: when most features people request or
contribute get pulled in, the app accumulates many settings to make use of
them, especially when the features compete with each other.</p>
<p>The keyboard claims to be well suited for SSH access because of the working
Tab/Ctrl/Esc keys, but I am not there yet. Unfortunately, the keyboard has
not been updated for approximately two years. While it might be feature
complete, for many people, including me, it feels safer to invest time in
learning software that is actively maintained. Definitely worth a try,
however.</p>
<h2 id="openboard">OpenBoard</h2>
<p>OpenBoard is based on Android Open Source Project (AOSP). It is similar to
the default keyboard for Android, Google Board or Gboard for short, but
without all the Big Data. Some users find it at ~45 MB quite
space-consuming. I believe that the size affects load time. The faster the
keyboard loads, the more pleasant the typing experience on the phone can
be. There are definitely some more lightweight solutions available if the
size or load speed is prioritized. But for a regular privacy-oriented FOSS
keyboard, this would be my choice. However, even though it claims to
support the Slovak language, there are no Slovak words in the suggestions,
which is strange.</p>
<h2 id="florisboard">FlorisBoard</h2>
<p>FlorisBoard aims to be an aesthetically pleasing, modern keyboard focused
on privacy. It is currently in the alpha stage, so it still has some rough
edges, but it already has a ton of features implemented, for instance
adapting its theme to the app in which it is used.</p>
<p>Among the features I have tested is the prioritization of hinted symbols
(long press on a letter). My mother tongue is Slovak and I would love to be
able to choose the order of the offered accented characters, especially in
the English language setting. Most of the keyboards I have tried order the
accented characters in a sub-optimal way for writing Slovak words.</p>
<p>Unfortunately, these settings in FlorisBoard, while a step in the right
direction, are not granular enough for me. What is worse, Slovak is not
among the supported languages on this board yet, so some characters are not
available at all. But I would like to get rid of changing languages
entirely; I find the need to do so on either a virtual or a physical
keyboard a hindrance anyway.</p>
<p>I am also a heavy user of KeePassDX, and the keyboard it provides (called
Magikeyboard) has a button to switch back to the regular keyboard with a
single touch (it also offers to switch back automatically in some cases).
Yet with most regular keyboards, changing between keyboards is time
consuming. Usually, it requires a long press on the space bar and then
choosing the keyboard from a list.</p>
<p>With FlorisBoard it is possible to switch to the last keyboard via swipe,
which makes switching almost instant. It looks like it is the only keyboard
I have tested that provides this feature, which is a very big plus. I am
definitely keeping an eye on it. When some bugs are fixed and the
suggestions bar with Slovak words become available, it could become my
daily driver.</p>
<h2 id="8vim">8vim</h2>
<p>8vim introduces a rather radical way of writing on the phone. It is based
on 8pen and it definitely accumulated some core fans over time. The only
thing I have found 8vim has common with vim editor, apart from the name
similarity, is the steep learning curve. While I found the experience it
provides quite novel, and even enjoyed it to some extent, I did not find the
keyboard stable enough to be my daily driver. It also lacks languages or
accented characters for that matter. I believe, given some more polishing,
it could mature into a very useful tool in the future, but I am not
adapting to it just yet.</p>
<h2 id="tessercube">TesserCube</h2>
<p>TesserCube is quite an outlier here, because its main purpose is OpenPGP
encrypted communication between two users who both use TesserCube. I find
the idea fascinating, as it, at least in theory, makes it possible for two
people to communicate over an unencrypted medium without a 3rd party
understanding the communication. I did not test it, and there would
obviously still be some metadata leaking, but it is a nice approach anyway.
The keyboard it provides is very similar to, or maybe even identical to (or
based on), Simple Keyboard, but with the added encryption. I would love to
see it working in real time.</p>
<h2 id="conclusion">Conclusion</h2>
<p>So far, I am sticking with Gboard, as it has by far the best multilingual
support. It is not the outcome I had hoped for, but since I am using stock
Android and Google for most of my searches anyway, ditching their keyboard
won't make a big change in the way they gather data.</p>
<p>OpenBoard feels very stable, claims to respect my privacy and has the
features I am currently used to, but the unfinished Slovak support is
discouraging. It could probably be fixed. FlorisBoard also seems like a
promising choice for the future, due to its unique features, especially
the synergy with KeePassDX's Magikeyboard.</p>
<p>Note that AnySoft Keyboard did not make it to the list because I could not
find a way to install it from F-Droid, but many swear by it, so it deserves
a mention. Links to the tested keyboards:</p>
<ol>
<li><a href="https://f-droid.org/en/packages/rkr.simplekeyboard.inputmethod">Simple Keyboard</a></li>
<li><a href="https://f-droid.org/en/packages/org.pocketworkstation.pckeyboard">Hacker's Keyboard</a></li>
<li><a href="https://f-droid.org/en/packages/org.dslul.openboard.inputmethod.latin">OpenBoard</a></li>
<li><a href="https://f-droid.org/en/packages/dev.patrickgold.florisboard/">FlorisBoard</a></li>
<li><a href="https://f-droid.org/en/packages/inc.flide.vi8/">8vim</a></li>
<li><a href="https://f-droid.org/en/packages/com.dimension.tessercube">TesserCube</a></li>
</ol>
<p>This is a 6th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
SvelteKit is almost beta2021-03-15T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/svelte-kit-almost-beta/<p>Now that the SvelteKit repository got unwillingly
<a href="https://www.reddit.com/r/sveltejs/comments/m337r7/sveltekit_repository_is_now_public_on_github/gqmvj9k/">turned public</a>
in an attempt to increase the available GitHub Actions supply for the
project, we all have a chance to peek at some of the discussions the dev
team currently has or has had in the recent past.</p>
<p>I was among the many who took up the habit of refreshing its
<a href="https://www.npmjs.com/package/@sveltejs/kit">npm page</a> in the hope of seeing
some promising release-related news during the long winter months. If you
did not pay attention to the Svelte scene during that time, here is a
short recap.</p>
<p>Rich Harris, the lead Svelte maintainer,
<a href="https://svelte.dev/blog/whats-the-deal-with-sveltekit/">announced</a> that
Svelte will morph so that developers can enjoy a more unified ecosystem.
The bulk of the development since then was done on top of
<a href="https://www.snowpack.dev/">snowpack</a> (a faster frontend build tool). But
it hit multiple roadblocks that were hard to overcome, so a few weeks ago
the team switched over to <a href="https://vitejs.dev/">vite</a> (next
generation frontend tooling) and the project has been moving rapidly since.
Here are my observations:</p>
<h2 id="the-ugly">The Ugly</h2>
<p>Service workers. Work in progress. I was not able to understand what is
going on at all. There is an
<a href="https://github.com/sveltejs/kit/issues/10">issue topic</a> and it is already
closed by a PR, so there should be some official way, but I did not dig
deep enough to uncover it. For now, I believe it is easier to wait for the
documentation to catch up.</p>
<h2 id="the-bad">The Bad</h2>
<p>The project I was running these experiments on had its tests written in
Cypress. SvelteKit does not yet ship with testing framework
recommendations (hey, it really does not ship at all yet). While it was
easy, and even encouraged, to use Cypress on Svelte-related projects, it
has not been so simple with the current build of SvelteKit.</p>
<p>The reason is that SvelteKit now ships with <code>"type": "module"</code> in its
<code>package.json</code> by default. This has various consequences, the biggest
being that the code can now <code>import</code> modules directly, rather than
<code>require</code> them. This is not something Cypress expects in its
configuration, so it complains.
<a href="https://github.com/cypress-io/cypress/issues/8090#issuecomment-722431756">Workarounds</a>
exist, but SvelteKit and Cypress are currently simply incompatible out of
the box.</p>
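<p>For context, the setting in question is a single field in
<code>package.json</code>, shown here as a minimal, hypothetical excerpt (the
project name is made up). With it present, Node.js treats plain
<code>.js</code> files as ES modules, which is exactly what the CommonJS-style
Cypress configuration of that time choked on:</p>
<pre data-lang="json" style="background-color:#2b303b;color:#c0c5ce;" class="language-json "><code class="language-json" data-lang="json">{
  "name": "my-sveltekit-app",
  "type": "module"
}
</code></pre>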
<h2 id="the-good">The Good</h2>
<p>Even though it is not officially released, we can already enjoy some fruits
of the hard work that went into it. The most obvious one for me is the
speed. The start is almost instant and the rebuild times are blazing.</p>
<p>Another thing I enjoy is the so-called
<a href="https://github.com/svelte-add/svelte-adders">adders</a>. With adders, you can
have Tailwind CSS inside your SvelteKit project in less than two minutes:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">npm</span><span> init svelte@next
</span><span style="color:#bf616a;">npx</span><span> apply svelte-add/postcss
</span><span style="color:#bf616a;">npx</span><span> apply svelte-add/tailwindcss
</span></code></pre>
<p>All my previous attempts to combine Svelte and Tailwind had some serious
issues. The closest I got was a setup where Tailwind, PostCSS with purge,
and IntelliSense in vim worked, but HMR did not. The refresh happened on
file save, but the larger the project, the longer I had to wait for a
manual refresh to get the updates on the screen. Now we can see that it
might become really easy to do some nice integration combos with the adders
functionality. Great work!</p>
<p>This is a 5th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Two Gitea clients for mobile2021-03-14T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/two-gitea-clients-mobile/<p>I am self-hosting a Gitea instance for all my coding work: live personal
projects, work projects, archived code. In fact, not only coding but most
writing in general, books and courses included. Yeah, these days it is
possible to <em>write</em> a course as well.</p>
<p>Browsing F-Droid lately, I stumbled upon the
<a href="https://github.com/git-touch/git-touch">git-touch</a> project. It caught my
attention because it is a client for Gitea as well. Up until this point, I
was only aware of <a href="https://codeberg.org/gitnex/GitNex">GitNex</a>, which I
like a lot. It feels lightweight, but at the same time it has many features
I care about.</p>
<p>I do not host my repositories on GitHub for various reasons, but I still
use my account there. Mostly for filing issues, but sometimes for
occasional contributions and also for watching releases. For these reasons,
I was also keeping the official GitHub mobile app on my phone. But
git-touch aims to be a one-stop-shop mobile app for most code hosting
solutions. So not only Gitea and GitHub, but GitLab, BitBucket, Gogs and
also Gitee (I was not aware of the latter, but it apparently serves
customers in Asia).</p>
<p>After installing git-touch, I was pleasantly surprised. I must admit it
made me feel like it is lightning fast! But the impression did not last
long. I found that it lacks too many features for a Gitea account to be
useful. For instance, as of writing, the release
<a href="https://github.com/git-touch/git-touch/releases/tag/v1.12.3">v1.12.3</a>
lacks a repository search function for Gitea accounts. I had to scroll to
the repository in question to see its details. Also, some repositories
could not load their code, even tiny ones.</p>
<p>To be fair, the git-touch UI for a GitHub account offered basically
everything I expected. Compared to Gitea, it had a toolbar on the bottom
with issues, search and many other tools. I could definitely imagine using
git-touch for my rather limited GitHub work.</p>
<p>As a side note, the maintainer of git-touch, user
<a href="https://github.com/pd4d10">pd4d10</a>, is a member of the organization
<a href="https://github.com/bytedance">ByteDance</a>. The organization is rather
famous, or infamous, depending on who you ask. It can thus be a positive or
a negative flag for you, so form your own opinion on this matter.</p>
<h2 id="conclusion">Conclusion</h2>
<p>A very quick glance at the git-touch mobile app, using it with Gitea and
GitHub accounts, made it clear to me that it is not ready for my Gitea
workflow yet. For now, I am sticking with the proven GitNex to be able to
quickly access my repositories on the go. I will keep an eye on git-touch,
as it already has the features I find useful for the GitHub account, so
they may appear on the Gitea side of the app soon, depending on the
roadmap, which I have not checked yet. At that point, it could become a
useful tool.</p>
<p>This is a 4th post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Negative margin and grid layout in CSS2021-03-13T00:00:00+00:002021-05-09T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/negative-margin-grid-layout-css/<p>Some time ago, if developers wanted to lay out a page, they generally had
two options: either use some hacky solution made of many floating elements
or use the HTML <code><table></code> element. Both approaches had their fair share of
problems. Those times are fortunately gone and today we can use modern
tools designed specifically for the job.</p>
<p>One such tool is the ability to use grid layout in CSS. With grid layout
supported in all modern browsers, there is little to no reason to use the
table element for building layouts anymore. The element still has its
use - to display tabular data. Using the right element for the purpose it
was designed for is especially important on the semantic web, but that is
not the topic of this post.</p>
<p>While designing for the web these days is a far more pleasant experience
than it was, let's say, 15 years ago, and new, better solutions become
available each passing day, the web is also evolving rapidly, bringing new
problems in.</p>
<p>Lately I was building a small game I had in mind. I had to use so-called
<code>hexagonal circle packing</code>, but with a gap between the circles for a more
pleasant look. I tested multiple approaches, and the one that worked best
for the constraints of the game and the code I had written was,
surprisingly, grid layout.</p>
<p>It was a surprise for me, because grid layout is at the core of a different
kind of circle packing - square packing. Square packing arranges all the
circles in rows and columns, and grid layout shines as a CSS solution for
this.</p>
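<p>For comparison, square packing is almost trivial with grid layout. The
following is only a rough sketch with made-up sizes and class names, not
the game's actual code:</p>
<pre data-lang="css" style="background-color:#2b303b;color:#c0c5ce;" class="language-css "><code class="language-css" data-lang="css">.board {
  display: grid;
  /* four circles per row, arranged in plain rows and columns */
  grid-template-columns: repeat(4, 80px);
  gap: 10px;
}

.circle {
  width: 80px;
  height: 80px;
  border-radius: 50%; /* turns each square grid item into a circle */
}
</code></pre>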
<p>In hexagonal circle packing, the circles are packed more tightly together.
If you tried to cut circular cookies from one sheet of pastry, hexagonal
packing would leave less waste than square packing. I was not trying to
solve the problem of minimizing waste, though, so why did I need hexagonal
circle packing?</p>
<p>It turns out that it is useful for displaying triangles made out of circles
as well, and this is the core of the game. It is possible to use other
solutions, but none of those I tested were good enough.</p>
<p>I have learned that it is possible to use negative margins to move the
elements around, even in the grid layout. You can see the outcome below:</p>
<p><img src="https://peterbabic.dev/blog/negative-margin-grid-layout-css/hexagonal-circle-packing-triangles.png" alt="Hexagonal circle packing with gaps allows to build a triangle" /></p>
<p>I added a red hexagon to illustrate where this packing got its name, but
it should be clear nevertheless. All the circles except circle 10 are
shifted to the left with the use of a negative margin.</p>
<pre data-lang="css" style="background-color:#2b303b;color:#c0c5ce;" class="language-css "><code class="language-css" data-lang="css"><span style="color:#8fa1b3;">.</span><span style="color:#d08770;">circle1 </span><span>{
</span><span> margin-left: </span><span style="color:#d08770;">-60px</span><span>;
</span><span>}
</span></code></pre>
<p>For reasons I did not understand, the same did not apply to shifting to the
right - this did not do anything:</p>
<pre data-lang="css" style="background-color:#2b303b;color:#c0c5ce;" class="language-css "><code class="language-css" data-lang="css"><span style="color:#8fa1b3;">.</span><span style="color:#d08770;">circle1 </span><span>{
</span><span> margin-right: </span><span style="color:#d08770;">-60px</span><span>;
</span><span>}
</span></code></pre>
<p>If you know why it might be the case, please let me know.</p>
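<p>One possible workaround, untested against the actual game code and offered
only as a sketch, is to shift with a transform instead of a margin. A
transform moves the element visually in either direction without affecting
its grid placement or its neighbors:</p>
<pre data-lang="css" style="background-color:#2b303b;color:#c0c5ce;" class="language-css "><code class="language-css" data-lang="css">.circle1 {
  /* shifts the rendered circle 60px to the right, leaving the grid cell
     and the surrounding circles untouched */
  transform: translateX(60px);
}
</code></pre>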
<p>This is a 3rd post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
A tale about organisational openness — 2021-03-12 (updated 2021-05-09)
https://peterbabic.dev/blog/tale-about-organisational-openness/<p>The meeting with a client this morning was postponed for about an hour. My
job there was to find a way to adjust a small single-purpose machine
that was used to continuously check the quality of a manufacturing process.
Every four hours, a batch of five devices under test was inserted into the
machine by a dedicated operator and the quality of the manufacturing
process was assessed.</p>
<p>At first glance, there seemed to be no problems with the machine or the
process whatsoever. The operator came with five freshly made pieces. She
inserted the first one into the machine and fiddled with the piece until
the machine turned the red indicator into a green one, signaling that the
first device conformed to the standards. The big red flag. Hidden
behind the nice, comforting green glow.</p>
<p>The operator then proceeded to write the value from the display into the
spreadsheet. Then she put the second piece into the machine. It took a
little longer, but the green light eventually came. Again, she proceeded
with the spreadsheet. This went on for all five devices.</p>
<h2 id="confirming-the-measurements">Confirming the measurements</h2>
<p>When she was finished, she left, and I turned to the client and asked
how we could verify that the values measured by the machine were indeed
inside the allowed ranges. By that time, I was almost sure that if the
machine was generating random numbers instead of actual precise
measurements, the final spreadsheet would look the same. We had to check
the numbers.</p>
<p>To my even deeper surprise, the process that should confirm or reject
the data seemed even more random. I was standing near a piece of equipment
that consisted of a very large magnifying glass with two thin black lines
crossing in its center, combined with manual controls and a digital
display showing the values. The values were the precise distance the cross
had traveled along the vertical and horizontal axes.</p>
<p>The operator handling this equipment pointed the cross at an arbitrary
point on the black silhouette, pressed one button and moved the cross to a
different, seemingly arbitrary point on that silhouette. He read the value
and repeated the process on a different part of the silhouette. Then, with
absolute certainty, he confirmed that the device dimensions conformed to
the required ranges.</p>
<p>To be fair, I was informed beforehand that the equipment needed
maintenance and I should only take it as an illustration. The lightbulb
that would reveal the surface features on the given silhouette was broken
and needed replacement. The operator admitted he had chosen the points
from memory. But that meant we had no way of making sure that the original
machine in question was making the right measurements and decisions based
on them.</p>
<h2 id="converting-pixels-to-millimeters">Converting pixels to millimeters</h2>
<p>Back at the original machine, the client was certain that the operator
had been doing the work the same way for multiple years and there had been
no complaints from their customer; only in recent weeks had it taken more
fiddling to get the green light, and could I do something about it?</p>
<p>Since I was already there, I wanted to do my best. The bulk of the
machine was an inspection camera. I was not terribly familiar with its
GUI, but I knew some basics. Here, I could again see the black silhouette
of the device under test. This time the silhouette was expected, however,
as inspection cameras work precisely this way. The light is shone past the
device under test onto the sensor and then the inspection tools are
applied to the result. The tools can then find where the pixel color
changes from white to black and act on it. Of course there are also
different ways inspection cameras work, but this is among the core
principles.</p>
<p>The camera was doing two measurements on the device, as already pointed
out. The measurements presented to the operator were in millimeters. Yet
the camera only knows pixels. The way the camera calculates distance in
millimeters is to take the number of pixels and multiply it by a
coefficient. The coefficient is calculated beforehand, based on the camera
resolution and the distance of the object from the sensor. After a few
minutes of looking around the GUI I was able to find this coefficient. But
there was a catch. The coefficients were slightly different for the two
measurements, even though they were taken at the same distance. Because
the coefficients did not match, I raised the possibility that one, or
maybe both, coefficients had been artificially adjusted to make the values
conform to the required ranges.</p>
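<p>To make the conversion concrete, here is a tiny sketch with made-up
numbers - neither the pixel count nor the coefficient comes from the
actual machine:</p>

```shell
# Hypothetical values: the camera counts 412 px between two edges and
# the stored calibration coefficient is 0.0215 mm per pixel.
pixels=412
coeff_mm_per_px=0.0215
# mm = pixels * coefficient
awk -v p="$pixels" -v c="$coeff_mm_per_px" 'BEGIN { printf "%.3f mm\n", p * c }'
```

<p>Nudging the coefficient by even a fraction of a percent silently shifts
every reported millimeter value, which is exactly why a mismatched pair of
coefficients looked suspicious.</p>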
<p>In the end, the machine was still glowing red more often than they were
used to. Did the machine start measuring the devices differently, or did
the devices change slightly? Without other working equipment that could
provide reliable measurements, we could not be certain of the cause. But
the fact that the coefficients were different made people think. Was the
machine really flawed? Was my thinking flawed? Or was there something
completely different at play? It was, however, too easy to adjust the
coefficient to make the measurements fall within the range more easily.
Anyone operating the machine could do it. I may never find out. Being a
potential contract worker, I was not presented with all the details. But
hopefully it will spark a more open discussion inside the organization
that will revisit the correctness of the given process. If they decide to
hide the issue, it could cause unnecessary problems in the future that
could be avoided.</p>
<p>This is a 2nd post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Lockdown travel, SMS sync and factory reset — 2021-03-11 (updated 2021-05-09)
https://peterbabic.dev/blog/lockdown-travel-sms-sync-phone-reset/<p>Today was a long day. Lockdown restrictions in our country currently do
not allow traveling between counties. Some exceptions apply, but none of
them for me. It is only allowed to travel longer distances between 01:00
and 05:00 in the morning. I had to wake up at 00:45 to make the ride
legally. Such an early hour is deer time and they made sure to be seen on
both sides of the road. Majestic animals though.</p>
<p>A few hours of sleep in a different bed. It was comfortable; I had not
slept here for a few months. No one had, the pillow was in the same
position I had arranged it in during my last visit. The room had gotten a
few new inhabitants with lots of legs and many eyes. I decided to let them
be. It took them far longer to build their nests than I will stay here.
Preventing unnecessary damage.</p>
<h2 id="morning">Morning</h2>
<p>Breakfast taught me that a contact grill can somehow spit molten butter
at my face; glad I wear glasses (except when a face mask diverts all the
moisture I breathe out at them and they become completely hazy). I did not
have time to study the manual of that contraption. Bet it is still kept
inside the original package. Doubt the manufacturer mentions this cool
feature there. Maybe it is written on the butter. I suppose I have been
warned.</p>
<p>Plugging my travel router in proved that a signal strength denoted by a
single little bar out of four was in fact possible. I had never seen it so
low before. Since it was still factory-reset OpenWRT, because I had
accidentally touched the reset button a few days before, it did not show
the signal on its LEDs. Deciding it was time to restore this functionality
and find a suitable place for this little companion, I learned that I
should have kept the stock firmware image, because it is needed for
reverting back out of OpenWRT. Panic hit me because I had deleted the file
a week earlier.</p>
<p>I downloaded what I suspected was the exact same firmware file I had
used before and started the process of reverting back to the stock router
firmware. My heart was pounding. I could already see the router bricked.
Why did I even go down that route? Couldn't I just enable the script that
made the signal LEDs glow nicely as before, find a place where at least
three of them were happily emitting photons and move on to the other tasks
at hand? No, of course not. I wanted to know if I had made an irreversible
mistake. I hadn't.</p>
<h2 id="afternoon">Afternoon</h2>
<p>The router rebooted just fine and the stock firmware of course knew how
to handle the LEDs. After changing the default password I
<a href="https://github.com/peterbabic/openwrt-mr200">documented</a> the
newfound knowledge, pushed the stock firmware into the repository so it
won't wander into the trash bin or beyond, and proactively built a custom
OpenWRT image with the LED handling script baked in. It will come in handy
again in the future. I plan to switch to OpenWRT again, but having a
WPS/reset button that does a factory reset when anyone so much as looks at
it is not a good idea. I have found out how to prevent this in software,
but then there is no possibility of a factory reset at all. Imagine
locking yourself out by losing a password (or an SSH key for that
matter).</p>
<p>With the help of the LEDs, the router found itself in a cozy place on
the shelf near the Bionicle-like dragon and a book by Antoine de
Saint-Exupéry, providing me with a stable connection. I could start doing
other things. Sharpen your tools daily.</p>
<p>Having a metered connection for almost two weeks prior did not allow me
to, for instance, update the computer. A rolling release distribution
whose name starts with the words <em>btw</em>, <em>I</em> and <em>use</em>
gave me a significant itch due to not updating. I am used to doing updates
in a batch, so naturally I started updating all the other devices I rely
on as well. Even though updating remote devices over a metered connection
does not incur additional charges, since they usually have their own
bandwidth, I like to start updates on the notebook, where I can catch some
unexpected problems before they arrive at more critical points in the
network.</p>
<p>Relying on a metered connection for so long prevented me from resetting
my phone as well. I had wanted to do it for some time, yet it is one of
the things that is easier delayed than executed. Apart from the fact that
re-downloading all the apps can be a significant data hog, I also had a
lot of podcasts downloaded in Spotify. Resetting the phone would get rid
of them as well. You cannot just back them up in a file somewhere; it is a
proprietary solution. Somehow, Spotify does not play ads when listening to
podcasts. It is still the simplest way for me to manage the podcasts I
listen to, although I have not looked at any other proprietary or
open-source solution yet, to be fair.</p>
<h2 id="evening">Evening</h2>
<p>Knowing I had my photos and passwords backed up reliably, I did not
consider that I could lose important data with the factory reset. Mistake.
Of course it could not go without a single hiccup. Soon after the phone
was freshly reset, I realized I had lost all the SMS. Multiple promises
will be broken due to this. How could it happen? I had made sure I could
access them with the browser using some Google Messages interface on the
notebook. I must admit I did not put much thought into it. My line of
thinking was like this: SMS were being handled by Google. They could be
accessed from multiple devices. Therefore, they were stored in the cloud.
Well, no. Google makes sure to track my every movement, but it does not
store the messages. I even have contacts still stored with Google; it
should be a no-brainer for them. This was a costly, amateur mistake. It
won't happen again. When I was looking for the messages, Google made sure
to offer me some Google Fi along with multiple price tags that would do
<em>something</em> with my SMS messages. No, thanks. Now you can go Fi
yourself.</p>
<p>As a fix for this, I found
<a href="https://gitlab.com/axet/android-sms-gate">SMS Gate</a> on
F-Droid, which backs up SMS using SMTP/IMAP. It can also store SMS as
files in the user-space storage. I have set that up as a folder handled by
Syncthing, so I won't lose customer data this way in the future. The app
claims I could even respond to SMS via email, which definitely sounds
promising. I have not tested this yet, but it is definitely being added to
my TODO list. Restoring the phone with the help of KeePassDX took most of
the evening, but I got it done. Tomorrow I will set foot outside; I look
forward to it.</p>
<p>This is a 1st post of <a href="https://100daystooffload.com">#100daystooffload</a>.</p>
Syncthing is helping me sell used stuff — 2021-02-20 (updated 2021-02-21)
https://peterbabic.dev/blog/syncthing-helps-selling-used-stuff/<p>Since I made Syncthing work across my devices, synchronizing KeePassXC
passwords and photos, my life has turned out better in more ways than I
had previously imagined.</p>
<p>One way that this setup resulted in a positive change in my life is
minimalism. Minimalism is interpreted differently by different people,
depending on the context. It could, for instance, mean obtaining the same
result with fewer resources.</p>
<p>For sysadmins it could mean having a lean system with as few packages
as possible. For programmers, it could mean that the app has as few
dependencies as possible; some even strive for zero dependencies, but that
is a different philosophy altogether.</p>
<p>For ordinary people, minimalism usually translates to owning less
<em>stuff</em>. Less clothes. Less decorations. Less shoes. Less
everything. Some items can even be classified as junk, yet we still keep
them around. Maybe because in today's fast-paced world we did not have
time to clean up our space properly. And maybe we keep things that we do
not use around because they have value and could be sold. If only I had
more time, I kept telling myself.</p>
<p>Yet the lack of time was not what was preventing me from selling
various possessions of mine that had been of no use to me for more than a
year, some even longer. Not being able to sell them easily was not the
reason for hesitation either. There are various global services like Ebay
and Craigslist and a plethora of localized ones that let you sell whatever
you can think of. People also tend to create an emotional attachment, or a
bond, with some material things, either because they had to work hard to
earn the money to buy them or because they were part of some, usually
positive, experiences before, maybe even repeatedly. Yet as I later found
out, this was not the roadblock for me either.</p>
<h2 id="the-problem-was-elsewhere">The problem was elsewhere</h2>
<p>I had most of the things I wanted to get rid of tucked away neatly in a
box, sorted and prepared. I knew that I wanted to sell or even donate them
instead of throwing them away. I also like to buy used things, because it
helps keep landfills emptier while also providing a little price cut in
comparison to the new item.</p>
<p>A lot of the things in the box were tools. Tools that I no longer used,
but which were still functioning. They were of really low value, but I
knew other people could still use them.</p>
<p>Other items there, for instance a digital camera and RAM, were of good
value. I did not use the camera because, even though it had an HDMI
output, getting that output live into a computer without stuttering
required a USB HDMI grabber - a device of a similar price as the camera
itself. The RAM did not physically fit the laptop I was currently
using.</p>
<p>As such, the box with the stuff was just sitting there, mostly
collecting a layer of dust on top. So what change in my behavior or
environment made it possible for me to disregard the obstacles and put the
unused possessions on a web service for others to buy?</p>
<h2 id="the-value-of-physical-keyboard">The value of physical keyboard</h2>
<p>The moment I had synchronization of photos from my phone to my laptop,
the selling process basically started itself. I put everything from the
box on top of the table by the window and took some photos from different
angles.</p>
<p>Yeah, I could do it from the phone - there is an app for that, I know.
But, hell, I so much hate multi-tasking on the phone! Having to type
CT102464BF160B Crucial RAM 8GB DDR3 1600 MHz CL11 Laptop Memory and all
the other parameters that describe the piece on a touchscreen is no
fun.</p>
<p>Photo editing on the phone is also possible, but I am so used to mouse
and keyboard shortcuts for these kinds of tasks. I understood that working
with a computer rather than a phone is so ingrained in me that once I had
the possibility to solve the task my usual way, I could no longer contain
myself.</p>
<p>And you know what? Most of the 14 things I put on sale sold almost
instantly! If you ever hesitate over whether you should set up Syncthing
between your phone and your laptop, I am saying it is definitely worth the
time. Surely you can come up with other creative uses yourself.</p>
How to install Syncthing on Arch Linux ARM — 2021-02-16
https://peterbabic.dev/blog/install-syncthing-archlinux-arm/<p>Syncthing is a solution to share files across multiple devices,
spanning most operating systems, including Linux, Windows, Mac and
Android. It's open source and it is decentralized. It requires some
set-up, however. I was reluctant at first, but it is one of the things
that I did not know I needed.</p>
<p>I started using it to sync my <code>.kdbx</code> password database file
between my laptop and my phone, and it proved to work reliably and
near-instantly. I did a lot of research to choose the right password
manager for my needs, and the possibility to store SSH keys in KeePassXC
was a deal-maker for me. It looks like it is quite hard to implement such
functionality in a cloud-based solution. Yet people in the discussions
mention Syncthing as a go-to tool to synchronize another type of file:
photos.</p>
<p>Thus logically, with everything already set up to synchronize one
database file, I went on and added another folder, the one my phone uses
to store fresh photos of whatever I find interesting enough to capture. It
did not take me long to become basically addicted to the fact that
whatever I point my phone camera at is available on the laptop in real
time.</p>
<h2 id="centralizing-decentralized">Centralizing decentralized</h2>
<p>The first drawback of this set-up that I decided to tackle was that if
one device is off-line for some time, it won't get the updates from the
other device. If that other device is lost in the meantime, the data is
lost. In reality, this would mean that if I lost my phone before my laptop
got connectivity, the photos would be lost. The situation with passwords
and SSH keys is no different, yet the consequences could be even more
dire, depending on the situation.</p>
<p>The lack of a central server lowers the cost, but at the same time it
increases the risk that data will be lost, as we can see. So how does one
solve this problem? Well, the solution is simple, as you already suspect.
Take your favorite low-power ARM device, attach reliable storage to it and
leave it always on. If this scenario seems familiar to you, it is. Most
people choose a Raspberry Pi, which is usually just a drawer away, ready
to be used.</p>
<p>Setting up an always-on device for this task brings the best of both
worlds. I got a centralized solution without the requirements that
conventional centralized services pose, such as a public IP and highly
reliable connectivity. Introducing a central node into a decentralized
service does not create a single point of failure. My setup works without
any problems, even when the Pi dies or loses connectivity for some time.
It will eventually get synchronized with the phone or the laptop once
brought back to a working state. It just makes sure there is almost always
a point the data can synchronize to, preventing it from being lost.</p>
<h2 id="preparation">Preparation</h2>
<p>As I have already noted, for this recipe we need a spare ARM computer,
headless. Next we need a storage medium, preferably a raw SSD, or an HDD
if you are vegan. Sprinkle a little bit of bootable media into the mix.
Now follow the instructions in the
<a href="https://archlinuxarm.org/platforms">cookbook</a>.</p>
<ol>
<li>Update the system and install Syncthing itself, when logged in as
root:</li>
</ol>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pacman -Syu</span><span> syncthing
</span></code></pre>
<ol start="2">
<li>Mount the storage media somewhere into the prepared directory, example:</li>
</ol>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">mkdir</span><span> /mnt/storage
</span><span style="color:#bf616a;">mount</span><span> /dev/sdXY /mnt/storage
</span></code></pre>
<ol start="3">
<li>Make sure the storage media get automatically mounted on boot, example
using UUID:</li>
</ol>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pacman -S</span><span> arch-install-scripts
</span><span style="color:#bf616a;">genfstab -U</span><span> /mnt/storage/ >> /etc/fstab
</span></code></pre>
<blockquote>
<p><strong>Note:</strong> You can remove the
<code>arch-install-scripts</code> package after this step. If you write
the <code>fstab</code> entry manually, you do not need the package for the
<code>genfstab</code> command in the first place.</p>
</blockquote>
<ol start="4">
<li>Create a system user <code>syncuser</code> without any login shell:</li>
</ol>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">useradd --system --user-group --create-home --shell</span><span> /usr/bin/nologin syncuser
</span></code></pre>
<ol start="5">
<li>Allow only <code>root</code> and <code>syncuser</code> users to
<a href="https://unix.stackexchange.com/questions/204641/automatically-mount-a-drive-using-etc-fstab-and-limiting-access-to-all-users-o">access storage media</a>:</li>
</ol>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">chown</span><span> root:syncuser /mnt/storage
</span><span style="color:#bf616a;">chmod</span><span> 750 /mnt/storage
</span></code></pre>
<blockquote>
<p><strong>Note:</strong> A separate system user greatly limits
unauthorized data manipulation in case some ransomware or a malicious
user uses the SSH key-pair to enter the device from my laptop.</p>
</blockquote>
<ol start="6">
<li>Enable and start Syncthing service:</li>
</ol>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">systemctl</span><span> enable syncthing@syncuser</span><span style="color:#bf616a;"> --now
</span></code></pre>
<ol start="7">
<li>Make Syncthing web
<a href="https://serverfault.com/questions/351046/how-to-run-command-as-user-who-has-usr-sbin-nologin-as-shell">GUI accessible over the network</a>:</li>
</ol>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">su -s</span><span> /bin/bash</span><span style="color:#bf616a;"> -c </span><span>'</span><span style="color:#a3be8c;">vi /home/syncuser/.config/syncthing/config.xml</span><span>' syncuser
</span></code></pre>
<p>Change the address
<a href="https://docs.syncthing.net/users/guilisten.html#the-gui-listen-address">from <strong>127.0.0.1</strong> to <strong>0.0.0.0</strong></a>:</p>
<pre data-lang="xml" style="background-color:#2b303b;color:#c0c5ce;" class="language-xml "><code class="language-xml" data-lang="xml"><span><</span><span style="color:#bf616a;">gui </span><span style="color:#d08770;">enabled</span><span>="</span><span style="color:#a3be8c;">true</span><span>" </span><span style="background-color:#bf616a;color:#2b303b;">...</span><span>>
</span><span> <</span><span style="color:#bf616a;">address</span><span>>0.0.0.0:8384</</span><span style="color:#bf616a;">address</span><span>>
</span><span> ...
</span><span></</span><span style="color:#bf616a;">gui</span><span>>
</span></code></pre>
<ol start="8">
<li>
<p><a href="https://wiki.archlinux.org/index.php/Security">Harden</a> your device
properly, or at least
<a href="https://wiki.archlinux.org/index.php/OpenSSH#Limit_root_login">restrict</a>
root login.</p>
</li>
<li>
<p>Set up the firewall (this step is optional) - the Syncthing package
<a href="https://github.com/archlinux/svntogit-community/blob/28d131d2c9d7324203804e3b698912cbee67aba3/trunk/PKGBUILD#L75">provides</a>
the
<a href="https://github.com/syncthing/syncthing/blob/e027175446e2bba3431bcd3095294531d68f35f8/etc/firewall-ufw/syncthing">UPnP and GUI definitions for <code>ufw</code></a>:</p>
</li>
</ol>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pacman -S</span><span> ufw
</span><span style="color:#bf616a;">systemctl</span><span> enable ufw</span><span style="color:#bf616a;"> --now
</span><span style="color:#bf616a;">ufw</span><span> default deny
</span><span style="color:#bf616a;">ufw</span><span> allow syncthing
</span><span style="color:#bf616a;">ufw</span><span> allow syncthing-gui
</span><span style="color:#bf616a;">ufw</span><span> limit ssh
</span><span style="color:#bf616a;">ufw</span><span> enable
</span></code></pre>
<ol start="10">
<li>Done!</li>
</ol>
Keep Gnome Shell settings in dotfiles with yadm — 2021-01-23
https://peterbabic.dev/blog/keep-gnome-shell-settings-dotfiles-yadm/<p>The Gnome project went through a long and turbulent journey in the
open-source world, filled with forks, pivotal changes and controversy.
Gnome is based on the GTK widget toolkit. A widget toolkit is a software
library used to build graphical user interfaces (GUI). GTK was born as a
more open alternative to Qt, which at the time had a proprietary license.
Qt is another widget toolkit, powering the KDE desktop environment (DE). A
desktop environment is a collection of software running on top of an
operating system sharing a common GUI. Together, GTK and Qt are the
building blocks of the majority of Linux desktop environments. There are
many more GTK-based desktop environments besides Gnome. Gnome is, however,
by far the most used in the wild.</p>
<p>Now that we have finished all the boring paperwork, let's get to the
fun. Let me just share a little of my experience with you. Over my life I
have tried many desktop environments, starting with KDE on the now
discontinued Mandriva Linux back in 2001. It was very compact, because I
had it burned on a mini CD and I could carry it around wherever needed
(USB sticks were not used to boot Linux back then). Whenever there was a
computer somewhere, it was running Windows 98, ME, 2000 or XP. All of them
required a password to log in. Of course fathers would not share the
passwords with us, the children. But the files were not encrypted, so to
reach them one just needed to insert the CD, boot it and copy whatever was
needed. Our most precious trophies from this highly nefarious activity
were copied games. Obviously, you do not go through the lengthy process of
copying a game when you have only a limited time at the computer, when a
father permits it. You want to play when you can. Copying can be done when
fathers are still at work, keeping the password with them.</p>
<p>A funny thing was that Mandriva booted from the CD had the k3b burning
software bundled in, but most computers had only a single optical drive. I
was 10 and I was really scared to take the CD out while the system was
booted from it, until I first tried. We really wanted to play that Medal
of Honor on the other computer. I was then really surprised that the
system did not crash when I took the mini CD out to insert a blank CD to
burn the game onto. Life is nice when you are a kid.</p>
<p>Fast forward to today: I still love working on KDE occasionally, but
currently I use Gnome as my daily driver. Having a strong sense for
automating things, provisioning my system after a fresh install is one way
to express this trait. One of the ways to store the settings of your
desktop environment and the programs running on it is to store so-called
dotfiles. Dotfiles are human-readable configuration files stored in the
home directory. They begin with the dot character, which makes them
hidden, hence the name.</p>
<h2 id="dotfiles-manager">Dotfiles Manager</h2>
<p>As with almost anything in software, there are multiple ways to solve a
problem. I feel like I repeat this sentence too much, but it holds true.
In fact, I believe it is much better to know multiple ways to solve a
problem, because it gives you the possibility to verify the validity of
your solution.</p>
<p>One popular solution of managing the dotfiles is to use a
<a href="https://www.atlassian.com/git/tutorials/dotfiles">bare repository</a>. It is
a very good solution many swear by, because it requires nothing else but
git and git knowledge to make it work.</p>
<p>In fact, the solution I am going to use is exactly the same bare
repository, but wrapped with a few enhancements. The package is called
simply Yet Another Dotfiles Manager, <a href="https://yadm.io/">yadm</a>.</p>
<p>YADM also requires only git knowledge to get the most out of it, just
replace the <code>git</code> command with <code>yadm</code>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">yadm</span><span> init
</span><span style="color:#bf616a;">yadm</span><span> add <important file>
</span><span style="color:#bf616a;">yadm</span><span> commit
</span></code></pre>
<p>Let's take a look how to handle Gnome configuration files with yadm.</p>
<h2 id="accessing-gnome-configuration">Accessing Gnome configuration</h2>
<p>Anyone who builds software and wants its configuration files to be
modified by users makes sure the files are in a human-readable format.
Sadly, due to some unfortunate circumstances, this is not the case for
Gnome dotfiles. They are not stored in a pre-defined location, ready to be
edited. You have to first export, or more precisely <em>dump</em>, them
before you can read and possibly store them with whatever technique you
use to store your dotfiles.</p>
<p>The word dump is used in the software world for a process of extracting all
the information from a single reservoir, usually some sort of memory
buffer. To make a dump of all Gnome settings, run command:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">dconf</span><span> dump / > gnome-settings.ini
</span></code></pre>
<p>I would argue that this is not a dump in the original sense, because
dconf is smart enough to only return non-default, which means
<em>modified</em>, values. You can also specify only a subset of the
values, for instance <code>/org/gnome/desktop</code> instead of just
<code>/</code>. We are interested in storing all the settings to easily
load them on a freshly installed system, so sticking with <code>/</code>,
the <em>root</em>, is preferred. For completeness, here's how you apply
the settings back onto your system:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">dconf</span><span> load / < gnome-settings.ini
</span></code></pre>
<p>Naturally, to make yadm keep track of your settings, run the following:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">dconf</span><span> dump / > gnome-settings.ini
</span><span style="color:#bf616a;">yadm</span><span> add gnome-settings.ini
</span><span style="color:#bf616a;">yadm</span><span> commit
</span></code></pre>
<p>However, there is a problem with this approach. You have to repeat it
<em>manually</em> before every dotfiles update. This is where the advanced
features of yadm come in handy.</p>
<h2 id="git-hooks">Git hooks</h2>
<p>If you have used git long enough, chances are you have stumbled upon the
feature called <em>hooks</em>. Hooks are not exclusive to git; in fact, the term
is commonly used to refer to a routine that is executed automatically when
certain circumstances are met. Git defines two prefixes for hooks, <code>pre-*</code> and
<code>post-*</code>, which execute before and after the given event,
respectively. They are combined with git command names to form
self-explanatory hook names like <code>pre-commit</code> or <code>post-merge</code>.</p>
<p>One useful application of git hooks is automation. Once you set a certain
hook up, you do not need to think about it. Our problem with Gnome settings
is that it takes three manual steps to get the current Gnome settings
into the dotfiles repository, where they are saved and accessible to other
machines for cloning.</p>
<p>Why would I need a dotfiles manager's advanced features, when git
itself already provides this functionality, you might ask? Well, it is
certainly possible to do it without yadm or any other wrapper. It looks
like your bare repository hook commands would require an environment
variable called <code>GIT_WORK_TREE</code>. I did not try it, precisely because I
could not find any good documentation for the search term
<code>git bare repo hook</code>. Feel free to explore or document this yourself.</p>
<h2 id="yadm-advanced-features">Yadm advanced features</h2>
<p>Yadm, on the other hand, has the <a href="https://yadm.io/docs/hooks">hooks</a> feature
as a first-class citizen. The yadm documentation differs from the git hooks
documentation in the delimiter: while git uses a dash after the prefix, yadm
uses an underscore. To put this into perspective, in git you define a
<code>pre-commit</code> hook, while in yadm you spell it <code>pre_commit</code>. This is a
caveat to keep in mind. If you find out it does not matter for either tool,
please let me know.</p>
<p>To automate the Gnome settings commit process, create the file
<code>~/.config/yadm/hooks/pre_commit</code> and don't forget to make it executable:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#65737e;">#!/bin/bash
</span><span>
</span><span style="color:#bf616a;">dconfFile</span><span>="$</span><span style="color:#bf616a;">HOME</span><span style="color:#a3be8c;">/.config/dconf/settings.ini</span><span>"
</span><span>
</span><span style="color:#bf616a;">dconf</span><span> dump / > "$</span><span style="color:#bf616a;">dconfFile</span><span>"
</span><span style="color:#bf616a;">yadm</span><span> add "$</span><span style="color:#bf616a;">dconfFile</span><span>"
</span></code></pre>
<p>You can provide a different <code>dconfFile</code> location according to your personal
taste, but I like to keep it somewhere relevant, out of sight. Editing this
file directly will not help you in any way - it gets rewritten before every
commit. Now change some Gnome settings manually, for instance some keyboard
shortcuts, and make a yadm commit:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">yadm</span><span> commit
</span></code></pre>
<p>You will see the settings file in staged changes:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span># Changes to be committed:
</span><span># modified: .config/dconf/settings.ini
</span></code></pre>
<p>Nothing prevents you from adding other files before committing. If you
have already added some, you can move all tracked files into the staging area with</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">yadm</span><span> add</span><span style="color:#bf616a;"> -u
</span></code></pre>
<p>A nice trick is to add this command to our <code>pre_commit</code> file; this way
all the tracked files will be staged automatically before every commit. You
can even run the commit as a cron job, with your sole responsibility
being adding or removing files from tracking! Feel free to steal these two
lines for yourself and put them into your
<code>~/.config/yadm/hooks/pre_commit</code>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#65737e;">#!/bin/bash
</span><span>
</span><span style="color:#bf616a;">dconf</span><span> dump / > "$</span><span style="color:#bf616a;">HOME</span><span style="color:#a3be8c;">/.config/dconf/settings.ini</span><span>"
</span><span style="color:#bf616a;">yadm</span><span> add</span><span style="color:#bf616a;"> -u
</span></code></pre>
<h2 id="provisioning-your-system">Provisioning your system</h2>
<p>Now that we have committing sorted out, the only thing left to do is to
make sure that the settings will be loaded into the system when needed. For
this, we make use of another advanced yadm feature called <em>bootstrapping</em>.</p>
<p>Bootstrapping is a fancy name for another executable script. When yadm
clones your dotfiles into the fresh system and detects your bootstrap
script, it asks you if you want to run it. That's it. To meet our goal, it
has to load the Gnome settings back to the system during provisioning. The
script resides at <code>~/.config/yadm/bootstrap</code>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#65737e;">#!/bin/bash
</span><span>
</span><span style="color:#bf616a;">dconf</span><span> load / < "$</span><span style="color:#bf616a;">HOME</span><span style="color:#a3be8c;">/.config/dconf/settings.ini</span><span>"
</span></code></pre>
<p>You can also put other commands here to suit your needs. Have fun!</p>
Using arrays in Svelte localStorage store2021-01-14T00:00:00+00:002021-01-14T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/store-array-svelte-localstorage/<h2 id="difference-between-cookies-and-localstorage">Difference between cookies and localStorage</h2>
<p>A <code>localStorage</code> allows browsers to store application-related data
on the client. This might seem identical to the purpose of cookies, but
there are differences - otherwise it could just be called <code>otherCookies</code>
instead of <code>localStorage</code>.</p>
<p>First, localStorage is expected to <strong>synchronize</strong> over multiple tabs
and windows, while with cookies the behavior varies across browsers and is
generally not consistent. For the inconsistencies I would blame the fact
that cookies are an older technology and were designed with a different
goal in mind.</p>
<p>Another difference is that cookies, if present, are attached to <strong>every</strong>
user request. For the best user experience, it is preferred to keep the
server-client data exchange to a minimum. Although we live in a world where
connection speeds are ever-increasing, rural and remote areas tend to
follow suit at a much slower pace. Eliminating unnecessary data
from the round trip helps a lot. Sending megabytes of cookies with
every single request is definitely not the way to go when building a slick app.</p>
<p>Cookies, unlike localStorage, have an <strong>expiry</strong> date. They are also
targeted for purging by multiple forms of cleaning or privacy software. In
such a scenario, users could be happy to free up space on their device while
simultaneously tackling privacy concerns, only to learn that the work they
did in your app is not saved anymore. Relying only on localStorage to
save a user's work is not the way to go either, because it might get
purged when clearing the browser cache.</p>
<p>Lastly, localStorage is not accessible directly on the <strong>server</strong>, only on
the client. Cookies are accessible on both sides. The API to handle
each is thus also different. They are clearly meant for different
tasks.</p>
<p>Here's a simple comparison of the text above:</p>
<table><thead><tr><th style="text-align: left">difference</th><th style="text-align: left">cookie</th><th style="text-align: left">localStorage</th></tr></thead><tbody>
<tr><td style="text-align: left">Expiration</td><td style="text-align: left">yes</td><td style="text-align: left">no</td></tr>
<tr><td style="text-align: left">Accessibility</td><td style="text-align: left">server and client</td><td style="text-align: left">client only</td></tr>
<tr><td style="text-align: left">Availability</td><td style="text-align: left">on every request</td><td style="text-align: left">manual access</td></tr>
<tr><td style="text-align: left">Tabs synchronization</td><td style="text-align: left">varies across browsers</td><td style="text-align: left">built-in</td></tr>
</tbody></table>
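<p>To make the comparison concrete, here is the access pattern of the localStorage API, sketched against a minimal in-memory stand-in (the real <code>window.localStorage</code> exists only in the browser):</p>

```javascript
// Minimal in-memory stand-in for the browser's localStorage, used here
// only to illustrate the API shape outside a browser.
const localStorage = {
  store: {},
  setItem(key, value) { this.store[key] = String(value) },
  getItem(key) { return key in this.store ? this.store[key] : null },
  removeItem(key) { delete this.store[key] },
}

// Unlike cookies, nothing is attached to requests automatically:
// every read and write is an explicit call.
localStorage.setItem("theme", "dark")
const theme = localStorage.getItem("theme")    // "dark"
const missing = localStorage.getItem("absent") // null for unknown keys
```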
<h2 id="svelte-and-localstorage">Svelte and localStorage</h2>
<p>Needing persistent reactive data in a Svelte app, I decided to tackle
the problem by starting off with an existing
<a href="https://svelte.dev/repl/329d9ab4b27543afaf735acfbc6bbec7?version=3.20.1">example</a>
from the REPL.</p>
<p>In order to run the example you have to download it, as it would not run in
the sandboxed environment. I have done so, so you do not need to. The
example was not suitable for my application; it had two undesired behaviors
that I had to fix. First, at the very first load (or after localStorage
was cleared), the store contained a <code>null</code> value. Second, it was not
working with <em>arrays</em>.</p>
<p>For a reference, the example code of the store itself looked like this:</p>
<pre data-lang="js" style="background-color:#2b303b;color:#c0c5ce;" class="language-js "><code class="language-js" data-lang="js"><span style="color:#b48ead;">import </span><span>{ </span><span style="color:#bf616a;">writable </span><span>} </span><span style="color:#b48ead;">from </span><span>"</span><span style="color:#a3be8c;">svelte/store</span><span>"
</span><span>
</span><span style="color:#b48ead;">const </span><span style="color:#bf616a;">storedTheme </span><span>= </span><span style="color:#bf616a;">localStorage</span><span>.</span><span style="color:#96b5b4;">getItem</span><span>("</span><span style="color:#a3be8c;">theme</span><span>")
</span><span style="color:#b48ead;">export const </span><span style="color:#bf616a;">theme </span><span>= </span><span style="color:#8fa1b3;">writable</span><span>(</span><span style="color:#bf616a;">storedTheme</span><span>)
</span><span style="color:#bf616a;">theme</span><span>.</span><span style="color:#8fa1b3;">subscribe</span><span>(</span><span style="color:#bf616a;">value </span><span style="color:#b48ead;">=> </span><span>{
</span><span> </span><span style="color:#bf616a;">localStorage</span><span>.</span><span style="color:#96b5b4;">setItem</span><span>("</span><span style="color:#a3be8c;">theme</span><span>", </span><span style="color:#bf616a;">value </span><span>=== "</span><span style="color:#a3be8c;">dark</span><span>" ? "</span><span style="color:#a3be8c;">dark</span><span>" : "</span><span style="color:#a3be8c;">light</span><span>")
</span><span>})
</span></code></pre>
<h2 id="first-load">First load</h2>
<p>It took me a while to understand why the code was getting null even though
there is a default value ingrained in the form of a ternary operator:</p>
<pre data-lang="js" style="background-color:#2b303b;color:#c0c5ce;" class="language-js "><code class="language-js" data-lang="js"><span style="color:#bf616a;">value </span><span>=== "</span><span style="color:#a3be8c;">dark</span><span>" ? "</span><span style="color:#a3be8c;">dark</span><span>" : "</span><span style="color:#a3be8c;">light</span><span>"
</span></code></pre>
<p>It became clear to me that the ternary only concerns <em>saving</em>, not <em>retrieving</em>.
When localStorage is empty, the value retrieved is <code>null</code>, which is
then passed into the writable store. The store's <code>.subscribe</code> method is only
called when we <code>.set</code> the value of the store, which is too late in the
lifecycle. The change had to be made when initializing the writable store,
because its first argument is where the store value initially comes from.</p>
<pre data-lang="js" style="background-color:#2b303b;color:#c0c5ce;" class="language-js "><code class="language-js" data-lang="js"><span style="color:#b48ead;">import </span><span>{ </span><span style="color:#bf616a;">writable </span><span>} </span><span style="color:#b48ead;">from </span><span>"</span><span style="color:#a3be8c;">svelte/store</span><span>"
</span><span>
</span><span style="color:#b48ead;">const </span><span style="color:#bf616a;">itemName </span><span>= "</span><span style="color:#a3be8c;">storedArray</span><span>"
</span><span style="color:#b48ead;">const </span><span style="color:#bf616a;">retrieved </span><span>= </span><span style="color:#bf616a;">localStorage</span><span>.</span><span style="color:#96b5b4;">getItem</span><span>(</span><span style="color:#bf616a;">itemName</span><span>)
</span><span style="color:#b48ead;">export const </span><span style="color:#bf616a;">storedArray </span><span>= </span><span style="color:#8fa1b3;">writable</span><span>(</span><span style="color:#bf616a;">retrieved </span><span>=== </span><span style="color:#d08770;">null </span><span>? [] : </span><span style="color:#bf616a;">retrieved</span><span>)
</span><span>
</span><span style="color:#bf616a;">storedArray</span><span>.</span><span style="color:#8fa1b3;">subscribe</span><span>(</span><span style="color:#bf616a;">value </span><span style="color:#b48ead;">=> </span><span style="color:#bf616a;">localStorage</span><span>.</span><span style="color:#96b5b4;">setItem</span><span>(</span><span style="color:#bf616a;">itemName</span><span>, </span><span style="color:#bf616a;">value</span><span>))
</span></code></pre>
<p>This approach would work save for one fact that surprised me when I
discovered it for the first time:</p>
<blockquote>
<p>localStorage does not support <em>arrays</em>, only <em>strings</em></p>
</blockquote>
<p>That's right - localStorage does not support data structures, only scalars.
There are obvious ways around this limitation, once you are aware they
exist. Discovering it was the tricky part, because the browser did not throw
any error when trying to save an array in localStorage. Inspecting the
localStorage contents via the browser's tools displayed the values
neatly delimited with a comma, resembling an array very closely.</p>
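<p>The comma-delimited display comes from JavaScript's default string coercion, which can be reproduced without a browser at all:</p>

```javascript
// localStorage.setItem coerces its value to a string. For an array
// that means Array.prototype.toString - a comma-joined list that only
// looks like an array in the devtools inspector.
const stored = String([1, 2, 3])
console.log(stored)                // "1,2,3"
console.log(Array.isArray(stored)) // false - it is just a string
```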
<h2 id="arrays-in-localstorage">Arrays in localStorage</h2>
<p>Fortunately, this problem is not a problem anymore. To convert a data
structure to a string, we can <em>serialize</em> it. The common way to do it in JS is
to convert it to JSON using <code>JSON.stringify()</code>. To revert it back, we use
<code>JSON.parse()</code>.</p>
<pre data-lang="js" style="background-color:#2b303b;color:#c0c5ce;" class="language-js "><code class="language-js" data-lang="js"><span style="color:#b48ead;">import </span><span>{ </span><span style="color:#bf616a;">writable </span><span>} </span><span style="color:#b48ead;">from </span><span>"</span><span style="color:#a3be8c;">svelte/store</span><span>"
</span><span>
</span><span style="color:#b48ead;">const </span><span style="color:#bf616a;">itemName </span><span>= "</span><span style="color:#a3be8c;">storedArray</span><span>"
</span><span style="color:#b48ead;">const </span><span style="color:#bf616a;">retrieved </span><span>= </span><span style="color:#bf616a;">localStorage</span><span>.</span><span style="color:#96b5b4;">getItem</span><span>(</span><span style="color:#bf616a;">itemName</span><span>)
</span><span style="color:#b48ead;">const </span><span style="color:#bf616a;">parsed </span><span>= JSON.</span><span style="color:#96b5b4;">parse</span><span>(</span><span style="color:#bf616a;">retrieved</span><span>)
</span><span style="color:#b48ead;">export const </span><span style="color:#bf616a;">storedArray </span><span>= </span><span style="color:#8fa1b3;">writable</span><span>(</span><span style="color:#bf616a;">parsed </span><span>=== </span><span style="color:#d08770;">null </span><span>? [] : </span><span style="color:#bf616a;">parsed</span><span>)
</span><span>
</span><span style="color:#bf616a;">storedArray</span><span>.</span><span style="color:#8fa1b3;">subscribe</span><span>(</span><span style="color:#bf616a;">value </span><span style="color:#b48ead;">=>
</span><span> </span><span style="color:#bf616a;">localStorage</span><span>.</span><span style="color:#96b5b4;">setItem</span><span>(</span><span style="color:#bf616a;">itemName</span><span>, JSON.</span><span style="color:#96b5b4;">stringify</span><span>(</span><span style="color:#bf616a;">value</span><span>))
</span><span>)
</span></code></pre>
<p>Now you can work with a persistent store containing an array in your Svelte
project as well. As a side note, since objects in JS can be serialized into
JSON too (JSON is, after all, JavaScript Object Notation, made
specifically for this purpose), this approach can be adapted to
accommodate objects in localStorage quite easily.</p>
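<p>For instance, an arbitrary settings object survives the same stringify/parse round trip unchanged:</p>

```javascript
// JSON round trip for a plain object - the same pattern the store
// uses for arrays. The object's shape here is arbitrary.
const settings = { theme: "dark", fontSize: 14, tags: ["svelte", "store"] }
const serialized = JSON.stringify(settings)
const restored = JSON.parse(serialized)
console.log(restored.theme)       // "dark"
console.log(restored.tags.length) // 2
```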
<p>The code, which again due to sandboxing won't run straight in the
<a href="https://svelte.dev/repl/8fa73934c69a453881c4d69e33132171?version=3.31.2">REPL</a>,
can also be cloned from the
<a href="https://github.com/peterbabic/sources-peterbabic.dev/tree/master/store-arrays-svelte-localstorage">repository</a>
to run it locally.</p>
<h2 id="father-reading">Further reading</h2>
<ul>
<li><a href="https://svelte.dev/docs#svelte_store">https://svelte.dev/docs#svelte_store</a></li>
<li><a href="https://chasingcode.dev/blog/svelte-persist-state-to-localstorage/">https://chasingcode.dev/blog/svelte-persist-state-to-localstorage/</a></li>
<li><a href="https://www.tutorialspoint.com/What-is-the-difference-between-local-storage-vs-cookies">https://www.tutorialspoint.com/What-is-the-difference-between-local-storage-vs-cookies</a></li>
<li><a href="https://stackoverflow.com/questions/3357553/how-do-i-store-an-array-in-localstorage">https://stackoverflow.com/questions/3357553/how-do-i-store-an-array-in-localstorage</a></li>
<li><a href="https://stackoverflow.com/questions/57089227/inconsistency-when-writing-synchronous-to-localstorage-from-multiple-tabs">https://stackoverflow.com/questions/57089227/inconsistency-when-writing-synchronous-to-localstorage-from-multiple-tabs</a></li>
</ul>
YAML metadata in Markdown2020-12-20T00:00:00+00:002020-12-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/yaml-metadata-in-markdown/<p>When I make my blog posts from Markdown documents, I usually need to put
the metadata somewhere. Since there is no database present, one place where
they can be put is right into the document.</p>
<p>Metadata are data about data. In the context of blog posts, metadata
usually means <strong>category</strong>, <strong>tags</strong>, <strong>slug</strong>, <strong>time</strong> or <strong>title</strong>.
Unfortunately, Markdown does not natively support a metadata syntax. It is
also important to note that this information should not be rendered in the
final document, but it should be possible to process it during parsing.</p>
<h2 id="markdown-comments">Markdown comments</h2>
<p>The easiest way to accomplish this is to use Markdown comment, for example
to denote <strong>category</strong> like this:</p>
<pre data-lang="md" style="background-color:#2b303b;color:#c0c5ce;" class="language-md "><code class="language-md" data-lang="md"><span style="color:#d08770;">[//]: # </span><span>"</span><span style="color:#bf616a;">programming</span><span>"
</span></code></pre>
<p>However, this approach is not ideal. Since
<a href="https://stackoverflow.com/questions/4823468/comments-in-markdown">there are multiple ways</a>
to represent a comment in a Markdown document, the parser doing the Markdown
parsing has to understand all the comment syntaxes used. This is not much
of a problem when only a single person is writing the blog, but it could
become a problem in the future, should newcomers join the team.</p>
<p>The second problem with the comment approach is that the metadata has to be
parsed further. If only simple data is used, for instance just a category
like in the example above, a very naïve parser should not take more
than a few lines to write.</p>
<p>The moment we want to insert multiple differing data structures there,
it may pay off to simply delegate the task to a dedicated parser.
Consider the following example:</p>
<pre data-lang="md" style="background-color:#2b303b;color:#c0c5ce;" class="language-md "><code class="language-md" data-lang="md"><span style="color:#d08770;">[//]: # </span><span>"</span><span style="color:#bf616a;">programming</span><span>"
</span><span style="color:#d08770;">[//]: # </span><span>"</span><span style="color:#bf616a;">yaml, markdown, metadata</span><span>"
</span></code></pre>
<p>This is still relatively trivial to parse:</p>
<ol>
<li>Get everything in the first line after the colon as a <strong>category</strong></li>
<li>Get everything in the second line after the colon and split it by the
comma as <strong>tags</strong></li>
</ol>
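<p>The two steps above can be sketched as a toy parser (a naïve illustration that assumes exactly the two comment lines shown earlier):</p>

```javascript
// Naive parser for the two-line comment metadata: it assumes a fixed
// order where line one is the category and line two holds the tags.
function parseMetadata(markdown) {
  const lines = markdown.split("\n")
  const quoted = line => line.match(/"(.*)"/)[1] // text between quotes
  return {
    category: quoted(lines[0]),
    tags: quoted(lines[1]).split(",").map(tag => tag.trim()),
  }
}

const meta = parseMetadata('[//]: # "programming"\n[//]: # "yaml, markdown, metadata"')
// meta.category === "programming"
// meta.tags     => ["yaml", "markdown", "metadata"]
```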
<p>But it starts to get complicated rather quickly. Now the third line is the
<strong>title</strong>, and we need to remember to split only the tags by a comma, not the
<strong>title</strong>:</p>
<pre data-lang="md" style="background-color:#2b303b;color:#c0c5ce;" class="language-md "><code class="language-md" data-lang="md"><span style="color:#d08770;">[//]: # </span><span>"</span><span style="color:#bf616a;">programming</span><span>"
</span><span style="color:#d08770;">[//]: # </span><span>"</span><span style="color:#bf616a;">yaml, markdown, metadata</span><span>"
</span><span style="color:#d08770;">[//]: # </span><span>"</span><span style="color:#bf616a;">A simple, yet powerful parser</span><span>"
</span></code></pre>
<p>If we wanted to make order of the metadata irrelevant, we could complicate
it further:</p>
<pre data-lang="md" style="background-color:#2b303b;color:#c0c5ce;" class="language-md "><code class="language-md" data-lang="md"><span style="color:#d08770;">[//]: # </span><span>"</span><span style="color:#bf616a;">Title: A simple, yet powerful parser</span><span>"
</span><span style="color:#d08770;">[//]: # </span><span>"</span><span style="color:#bf616a;">Category: programming</span><span>"
</span><span style="color:#d08770;">[//]: # </span><span>"</span><span style="color:#bf616a;">Tags: yaml, markdown, metadata</span><span>"
</span></code></pre>
<p>But now we need to make sure there are no typos. A misspelled key like
<code>Titel</code> looks similar to <code>Title</code>, but it would confuse the parser. The
parser could throw an exception if all the metadata were required and some
were missing, so typos could still be managed quite well. But then, what if
some metadata were optional? Or worse, what if we wanted custom metadata?
How would we denote whether it is a string, as is the case with the
<strong>title</strong>, or an array, as is the case with <strong>tags</strong>? What about security
concerns like XSS? What about testing all this? And so on.</p>
<h2 id="yaml-ain-t-markup-language">YAML Ain't Markup Language</h2>
<p>There is a pun within recursive acronyms. One of the well-known ones that
falls into this category is GNU, a recursive acronym for GNU's Not UNIX. As
you can see, the acronym YAML is also recursive. YAML is not a markup
language the way XML or HTML are, precisely because it is considered
a <em>data-serialization</em> language.</p>
<p>Utilizing a YAML parser rids us of the problems mentioned before. It is
documented and tested, its upsides and downsides are known, and its security
considerations are available to read. Using YAML in Markdown to denote
metadata is not a new concept - it is known as <em>Front Matter</em>. In the
Markdown blog space, it is used, for instance,
<a href="https://jekyllrb.com/docs/front-matter/">by Jekyll</a> or
<a href="https://gohugo.io/content-management/front-matter/">by Hugo</a>, among
others. These projects do not use JavaScript, however.</p>
<p>Rewriting the last example into YAML would look like this:</p>
<pre data-lang="yaml" style="background-color:#2b303b;color:#c0c5ce;" class="language-yaml "><code class="language-yaml" data-lang="yaml"><span style="color:#bf616a;">Title</span><span>: "</span><span style="color:#a3be8c;">A simple, yet powerful parser</span><span>"
</span><span style="color:#bf616a;">Category</span><span>: "</span><span style="color:#a3be8c;">programming</span><span>"
</span><span style="color:#bf616a;">Tags</span><span>: ["</span><span style="color:#a3be8c;">yaml</span><span>", "</span><span style="color:#a3be8c;">markdown</span><span>", "</span><span style="color:#a3be8c;">metadata</span><span>"]
</span></code></pre>
<p>To make it a Front Matter YAML in a Markdown, we need to surround it with
the <code>---</code>:</p>
<pre data-lang="md" style="background-color:#2b303b;color:#c0c5ce;" class="language-md "><code class="language-md" data-lang="md"><span style="background-color:#4f5b66;color:#c0c5ce;">---
</span><span>title: "A simple, yet powerful parser"
</span><span>category: "programming"
</span><span>taxonomies:
</span><span> tags: ["markdown", "yaml", "metadata"]
</span><span style="color:#8fa1b3;">---
</span></code></pre>
<p>To parse this kind of document in JavaScript correctly and easily, I have
chosen
<a href="https://www.npmjs.com/package/remark-frontmatter"><code>remark-frontmatter</code></a>
from the Unified ecosystem and the
<a href="https://www.npmjs.com/package/js-yaml"><code>js-yaml</code></a> package. The entire
code looks like this:</p>
<pre data-lang="js" style="background-color:#2b303b;color:#c0c5ce;" class="language-js "><code class="language-js" data-lang="js"><span style="color:#b48ead;">const </span><span style="color:#bf616a;">fs </span><span>= </span><span style="color:#96b5b4;">require</span><span>("</span><span style="color:#a3be8c;">fs</span><span>")
</span><span style="color:#b48ead;">const </span><span style="color:#bf616a;">yaml </span><span>= </span><span style="color:#96b5b4;">require</span><span>("</span><span style="color:#a3be8c;">js-yaml</span><span>")
</span><span style="color:#b48ead;">const </span><span style="color:#bf616a;">unified </span><span>= </span><span style="color:#96b5b4;">require</span><span>("</span><span style="color:#a3be8c;">unified</span><span>")
</span><span style="color:#b48ead;">const </span><span style="color:#bf616a;">parse </span><span>= </span><span style="color:#96b5b4;">require</span><span>("</span><span style="color:#a3be8c;">remark-parse</span><span>")
</span><span style="color:#b48ead;">const </span><span style="color:#bf616a;">stringify </span><span>= </span><span style="color:#96b5b4;">require</span><span>("</span><span style="color:#a3be8c;">remark-stringify</span><span>")
</span><span style="color:#b48ead;">const </span><span style="color:#bf616a;">frontmatter </span><span>= </span><span style="color:#96b5b4;">require</span><span>("</span><span style="color:#a3be8c;">remark-frontmatter</span><span>")
</span><span style="color:#b48ead;">const </span><span style="color:#bf616a;">select </span><span>= </span><span style="color:#96b5b4;">require</span><span>("</span><span style="color:#a3be8c;">unist-util-select</span><span>").</span><span style="color:#bf616a;">select
</span><span>
</span><span style="color:#b48ead;">let </span><span style="color:#bf616a;">tree
</span><span>
</span><span style="color:#8fa1b3;">unified</span><span>()
</span><span> .</span><span style="color:#8fa1b3;">use</span><span>(</span><span style="color:#bf616a;">parse</span><span>)
</span><span> .</span><span style="color:#8fa1b3;">use</span><span>(</span><span style="color:#bf616a;">stringify</span><span>)
</span><span> .</span><span style="color:#8fa1b3;">use</span><span>(</span><span style="color:#bf616a;">frontmatter</span><span>, ["</span><span style="color:#a3be8c;">yaml</span><span>"])
</span><span> .</span><span style="color:#8fa1b3;">use</span><span>(() </span><span style="color:#b48ead;">=> </span><span style="color:#bf616a;">t </span><span style="color:#b48ead;">=> </span><span>(</span><span style="color:#bf616a;">tree </span><span>= </span><span style="color:#bf616a;">t</span><span>))
</span><span> .</span><span style="color:#8fa1b3;">process</span><span>(</span><span style="color:#bf616a;">fs</span><span>.</span><span style="color:#8fa1b3;">readFileSync</span><span>("</span><span style="color:#a3be8c;">example.md</span><span>"))
</span><span>
</span><span style="color:#b48ead;">const </span><span style="color:#bf616a;">yamlNode </span><span>= </span><span style="color:#8fa1b3;">select</span><span>("</span><span style="color:#a3be8c;">yaml</span><span>", </span><span style="color:#bf616a;">tree</span><span>)
</span><span style="color:#b48ead;">const </span><span style="color:#bf616a;">parsedYaml </span><span>= </span><span style="color:#bf616a;">yaml</span><span>.</span><span style="color:#8fa1b3;">safeLoad</span><span>(</span><span style="color:#bf616a;">yamlNode</span><span>.value)
</span><span>
</span><span>module.exports = </span><span style="color:#bf616a;">parsedYaml
</span></code></pre>
<p>The sources are available in the
<a href="https://github.com/peterbabic/sources-peterbabic.dev/tree/master/yaml-metadata-in-markdown">repository</a>.</p>
Comments working using vim in Svelte2020-12-19T00:00:00+00:002020-12-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/working-comments-vim-svelte/<p>Here is a minimal vim configuration that I have tweaked to make comments
work with vim in Svelte:</p>
<pre data-lang="vim" style="background-color:#2b303b;color:#c0c5ce;" class="language-vim "><code class="language-vim" data-lang="vim"><span>call </span><span style="color:#8fa1b3;">plug#begin</span><span>(</span><span style="color:#a3be8c;">'~/.vim/plugged'</span><span>)
</span><span> Plug </span><span style="color:#a3be8c;">'neoclide/coc.nvim'</span><span>, {</span><span style="color:#a3be8c;">'branch'</span><span>: </span><span style="color:#a3be8c;">'release'</span><span>}
</span><span> Plug </span><span style="color:#a3be8c;">'evanleck/vim-svelte'
</span><span> Plug </span><span style="color:#a3be8c;">'Shougo/context_filetype.vim'
</span><span> Plug </span><span style="color:#a3be8c;">'preservim/nerdcommenter'
</span><span>call </span><span style="color:#8fa1b3;">plug#end</span><span>()
</span><span>
</span><span style="color:#65737e;">" Settings: context_filetype
</span><span>
</span><span> </span><span style="color:#b48ead;">if</span><span> !</span><span style="color:#8fa1b3;">exists</span><span>(</span><span style="color:#a3be8c;">'g:context_filetype#filetypes'</span><span>)
</span><span> </span><span style="color:#96b5b4;">let </span><span style="color:#bf616a;">g:context_filetype</span><span>#filetypes = {}
</span><span> </span><span style="color:#b48ead;">endif
</span><span>
</span><span> </span><span style="color:#96b5b4;">let </span><span style="color:#bf616a;">g:context_filetype</span><span>#filetypes</span><span style="color:#b48ead;">.</span><span>svelte =
</span><span> </span><span style="color:#b48ead;">\</span><span> [
</span><span> </span><span style="color:#b48ead;">\</span><span> {</span><span style="color:#a3be8c;">'filetype'</span><span> : </span><span style="color:#a3be8c;">'javascript'</span><span>, </span><span style="color:#a3be8c;">'start'</span><span> : </span><span style="color:#a3be8c;">'<script \?.*>'</span><span>, </span><span style="color:#a3be8c;">'end'</span><span> : </span><span style="color:#a3be8c;">'</script>'</span><span>},
</span><span> </span><span style="color:#b48ead;">\</span><span> {
</span><span> </span><span style="color:#b48ead;">\ </span><span style="color:#a3be8c;">'filetype'</span><span>: </span><span style="color:#a3be8c;">'typescript'</span><span>,
</span><span> </span><span style="color:#b48ead;">\ </span><span style="color:#a3be8c;">'start'</span><span>: </span><span style="color:#a3be8c;">'<script\%( [^>]*\)\? \%(ts\|lang="\%(ts\|typescript\)"\)\%( [^>]*\)\?>'</span><span>,
</span><span> </span><span style="color:#b48ead;">\ </span><span style="color:#a3be8c;">'end'</span><span>: </span><span style="color:#a3be8c;">''</span><span>,
</span><span> </span><span style="color:#b48ead;">\</span><span> },
</span><span> </span><span style="color:#b48ead;">\</span><span> {</span><span style="color:#a3be8c;">'filetype'</span><span> : </span><span style="color:#a3be8c;">'css'</span><span>, </span><span style="color:#a3be8c;">'start'</span><span> : </span><span style="color:#a3be8c;">'<style \?.*>'</span><span>, </span><span style="color:#a3be8c;">'end'</span><span> : </span><span style="color:#a3be8c;">'</style>'</span><span>},
</span><span> </span><span style="color:#b48ead;">\</span><span> ]
</span><span>
</span><span> </span><span style="color:#96b5b4;">let </span><span style="color:#bf616a;">g:ft</span><span> = </span><span style="color:#a3be8c;">''
</span><span>
</span><span style="color:#65737e;">" " Settings: NERDCommenter
</span><span>
</span><span style="color:#96b5b4;">let </span><span style="color:#bf616a;">g:NERDCustomDelimiters</span><span> = { </span><span style="color:#a3be8c;">'html'</span><span>: { </span><span style="color:#a3be8c;">'left'</span><span>: </span><span style="color:#a3be8c;">'<!--'</span><span>, </span><span style="color:#a3be8c;">'right'</span><span>: </span><span style="color:#a3be8c;">'-->'</span><span> } }
</span><span>
</span><span style="color:#b48ead;">fu</span><span>! </span><span style="color:#8fa1b3;">NERDCommenter_before</span><span>()
</span><span> </span><span style="color:#b48ead;">if</span><span> (</span><span style="color:#bf616a;">&ft </span><span style="color:#b48ead;">== </span><span style="color:#a3be8c;">'html'</span><span>) </span><span style="color:#b48ead;">||</span><span> (</span><span style="color:#bf616a;">&ft </span><span style="color:#b48ead;">== </span><span style="color:#a3be8c;">'svelte'</span><span>)
</span><span> </span><span style="color:#96b5b4;">let </span><span style="color:#bf616a;">g:ft</span><span> = </span><span style="color:#bf616a;">&ft
</span><span> </span><span style="color:#96b5b4;">let</span><span> cfts = </span><span style="color:#8fa1b3;">context_filetype#get_filetypes</span><span>()
</span><span> </span><span style="color:#b48ead;">if </span><span style="color:#8fa1b3;">len</span><span>(cfts) > </span><span style="color:#d08770;">0
</span><span> </span><span style="color:#b48ead;">if</span><span> cfts[</span><span style="color:#d08770;">0</span><span>] </span><span style="color:#b48ead;">== </span><span style="color:#a3be8c;">'svelte'
</span><span> </span><span style="color:#96b5b4;">let</span><span> cft = </span><span style="color:#a3be8c;">'html'
</span><span> </span><span style="color:#b48ead;">elseif</span><span> cfts[</span><span style="color:#d08770;">0</span><span>] </span><span style="color:#b48ead;">== </span><span style="color:#a3be8c;">'scss'
</span><span> </span><span style="color:#96b5b4;">let</span><span> cft = </span><span style="color:#a3be8c;">'css'
</span><span> </span><span style="color:#b48ead;">else
</span><span> </span><span style="color:#96b5b4;">let</span><span> cft = cfts[</span><span style="color:#d08770;">0</span><span>]
</span><span> </span><span style="color:#b48ead;">endif
</span><span> </span><span style="color:#96b5b4;">exe </span><span style="color:#a3be8c;">'setf ' </span><span style="color:#b48ead;">.</span><span> cft
</span><span> </span><span style="color:#b48ead;">endif
</span><span> </span><span style="color:#b48ead;">endif
</span><span style="color:#b48ead;">endfu
</span><span>
</span><span style="color:#b48ead;">fu</span><span>! </span><span style="color:#8fa1b3;">NERDCommenter_after</span><span>()
</span><span> </span><span style="color:#b48ead;">if</span><span> (</span><span style="color:#bf616a;">g:ft </span><span style="color:#b48ead;">== </span><span style="color:#a3be8c;">'html'</span><span>) </span><span style="color:#b48ead;">||</span><span> (</span><span style="color:#bf616a;">g:ft </span><span style="color:#b48ead;">== </span><span style="color:#a3be8c;">'svelte'</span><span>)
</span><span> </span><span style="color:#96b5b4;">exec </span><span style="color:#a3be8c;">'setf ' </span><span style="color:#b48ead;">. </span><span style="color:#bf616a;">g:ft
</span><span> </span><span style="color:#96b5b4;">let </span><span style="color:#bf616a;">g:ft</span><span> = </span><span style="color:#a3be8c;">''
</span><span> </span><span style="color:#b48ead;">endif
</span><span style="color:#b48ead;">endfu
</span></code></pre>
<p>This works correctly for the template (HTML), CSS, JavaScript, and TypeScript</p>
<p>It also works with Sapper's module context script</p>
<pre data-lang="js" style="background-color:#2b303b;color:#c0c5ce;" class="language-js "><code class="language-js" data-lang="js"><span><</span><span style="color:#bf616a;">script context</span><span>="</span><span style="color:#a3be8c;">module</span><span>">
</span></code></pre>
<h2 id="references">References</h2>
<ul>
<li><a href="https://gist.github.com/knopki/d05e76c40c2c06a09ffe2ef4f76365f4">https://gist.github.com/knopki/d05e76c40c2c06a09ffe2ef4f76365f4</a></li>
<li><a href="https://codechips.me/vim-setup-for-svelte-development/">https://codechips.me/vim-setup-for-svelte-development/</a></li>
</ul>
Using CSS selectors on Markdown in JS2020-12-15T00:00:00+00:002020-12-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/using-css-selectors-on-markdown/<p>It is possible to target specific elements in the DOM via CSS
<a href="https://www.w3schools.com/cssref/css_selectors.asp">using selectors </a></p>
<pre data-lang="css" style="background-color:#2b303b;color:#c0c5ce;" class="language-css "><code class="language-css" data-lang="css"><span style="color:#bf616a;">h2 </span><span>{
</span><span> </span><span style="color:#65737e;">/* property: value; */
</span><span>}
</span></code></pre>
<p>It is also possible to
<a href="https://developer.mozilla.org/en-US/docs/Web/API/Document_object_model/Locating_DOM_elements_using_selectors">use CSS selectors in the JS DOM</a></p>
<pre data-lang="js" style="background-color:#2b303b;color:#c0c5ce;" class="language-js "><code class="language-js" data-lang="js"><span style="color:#b48ead;">const </span><span style="color:#bf616a;">elements </span><span>= document.</span><span style="color:#96b5b4;">querySelectorAll</span><span>("</span><span style="color:#a3be8c;">h2</span><span>")
</span></code></pre>
<p>With the advent of the JAMstack, it is also possible to target Markdown
elements using CSS selectors</p>
<ul>
<li>Init the project</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">npm</span><span> init</span><span style="color:#bf616a;"> -y
</span></code></pre>
<ul>
<li>In <code>package.json</code>, change the type to ESM to
<a href="https://nodejs.org/docs/latest-v13.x/api/esm.html#esm_enabling">enable the import statement</a></li>
</ul>
<pre data-lang="json" style="background-color:#2b303b;color:#c0c5ce;" class="language-json "><code class="language-json" data-lang="json"><span>{
</span><span> </span><span style="background-color:#bf616a;color:#2b303b;">...</span><span>
</span><span> "</span><span style="color:#a3be8c;">type</span><span>": "</span><span style="color:#a3be8c;">module</span><span>"
</span><span>}
</span></code></pre>
<p>Alternatively, enable it via the
<a href="https://stackoverflow.com/questions/25329241/edit-package-json-from-command-line">command line </a></p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">npx</span><span> json</span><span style="color:#bf616a;"> -I -f</span><span> package.json</span><span style="color:#bf616a;"> -e </span><span>'</span><span style="color:#a3be8c;">this.type="module"</span><span>'
</span></code></pre>
<ul>
<li>Install required dependencies</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">npm</span><span> i unified remark-parse remark-stringify unist-util-select
</span></code></pre>
<ul>
<li>Create the <code>post.md</code> file</li>
</ul>
<pre data-lang="markdown" style="background-color:#2b303b;color:#c0c5ce;" class="language-markdown "><code class="language-markdown" data-lang="markdown"><span style="color:#d08770;">[//]: # </span><span>"</span><span style="color:#bf616a;">This is a comment</span><span>"
</span><span>
</span><span style="color:#8fa1b3;">## Second level heading A
</span><span>
</span><span>Paragraph
</span><span>
</span><span style="color:#8fa1b3;">## Second level heading B
</span></code></pre>
<ul>
<li>Create the <code>index.js</code> script</li>
</ul>
<pre data-lang="js" style="background-color:#2b303b;color:#c0c5ce;" class="language-js "><code class="language-js" data-lang="js"><span style="color:#b48ead;">import </span><span style="color:#bf616a;">fs </span><span style="color:#b48ead;">from </span><span>"</span><span style="color:#a3be8c;">fs</span><span>"
</span><span style="color:#b48ead;">import </span><span style="color:#bf616a;">markdown </span><span style="color:#b48ead;">from </span><span>"</span><span style="color:#a3be8c;">remark-parse</span><span>"
</span><span style="color:#b48ead;">import </span><span style="color:#bf616a;">stringify </span><span style="color:#b48ead;">from </span><span>"</span><span style="color:#a3be8c;">remark-stringify</span><span>"
</span><span style="color:#b48ead;">import </span><span style="color:#bf616a;">unified </span><span style="color:#b48ead;">from </span><span>"</span><span style="color:#a3be8c;">unified</span><span>"
</span><span style="color:#b48ead;">import </span><span style="color:#bf616a;">util </span><span style="color:#b48ead;">from </span><span>"</span><span style="color:#a3be8c;">unist-util-select</span><span>"
</span><span style="color:#b48ead;">const </span><span>{ </span><span style="color:#bf616a;">selectAll </span><span>} = </span><span style="color:#bf616a;">util
</span><span>
</span><span style="color:#b48ead;">let </span><span style="color:#bf616a;">mdast
</span><span style="color:#8fa1b3;">unified</span><span>()
</span><span> .</span><span style="color:#8fa1b3;">use</span><span>(</span><span style="color:#bf616a;">markdown</span><span>)
</span><span> .</span><span style="color:#8fa1b3;">use</span><span>(() </span><span style="color:#b48ead;">=> </span><span style="color:#bf616a;">tree </span><span style="color:#b48ead;">=> </span><span>(</span><span style="color:#bf616a;">mdast </span><span>= </span><span style="color:#bf616a;">tree</span><span>))
</span><span> .</span><span style="color:#8fa1b3;">use</span><span>(</span><span style="color:#bf616a;">stringify</span><span>)
</span><span> .</span><span style="color:#8fa1b3;">process</span><span>(</span><span style="color:#bf616a;">fs</span><span>.</span><span style="color:#8fa1b3;">readFileSync</span><span>("</span><span style="color:#a3be8c;">post.md</span><span>"))
</span><span>
</span><span style="color:#b48ead;">const </span><span style="color:#bf616a;">headingsNodes </span><span>= </span><span style="color:#8fa1b3;">selectAll</span><span>("</span><span style="color:#a3be8c;">heading[depth=2]</span><span>", </span><span style="color:#bf616a;">mdast</span><span>)
</span><span style="color:#b48ead;">const </span><span style="color:#bf616a;">json </span><span>= JSON.</span><span style="color:#96b5b4;">stringify</span><span>(</span><span style="color:#bf616a;">headingsNodes</span><span>, </span><span style="color:#d08770;">null</span><span>, </span><span style="color:#d08770;">2</span><span>)
</span><span>
</span><span style="color:#ebcb8b;">console</span><span>.</span><span style="color:#96b5b4;">log</span><span>(</span><span style="color:#bf616a;">json</span><span>)
</span></code></pre>
<ul>
<li>Running the script</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">node</span><span> index.js
</span></code></pre>
<p>Prints an array containing both level 2 headings as
<a href="https://github.com/syntax-tree/mdast">mdast tree nodes </a></p>
<pre data-lang="json" style="background-color:#2b303b;color:#c0c5ce;" class="language-json "><code class="language-json" data-lang="json"><span>[
</span><span> {
</span><span> "</span><span style="color:#a3be8c;">type</span><span>": "</span><span style="color:#a3be8c;">heading</span><span>",
</span><span> "</span><span style="color:#a3be8c;">depth</span><span>": </span><span style="color:#d08770;">2</span><span>,
</span><span> "</span><span style="color:#a3be8c;">children</span><span>": [
</span><span> {
</span><span> "</span><span style="color:#a3be8c;">type</span><span>": "</span><span style="color:#a3be8c;">text</span><span>",
</span><span> "</span><span style="color:#a3be8c;">value</span><span>": "</span><span style="color:#a3be8c;">Second level heading A</span><span>",
</span><span> </span><span style="background-color:#bf616a;color:#2b303b;">...</span><span>
</span><span> }
</span><span> ],
</span><span> </span><span style="background-color:#bf616a;color:#2b303b;">...</span><span>
</span><span> },
</span><span> {
</span><span> "</span><span style="color:#a3be8c;">type</span><span>": "</span><span style="color:#a3be8c;">heading</span><span>",
</span><span> "</span><span style="color:#a3be8c;">depth</span><span>": </span><span style="color:#d08770;">2</span><span>,
</span><span> "</span><span style="color:#a3be8c;">children</span><span>": [
</span><span> {
</span><span> "</span><span style="color:#a3be8c;">type</span><span>": "</span><span style="color:#a3be8c;">text</span><span>",
</span><span> "</span><span style="color:#a3be8c;">value</span><span>": "</span><span style="color:#a3be8c;">Second level heading B</span><span>",
</span><span> </span><span style="background-color:#bf616a;color:#2b303b;">...</span><span>
</span><span> }
</span><span> ],
</span><span> </span><span style="background-color:#bf616a;color:#2b303b;">...</span><span>
</span><span> }
</span><span>]
</span></code></pre>
<p>The magic happens because of the <code>selectAll</code> function from the
<a href="https://github.com/syntax-tree/unist-util-select"><code>unist-util-select</code> package </a></p>
<pre data-lang="js" style="background-color:#2b303b;color:#c0c5ce;" class="language-js "><code class="language-js" data-lang="js"><span style="color:#bf616a;">const</span><span> headingsNodes = selectAll("</span><span style="color:#a3be8c;">heading[depth=2]</span><span>", mdast)
</span></code></pre>
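<p>Conceptually, <code>selectAll</code> walks the tree and collects every node matching the selector. A dependency-free sketch of that matching, using a hand-written stand-in for a real mdast tree (the matcher here is hypothetical, not the package's implementation):</p>

```javascript
// A minimal sketch of what selectAll("heading[depth=2]", mdast) matches.
// The tree below is a hand-written stand-in for a real mdast tree.
const tree = {
  type: "root",
  children: [
    { type: "heading", depth: 2, children: [{ type: "text", value: "Second level heading A" }] },
    { type: "paragraph", children: [{ type: "text", value: "Paragraph" }] },
    { type: "heading", depth: 2, children: [{ type: "text", value: "Second level heading B" }] },
  ],
}

// Walk the tree depth-first, collecting nodes with type "heading" and depth 2
const selectHeadings = (node, found = []) => {
  if (node.type === "heading" && node.depth === 2) found.push(node)
  for (const child of node.children || []) selectHeadings(child, found)
  return found
}

console.log(selectHeadings(tree).length)
```
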
<p>Sources are available in the
<a href="https://github.com/peterbabic/sources-peterbabic.dev/tree/master/using-css-selectors-on-markdown">repository</a></p>
Don't use global npm config for dotfiles with nvm2020-12-10T00:00:00+00:002020-12-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/storing-npm-config-dotfiles-when-using-nvm/<p>It is possible to set the global npm config via the <code>--global</code> switch</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#65737e;"># short version
</span><span style="color:#bf616a;">npm</span><span> config set init-version "</span><span style="color:#a3be8c;">0.0.1</span><span>"</span><span style="color:#bf616a;"> -g
</span><span>
</span><span style="color:#65737e;"># long version
</span><span style="color:#bf616a;">npm</span><span> config set init-version "</span><span style="color:#a3be8c;">0.0.1</span><span>"</span><span style="color:#bf616a;"> --global
</span></code></pre>
<p>The location of the global npm config in nvm is tied to the node version,
which renders it unsuitable for dotfiles</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> npm config get prefix
</span><span style="color:#bf616a;">/home/peterbabic/.nvm/versions/node/v15.4.0
</span></code></pre>
<p>The actual file location is thus
<code>{prefix}/etc/npmrc</code>^[<a href="https://docs.npmjs.com/cli/v6/using-npm/config#globalconfig">https://docs.npmjs.com/cli/v6/using-npm/config#globalconfig</a>]</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>/home/peterbabic/.nvm/versions/node/v15.4.0/etc/npmrc
</span></code></pre>
<p>When installing a new node version with nvm, the config file has to be copied
over</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>nvm install 15.4
</span><span>cp ~/.nvm/versions/node/v14.9.0/etc/npmrc ~/.nvm/versions/node/v15.4.0/etc/npmrc
</span></code></pre>
<h2 id="not-using-global-setup">Not using global setup</h2>
<p>Here's how I store the npm config among the dotfiles, using the so-called
<code>userconfig</code>
<a href="https://docs.npmjs.com/cli/v6/using-npm/config#npmrc-files">instead of a global config </a></p>
<ul>
<li>Install nvm</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> nvm
</span></code></pre>
<ul>
<li>Install a node version of your liking, e.g. the stable
release^[<a href="https://github.com/nvm-sh/nvm#usage">https://github.com/nvm-sh/nvm#usage</a>]</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">nvm</span><span> install stable
</span></code></pre>
<ul>
<li>Configure the init values</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">npm</span><span> config set init-version "</span><span style="color:#a3be8c;">0.0.1</span><span>"
</span><span style="color:#bf616a;">npm</span><span> config set init-author-email "</span><span style="color:#a3be8c;">peter@peterbabic.dev</span><span>"
</span><span style="color:#bf616a;">npm</span><span> config set init-author-name "</span><span style="color:#a3be8c;">Peter Babič</span><span>"
</span><span style="color:#bf616a;">npm</span><span> config set init-license "</span><span style="color:#a3be8c;">MIT</span><span>"
</span><span style="color:#bf616a;">npm</span><span> config set init-author-url "</span><span style="color:#a3be8c;">https://peterbabic.dev</span><span>"
</span></code></pre>
<p>Alternatively, paste the values into <code>~/.npmrc</code> manually</p>
<pre data-lang="ini" style="background-color:#2b303b;color:#c0c5ce;" class="language-ini "><code class="language-ini" data-lang="ini"><span style="color:#bf616a;">init-author-name</span><span>=Peter Babič
</span><span style="color:#bf616a;">init-version</span><span>=</span><span style="color:#d08770;">0</span><span>.</span><span style="color:#d08770;">0</span><span>.</span><span style="color:#d08770;">1
</span><span style="color:#bf616a;">init-author-email</span><span>=peter</span><span style="color:#b48ead;">@peterbabic</span><span>.dev
</span><span style="color:#bf616a;">init-license</span><span>=MIT
</span><span style="color:#bf616a;">init-author-url</span><span>=</span><span style="color:#d08770;">https://peterbabic.dev
</span></code></pre>
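<p>An <code>.npmrc</code> file is a plain <code>key=value</code> list, so it can be read back without any dependencies. A minimal sketch for illustration; npm's real config parsing additionally handles comments, quoting, and environment variable expansion:</p>

```javascript
// Parse simple key=value lines from an .npmrc-style string.
// A minimal sketch; npm's real config parsing does much more.
const npmrc = [
  "init-author-name=Peter Babič",
  "init-version=0.0.1",
  "init-license=MIT",
].join("\n")

const config = Object.fromEntries(
  npmrc
    .split("\n")
    .filter((line) => line.includes("="))
    .map((line) => {
      // split only on the first "=" so values may contain "="
      const idx = line.indexOf("=")
      return [line.slice(0, idx), line.slice(idx + 1)]
    })
)

console.log(config["init-license"])
```
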
<ul>
<li>Store the file among your dotfiles, e.g.
<a href="https://yadm.io/docs/getting_started">via yadm</a></li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">yadm</span><span> add </span><span style="color:#bf616a;">~</span><span>/.npmrc && </span><span style="color:#bf616a;">yadm</span><span> commit
</span></code></pre>
<ul>
<li>Initialize the project; the <code>-y</code> flag accepts the configured values without prompting</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">npm</span><span> init</span><span style="color:#bf616a;"> -y
</span></code></pre>
<p>Produces the pre-configured <code>package.json</code> file straight away, <strong>saving
time</strong></p>
<pre data-lang="json" style="background-color:#2b303b;color:#c0c5ce;" class="language-json "><code class="language-json" data-lang="json"><span>{
</span><span> "</span><span style="color:#a3be8c;">name</span><span>": "</span><span style="color:#a3be8c;">project</span><span>",
</span><span> "</span><span style="color:#a3be8c;">version</span><span>": "</span><span style="color:#a3be8c;">0.0.1</span><span>",
</span><span> "</span><span style="color:#a3be8c;">description</span><span>": "",
</span><span> "</span><span style="color:#a3be8c;">main</span><span>": "</span><span style="color:#a3be8c;">index.js</span><span>",
</span><span> "</span><span style="color:#a3be8c;">scripts</span><span>": {
</span><span> "</span><span style="color:#a3be8c;">test</span><span>": "</span><span style="color:#a3be8c;">echo </span><span style="color:#96b5b4;">\"</span><span style="color:#a3be8c;">Error: no test specified</span><span style="color:#96b5b4;">\"</span><span style="color:#a3be8c;"> && exit 1</span><span>"
</span><span> },
</span><span> "</span><span style="color:#a3be8c;">keywords</span><span>": [],
</span><span> "</span><span style="color:#a3be8c;">author</span><span>": "</span><span style="color:#a3be8c;">Peter Babič <peter@peterbabic.dev> (https://peterbabic.dev/)</span><span>",
</span><span> "</span><span style="color:#a3be8c;">license</span><span>": "</span><span style="color:#a3be8c;">MIT</span><span>"
</span><span>}
</span></code></pre>
<p>The versions used, for completeness</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>$ nvm --version
</span><span>0.35.2
</span><span>
</span><span>$ npm --version
</span><span>7.0.15
</span><span>
</span><span>$ node --version
</span><span>v15.4.0
</span><span>
</span><span>$ yay -Qi yadm | grep Version
</span><span>Version : 2.5.0-1
</span></code></pre>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://codeburst.io/setting-global-npm-defaults-for-quick-starting-new-projects-ed06ed22edb3">https://codeburst.io/setting-global-npm-defaults-for-quick-starting-new-projects-ed06ed22edb3</a></li>
<li><a href="https://stackabuse.com/the-ultimate-guide-to-configuring-npm/">https://stackabuse.com/the-ultimate-guide-to-configuring-npm/</a></li>
<li><a href="https://stackoverflow.com/questions/34718528/nvm-is-not-compatible-with-the-npm-config-prefix-option">https://stackoverflow.com/questions/34718528/nvm-is-not-compatible-with-the-npm-config-prefix-option</a></li>
</ul>
How to assert sorted dates in Cypress2020-12-08T00:00:00+00:002020-12-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/how-to-assert-sorted-dates-cypress/<p>Here's how I use Cypress to assert that the user interface I am building
displays datetime information sorted chronologically. Consider the list of
dates served at port 3000 that looks like this:</p>
<pre data-lang="html" style="background-color:#2b303b;color:#c0c5ce;" class="language-html "><code class="language-html" data-lang="html"><span><</span><span style="color:#bf616a;">ul </span><span style="color:#8fa1b3;">id</span><span>="</span><span style="color:#a3be8c;">sorted</span><span>">
</span><span> <</span><span style="color:#bf616a;">li</span><span>>14.12.1999</</span><span style="color:#bf616a;">li</span><span>>
</span><span> <</span><span style="color:#bf616a;">li</span><span>>12.03.1975</</span><span style="color:#bf616a;">li</span><span>>
</span><span> <</span><span style="color:#bf616a;">li</span><span>>28.02.2001</</span><span style="color:#bf616a;">li</span><span>>
</span><span> <</span><span style="color:#bf616a;">li</span><span>>20.08.2010</</span><span style="color:#bf616a;">li</span><span>>
</span><span> <</span><span style="color:#bf616a;">li</span><span>>05.07.2018</</span><span style="color:#bf616a;">li</span><span>>
</span><span></</span><span style="color:#bf616a;">ul</span><span>>
</span></code></pre>
<p>I live in Europe and the format we write dates in is <strong>dd.mm.yyyy</strong></p>
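<p>For illustration, parsing this format needs no library at all; a minimal sketch (unlike <code>parse</code> from <code>date-fns</code>, it does no validation of the input):</p>

```javascript
// Parse a dd.mm.yyyy string into a Date. A dependency-free sketch;
// it performs no validation of the input string.
const parseEuropeanDate = (text) => {
  const [day, month, year] = text.split(".").map(Number)
  return new Date(year, month - 1, day) // JS months are zero-based
}

console.log(parseEuropeanDate("14.12.1999").getFullYear())
```
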
<ul>
<li>Start by installing Cypress dev dependencies</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">npm</span><span> i</span><span style="color:#bf616a;"> -D</span><span> cypress start-server-and-test date-fns
</span></code></pre>
<ul>
<li>Create <code>cypress.json</code> and make sure the port matches your app's port</li>
</ul>
<pre data-lang="json" style="background-color:#2b303b;color:#c0c5ce;" class="language-json "><code class="language-json" data-lang="json"><span>{
</span><span> "</span><span style="color:#a3be8c;">baseUrl</span><span>": "</span><span style="color:#a3be8c;">http://localhost:3000</span><span>"
</span><span>}
</span></code></pre>
<ul>
<li>Edit <code>package.json</code> and adjust it to your needs, again matching the
app's port</li>
</ul>
<pre data-lang="json" style="background-color:#2b303b;color:#c0c5ce;" class="language-json "><code class="language-json" data-lang="json"><span>"</span><span style="color:#a3be8c;">scripts</span><span>": {
</span><span> "</span><span style="color:#a3be8c;">dev</span><span>": "</span><span style="color:#a3be8c;">node .</span><span>",
</span><span> "</span><span style="color:#a3be8c;">cy:run</span><span>": "</span><span style="color:#a3be8c;">cypress run</span><span>",
</span><span> "</span><span style="color:#a3be8c;">test</span><span>": "</span><span style="color:#a3be8c;">start-test dev 3000 cy:run</span><span>"
</span><span> }
</span></code></pre>
<ul>
<li>Initialize Cypress</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">npx</span><span> cypress run && </span><span style="color:#bf616a;">mkdir -p</span><span> cypress/integration
</span></code></pre>
<ul>
<li>Create the test at <code>cypress/integration/spec.js</code></li>
</ul>
<pre data-lang="javascript" style="background-color:#2b303b;color:#c0c5ce;" class="language-javascript "><code class="language-javascript" data-lang="javascript"><span style="color:#65737e;">/// <</span><span style="color:#bf616a;">reference </span><span style="color:#d08770;">types</span><span>="</span><span style="color:#a3be8c;">cypress</span><span>" </span><span style="color:#65737e;">/>
</span><span style="color:#b48ead;">import </span><span>{ </span><span style="color:#bf616a;">parse </span><span>} </span><span style="color:#b48ead;">from </span><span>"</span><span style="color:#a3be8c;">date-fns</span><span>"
</span><span>
</span><span style="color:#8fa1b3;">describe</span><span>("</span><span style="color:#a3be8c;">Date list should</span><span>", () </span><span style="color:#b48ead;">=> </span><span>{
</span><span> </span><span style="color:#8fa1b3;">it</span><span>("</span><span style="color:#a3be8c;">have dates sorted chronologically</span><span>", () </span><span style="color:#b48ead;">=> </span><span>{
</span><span> </span><span style="color:#bf616a;">cy</span><span>.</span><span style="color:#8fa1b3;">visit</span><span>("</span><span style="color:#a3be8c;">/</span><span>")
</span><span>
</span><span> </span><span style="color:#b48ead;">const </span><span style="color:#8fa1b3;">parseDate </span><span>= </span><span style="color:#bf616a;">date </span><span style="color:#b48ead;">=> </span><span style="color:#8fa1b3;">parse</span><span>(</span><span style="color:#bf616a;">date</span><span>, "</span><span style="color:#a3be8c;">dd.MM.yyyy</span><span>", new Date())
</span><span> </span><span style="color:#b48ead;">let </span><span style="color:#bf616a;">prevDate </span><span>= </span><span style="color:#8fa1b3;">parseDate</span><span>("</span><span style="color:#a3be8c;">01.01.1970</span><span>")
</span><span>
</span><span> </span><span style="color:#bf616a;">cy</span><span>.</span><span style="color:#96b5b4;">get</span><span>("</span><span style="color:#a3be8c;">ul#sorted li</span><span>").</span><span style="color:#8fa1b3;">each</span><span>(</span><span style="color:#bf616a;">$pre </span><span style="color:#b48ead;">=> </span><span>{
</span><span> </span><span style="color:#b48ead;">const </span><span style="color:#bf616a;">currentDate </span><span>= </span><span style="color:#8fa1b3;">parseDate</span><span>(</span><span style="color:#bf616a;">$pre</span><span>.</span><span style="color:#8fa1b3;">text</span><span>())
</span><span> </span><span style="color:#8fa1b3;">expect</span><span>(</span><span style="color:#bf616a;">prevDate</span><span>).</span><span style="color:#bf616a;">to</span><span>.</span><span style="color:#bf616a;">be</span><span>.</span><span style="color:#8fa1b3;">lte</span><span>(</span><span style="color:#bf616a;">currentDate</span><span>)
</span><span>
</span><span> </span><span style="color:#bf616a;">prevDate </span><span>= </span><span style="color:#bf616a;">currentDate
</span><span> })
</span><span> })
</span><span>})
</span></code></pre>
<ul>
<li>
<p>Note that <code>Cypress.moment</code> is
<a href="https://github.com/cypress-io/cypress/issues/8714">deprecated</a> as
of
<a href="https://github.com/cypress-io/cypress/releases/tag/v6.1.0">Cypress v6.1.0</a>,
which is why <code>date-fns</code> is used instead</p>
</li>
<li>
<p>Run the test</p>
</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">npm</span><span> run test
</span></code></pre>
<p>Done!</p>
<p>The sources are available in the
<a href="https://github.com/peterbabic/sources-peterbabic.dev/tree/master/how-to-assert-sorted-dates-cypress">repository</a></p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://www.npmjs.com/package/start-server-and-test#alias">https://www.npmjs.com/package/start-server-and-test#alias</a></li>
<li><a href="https://date-fns.org/v2.16.1/docs/parse">https://date-fns.org/v2.16.1/docs/parse</a></li>
</ul>
Following file renames in gitlog2020-12-04T00:00:00+00:002020-12-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/following-renames-in-gitlog/<p>My <a href="/blog/following-renames-in-gitlog/">previous attempt</a> to get
the published date and the last edited date of a post that lives entirely
in git reached a dead end, because I could not reliably find out how to
handle renames. I have finally found a working way.</p>
<ul>
<li>Start by preparing a file with a git history, containing a rename</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> log</span><span style="color:#bf616a;"> --follow --name-status</span><span> renamed-blog-post.md
</span></code></pre>
<p>Note the <strong>follow</strong> parameter, which helps produce output that might look
like this</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>commit 48831b93a453f7c88838620509ccae6f9feaf851 (HEAD -> master)
</span><span>Author: Peter Babič <peter@peterbabic.dev>
</span><span>Date: Thu Dec 3 22:07:31 2020 +0100
</span><span>
</span><span> add additional sentence to to blog post
</span><span>
</span><span>M renamed-blog-post.md
</span><span>
</span><span>commit f6732cbfb7d787f62190b983f73901dd05f749e5
</span><span>Author: Peter Babič <peter@peterbabic.dev>
</span><span>Date: Thu Dec 3 21:51:19 2020 +0100
</span><span>
</span><span> insert a chapter into post
</span><span>
</span><span>M renamed-blog-post.md
</span><span>
</span><span>commit 70955f7c2ecdec469226f8226a10ad313497972e
</span><span>Author: Peter Babič <peter@peterbabic.dev>
</span><span>Date: Thu Dec 3 21:49:27 2020 +0100
</span><span>
</span><span> rename blog post
</span><span>
</span><span>R100 blog-post.md renamed-blog-post.md
</span><span>
</span><span>commit 86b45b4a5a7aee4726834e70f0ede60ac961abc5
</span><span>Author: Peter Babič <peter@peterbabic.dev>
</span><span>Date: Thu Dec 3 20:54:27 2020 +0100
</span><span>
</span><span> insert blog post file to track
</span><span>
</span><span>A blog-post.md
</span></code></pre>
<p>The goal is to have these dates accessible in TypeScript</p>
<ul>
<li>Install required packages</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">npm</span><span> install gitlog date-fns
</span></code></pre>
<ul>
<li>Install required dev packages</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">npm</span><span> install</span><span style="color:#bf616a;"> -D</span><span> typescript ts-node-dev
</span></code></pre>
<ul>
<li>The minimal <code>tsconfig.json</code> that worked for me</li>
</ul>
<pre data-lang="json" style="background-color:#2b303b;color:#c0c5ce;" class="language-json "><code class="language-json" data-lang="json"><span>{
</span><span> "</span><span style="color:#a3be8c;">compilerOptions</span><span>": {
</span><span> "</span><span style="color:#a3be8c;">target</span><span>": "</span><span style="color:#a3be8c;">es6</span><span>",
</span><span> "</span><span style="color:#a3be8c;">module</span><span>": "</span><span style="color:#a3be8c;">commonjs</span><span>",
</span><span> "</span><span style="color:#a3be8c;">lib</span><span>": ["</span><span style="color:#a3be8c;">ES2017</span><span>", "</span><span style="color:#a3be8c;">DOM</span><span>"]
</span><span> }
</span><span>}
</span></code></pre>
<ul>
<li>The minimal code for <code>server.ts</code> looks like this</li>
</ul>
<pre data-lang="ts" style="background-color:#2b303b;color:#c0c5ce;" class="language-ts "><code class="language-ts" data-lang="ts"><span style="color:#b48ead;">import </span><span style="color:#bf616a;">gitlog</span><span>, { </span><span style="color:#bf616a;">GitlogOptions </span><span>} </span><span style="color:#b48ead;">from </span><span>"</span><span style="color:#a3be8c;">gitlog</span><span>"
</span><span>
</span><span style="color:#b48ead;">const </span><span style="color:#bf616a;">options</span><span>: GitlogOptions = {
</span><span> repo: "</span><span style="color:#a3be8c;">.</span><span>",
</span><span> fields: ["</span><span style="color:#a3be8c;">subject</span><span>", "</span><span style="color:#a3be8c;">authorName</span><span>", "</span><span style="color:#a3be8c;">authorDate</span><span>"] </span><span style="color:#b48ead;">as const</span><span>,
</span><span> branch: "</span><span style="color:#a3be8c;">--follow</span><span>",
</span><span> file: "</span><span style="color:#a3be8c;">renamed-blog-post.md</span><span>",
</span><span>}
</span><span>
</span><span style="color:#8fa1b3;">gitlog</span><span>(</span><span style="color:#bf616a;">options</span><span>).</span><span style="color:#96b5b4;">forEach</span><span>(</span><span style="color:#bf616a;">entry </span><span style="color:#b48ead;">=> </span><span style="color:#ebcb8b;">console</span><span>.</span><span style="color:#96b5b4;">log</span><span>(</span><span style="color:#bf616a;">entry</span><span>))
</span></code></pre>
<p>The <code>branch: "--follow"</code> is unfortunately a hack - at the time of writing,
<a href="https://www.npmjs.com/package/gitlog">gitlog</a> is at version <strong>4.0.3</strong> and
does not support the <code>follow</code> parameter directly. Inspecting the
<a href="https://github.com/domharrington/node-gitlog/blob/cdda193e428bcde0f6c64163e73055d816792c98/src/index.ts#L278">code</a>,
however, reveals that <strong>branch</strong> allows sneaking in any text, not just
branch names, because there is no sanitization there.</p>
<ul>
<li>Running the script</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">npx</span><span> ts-node-dev server.ts
</span></code></pre>
<p>This produces the desired results; the <code>authorDate</code> property is easy to parse</p>
<pre data-lang="js" style="background-color:#2b303b;color:#c0c5ce;" class="language-js "><code class="language-js" data-lang="js"><span>{
</span><span> status: [ '</span><span style="color:#a3be8c;">M</span><span>' ],
</span><span> files: [ '</span><span style="color:#a3be8c;">renamed-blog-post.md</span><span>' ],
</span><span> subject: '</span><span style="color:#a3be8c;">add additional sentence to to blog post</span><span>',
</span><span> authorName: '</span><span style="color:#a3be8c;">Peter Babič</span><span>',
</span><span> authorDate: '</span><span style="color:#a3be8c;">2020-12-03 22:07:31 +0100</span><span>'
</span><span>}
</span><span>{
</span><span> status: [ '</span><span style="color:#a3be8c;">M</span><span>' ],
</span><span> files: [ '</span><span style="color:#a3be8c;">renamed-blog-post.md</span><span>' ],
</span><span> subject: '</span><span style="color:#a3be8c;">insert a chapter into post</span><span>',
</span><span> authorName: '</span><span style="color:#a3be8c;">Peter Babič</span><span>',
</span><span> authorDate: '</span><span style="color:#a3be8c;">2020-12-03 21:51:19 +0100</span><span>'
</span><span>}
</span><span>{
</span><span> status: [ '</span><span style="color:#a3be8c;">R100</span><span>', '</span><span style="color:#a3be8c;">D</span><span>' ],
</span><span> files: [ '</span><span style="color:#a3be8c;">renamed-blog-post.md</span><span>', '</span><span style="color:#a3be8c;">blog-post.md</span><span>' ],
</span><span> subject: '</span><span style="color:#a3be8c;">rename blog post</span><span>',
</span><span> authorName: '</span><span style="color:#a3be8c;">Peter Babič</span><span>',
</span><span> authorDate: '</span><span style="color:#a3be8c;">2020-12-03 21:49:27 +0100</span><span>'
</span><span>}
</span><span>{
</span><span> status: [ '</span><span style="color:#a3be8c;">A</span><span>' ],
</span><span> files: [ '</span><span style="color:#a3be8c;">blog-post.md</span><span>' ],
</span><span> subject: '</span><span style="color:#a3be8c;">insert blog post file to track</span><span>',
</span><span> authorName: '</span><span style="color:#a3be8c;">Peter Babič</span><span>',
</span><span> authorDate: '</span><span style="color:#a3be8c;">2020-12-03 20:54:27 +0100</span><span>'
</span><span>}
</span></code></pre>
<p>The parsing steps could then include</p>
<ol>
<li>Reverse the entries</li>
<li>The entry with the <strong>A</strong> status holds the date when the file was
created (post was published)</li>
<li>The last entry with the <strong>M</strong> status holds the date of the last edit</li>
<li>If the last entry holds the <strong>R</strong> status and its <em>score</em> is lower than
100, the file was renamed <strong>and</strong> edited in the same commit, so that
entry holds the last edit date</li>
</ol>
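<p>These steps can be sketched in a few lines (field names are taken from the
gitlog output above; <code>datesFromLog</code> is a hypothetical helper name):</p>

```javascript
// Sketch of the parsing steps above, assuming gitlog() entries arrive
// newest-first with status arrays like ['A'], ['M'] or ['R100', 'D']
const datesFromLog = entries => {
  const chrono = [...entries].reverse() // oldest first
  // The A entry marks creation (the published date)
  const created = chrono.find(e => e.status.some(s => s.startsWith("A")))
  // An edit is an M entry, or an R entry whose score is below 100
  const edits = chrono.filter(e =>
    e.status.some(
      s => s.startsWith("M") || (s.startsWith("R") && Number(s.slice(1)) < 100)
    )
  )
  const lastEdit = edits.length > 0 ? edits[edits.length - 1] : created
  return { published: created.authorDate, edited: lastEdit.authorDate }
}
```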
<p>The details about the <strong>score</strong> from the git diff
<a href="https://git-scm.com/docs/git-diff#_raw_output_format">documentation</a></p>
<blockquote>
<p>Status letters C and R are always followed by a score (denoting the
percentage of similarity between the source and target of the move or
copy). Status letter M may be followed by a score (denoting the
percentage of dissimilarity) for file rewrites.</p>
</blockquote>
<p>Done!</p>
<p>Sources are available in the
<a href="https://github.com/peterbabic/sources-peterbabic.dev/tree/master/following-renames-in-gitlog">repository</a></p>
Prevent push when skipping Cypress tests2020-12-03T00:00:00+00:002020-12-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/prevent-push-when-skipping-cypress-tests/<p><strong>There is a more up-to-date method I use now, described in
<a href="/blog/prevent-push-when-skipping-cypress-tests-pt-2/">part 2</a>.</strong></p>
<p>Here's the way I prevent pushing changes when some of the Cypress tests
are being skipped</p>
<ul>
<li>Install required packages</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">npm</span><span> i</span><span style="color:#bf616a;"> -D</span><span> cypress husky start-server-and-test
</span></code></pre>
<ul>
<li>Define your <code>dev</code> script in <code>package.json</code> if not done by your
scaffolding</li>
</ul>
<pre data-lang="diff" style="background-color:#2b303b;color:#c0c5ce;" class="language-diff "><code class="language-diff" data-lang="diff"><span>...
</span><span>"scripts": {
</span><span style="color:#a3be8c;">+ "dev": "<start your server>"
</span><span>}
</span></code></pre>
<p>Test it by running the following and make sure your server starts</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">npm</span><span> run dev
</span></code></pre>
<ul>
<li>Create the file <code>cypress.json</code>, modify the port if needed</li>
</ul>
<pre data-lang="json" style="background-color:#2b303b;color:#c0c5ce;" class="language-json "><code class="language-json" data-lang="json"><span>{
</span><span> "</span><span style="color:#a3be8c;">baseUrl</span><span>": "</span><span style="color:#a3be8c;">http://localhost:3000/</span><span>"
</span><span>}
</span></code></pre>
<ul>
<li>Common Cypress run script</li>
</ul>
<pre data-lang="diff" style="background-color:#2b303b;color:#c0c5ce;" class="language-diff "><code class="language-diff" data-lang="diff"><span>...
</span><span>"scripts": {
</span><span> "dev": "<start your server>",
</span><span style="color:#a3be8c;">+ "cy:run": "cypress run"
</span><span>}
</span></code></pre>
<p>Running Cypress for
<a href="https://github.com/cypress-io/cypress/issues/619">the first time initializes it</a></p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>$ npm run cy:run
</span><span>Can't run because no spec files were found.
</span><span>We searched for any files inside of this folder:
</span><span>/project-path/cypress/integration
</span></code></pre>
<ul>
<li>Test script
<a href="https://docs.cypress.io/guides/guides/continuous-integration.html#Solutions">waits for <code>dev</code> and then does <code>cy:run</code></a></li>
</ul>
<pre data-lang="diff" style="background-color:#2b303b;color:#c0c5ce;" class="language-diff "><code class="language-diff" data-lang="diff"><span>...
</span><span>"scripts": {
</span><span> "dev": "<start your server>",
</span><span> "cy:run": "cypress run",
</span><span style="color:#a3be8c;">+ "test": "start-server-and-test dev http://localhost:3000 cy:run"
</span><span>}
</span></code></pre>
<p><code>start-test</code> is a shorter alias, and
<a href="https://www.npmjs.com/package/start-server-and-test#alias">specifying just the port is sufficient</a></p>
<pre data-lang="diff" style="background-color:#2b303b;color:#c0c5ce;" class="language-diff "><code class="language-diff" data-lang="diff"><span>...
</span><span>"scripts": {
</span><span> "dev": "<start your server>",
</span><span> "cy:run": "cypress run",
</span><span style="color:#bf616a;">- "test": "start-server-and-test dev http://localhost:3000 cy:run"
</span><span style="color:#a3be8c;">+ "test": "start-test dev 3000 cy:run"
</span><span>}
</span></code></pre>
<ul>
<li>Create the simplest test in <code>cypress/integration/spec.js</code></li>
</ul>
<pre data-lang="js" style="background-color:#2b303b;color:#c0c5ce;" class="language-js "><code class="language-js" data-lang="js"><span style="color:#65737e;">/// <</span><span style="color:#bf616a;">reference </span><span style="color:#d08770;">types</span><span>="</span><span style="color:#a3be8c;">cypress</span><span>" </span><span style="color:#65737e;">/>
</span><span style="color:#8fa1b3;">describe</span><span>("</span><span style="color:#a3be8c;">Simplest test should</span><span>", () </span><span style="color:#b48ead;">=> </span><span>{
</span><span> </span><span style="color:#8fa1b3;">it</span><span>("</span><span style="color:#a3be8c;">visit base URL</span><span>", () </span><span style="color:#b48ead;">=> </span><span>{
</span><span> </span><span style="color:#bf616a;">cy</span><span>.</span><span style="color:#8fa1b3;">visit</span><span>("</span><span style="color:#a3be8c;">/</span><span>")
</span><span> })
</span><span>})
</span></code></pre>
<p>So far so good, just a quick sanity check</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">npm</span><span> run test
</span></code></pre>
<ul>
<li>Add a <code>pre-push</code>
<a href="https://github.com/typicode/husky/tree/master#install">hook into <code>package.json</code></a></li>
</ul>
<pre data-lang="diff" style="background-color:#2b303b;color:#c0c5ce;" class="language-diff "><code class="language-diff" data-lang="diff"><span>"scripts": {
</span><span> "dev": "<start your server>",
</span><span> "cy:run": "cypress run",
</span><span> "test": "start-test dev 3000 cy:run"
</span><span>},
</span><span>...
</span><span style="color:#a3be8c;">+"husky": {
</span><span style="color:#a3be8c;">+ "hooks": {
</span><span style="color:#a3be8c;">+ "pre-push": "npm run test"
</span><span style="color:#a3be8c;">+ }
</span><span>}
</span></code></pre>
<ul>
<li>Modify the <code>spec.js</code> to contain either <code>.skip</code> or <code>.only</code>
<a href="https://docs.cypress.io/guides/core-concepts/writing-and-organizing-tests.html#Excluding-and-Including-Tests">Mocha modifier</a></li>
</ul>
<pre data-lang="js" style="background-color:#2b303b;color:#c0c5ce;" class="language-js "><code class="language-js" data-lang="js"><span style="color:#65737e;">/// <</span><span style="color:#bf616a;">reference </span><span style="color:#d08770;">types</span><span>="</span><span style="color:#a3be8c;">cypress</span><span>" </span><span style="color:#65737e;">/>
</span><span style="color:#8fa1b3;">describe</span><span>("</span><span style="color:#a3be8c;">Simplest test should</span><span>", () </span><span style="color:#b48ead;">=> </span><span>{
</span><span> </span><span style="color:#bf616a;">it</span><span>.</span><span style="color:#8fa1b3;">only</span><span>("</span><span style="color:#a3be8c;">visit base URL</span><span>", () </span><span style="color:#b48ead;">=> </span><span>{
</span><span> </span><span style="color:#bf616a;">cy</span><span>.</span><span style="color:#8fa1b3;">visit</span><span>("</span><span style="color:#a3be8c;">/</span><span>")
</span><span> })
</span><span>})
</span></code></pre>
<ul>
<li>Lastly, add
<a href="https://unix.stackexchange.com/a/433713/109352">skipped-test detection before running the tests</a></li>
</ul>
<pre data-lang="diff" style="background-color:#2b303b;color:#c0c5ce;" class="language-diff "><code class="language-diff" data-lang="diff"><span>...
</span><span>"scripts": {
</span><span> "dev": "<start your server>",
</span><span> "cy:run": "cypress run",
</span><span> "test": "start-test dev 3000 cy:run"
</span><span>},
</span><span>"husky": {
</span><span> "hooks": {
</span><span style="color:#bf616a;">- "pre-push": "npm run test"
</span><span style="color:#a3be8c;">+ "pre-push": "grep -Rvzq -e '.skip' -e '.only' cypress/integration && npm run test"
</span><span> }
</span><span>}
</span></code></pre>
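<p>To unpack the grep check in the hook above: it succeeds, and therefore lets
the <code>&amp;&amp;</code> chain continue, only based on its exit status. A throwaway
illustration (paths here are examples, not from the project):</p>

```shell
# -R recurses, -z treats each file as a single record, -v selects records
# matching neither pattern, -q suppresses output; the exit status alone
# gates the && chain
dir=$(mktemp -d)
printf 'it("visits base URL", () => {})\n' > "$dir/spec.js"
grep -Rvzq -e '.skip' -e '.only' "$dir" && echo "push allowed"
# prints "push allowed"
```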
<p>Done!</p>
<p>Sources are available in the
<a href="https://github.com/peterbabic/sources-peterbabic.dev/tree/master/prevent-push-when-skipping-cypress-tests">repository</a></p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://github.com/cypress-io/cypress-skip-test">https://github.com/cypress-io/cypress-skip-test</a></li>
<li><a href="https://medium.com/swlh/react-js-adding-eslint-with-prettier-husky-git-hook-480ad39e65e9">https://medium.com/swlh/react-js-adding-eslint-with-prettier-husky-git-hook-480ad39e65e9</a></li>
<li><a href="https://github.com/lo1tuma/eslint-plugin-mocha/blob/master/docs/rules/no-skipped-tests.md">https://github.com/lo1tuma/eslint-plugin-mocha/blob/master/docs/rules/no-skipped-tests.md</a></li>
<li><a href="https://www.cypress.io/blog/2017/05/30/cypress-and-immutable-deploys/">https://www.cypress.io/blog/2017/05/30/cypress-and-immutable-deploys/</a></li>
<li><a href="https://github.com/cypress-io/eslint-plugin-cypress/issues/31#issuecomment-590037953">https://github.com/cypress-io/eslint-plugin-cypress/issues/31#issuecomment-590037953</a></li>
</ul>
Are OTP secrets stored in plaintext2020-11-12T00:00:00+00:002020-12-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/are-otp-secrets-stored-plaintext/<p>What happens with OTP secrets when a user database gets leaked? Could an
attacker use them to gain access to your other sensitive information? How
are they even stored on the server?</p>
<h2 id="storing-password">Storing password</h2>
<p>One of the most widely used methods to log into a service today is still a
password that is shared between you and the server and kept secret.
Fortunately, it is becoming common knowledge that this approach has some
vulnerabilities. An attacker can get hold of your password because, among
other things, he can guess it or record it as you type.</p>
<p>It is also hopefully widely known that passwords should not be stored in
plain text. When a database storing easily readable passwords is exposed,
the attacker can try the obtained passwords on different services, gaining
instant access if passwords are reused. Even if they are not reused,
reading a password can reveal the pattern used to create it and
consequently help guess the same user's passwords on different
services.</p>
<p>To protect passwords against exposure, they are mathematically garbled in
a way that cannot be ungarbled back, making sure that even identical
passwords of different users are garbled into different forms. These
processes are called hashing and salting respectively, and the whole
procedure is guaranteed to be repeatable: any password garbled this way
will, for the same user, always result in the same form.</p>
<p>What happens when you type your password into the login field? It gets
transmitted to the server, hashed and salted, and then compared against
the garble that is stored in the database. You are allowed access when
both garbles match precisely. The problem is that since passwords are
still delivered to the server in a form that can be read as plain text, an
attacker controlling such a server can try to log in to other services
with your reused password the moment you log in.</p>
<p>To mitigate this problem, one should never reuse passwords and should use
randomly generated ones consisting of hard-to-remember character sequences
that are also hard to type in properly. Service providers do not want to
force their users into this very secure but completely inconvenient
approach, because it would drive the business elsewhere. Also, the server
has absolutely no way to guarantee that the user chose a unique password
not used on any other server.</p>
<h2 id="otp-or-one-time-password">OTP or one-time-password</h2>
<p>Solutions to protect a password-using user against an attacker are
available, but there is currently no such thing as perfect security. A
general approach is adding layers of security up to a point where it is
still convenient enough to be used. One-time passwords are commonly used
as one of such layers. Broadly speaking, you do not only present yourself
with one password that you know, but also with another one that is
generated for you.</p>
<p>To make that generated password usable over and over, it has to be
different every time it is used, otherwise it is effectively just another
password. Furthermore, for any password to qualify as a one-time password,
it has to be rejected right after its first use. One consequence of this
property is that if an attacker manages to get hold of such a password but
uses it after you, it is effectively useless and he is out of luck this
time.</p>
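<p>The reject-after-first-use property can be sketched in a few lines (an
in-memory set here stands in for whatever store a real server would use):</p>

```javascript
// Sketch: a code is accepted exactly once; any replay is rejected
const usedCodes = new Set()

const redeem = code => {
  if (usedCodes.has(code)) return false // already used, reject
  usedCodes.add(code)
  return true // first use, accept
}
```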
<p>To make sure the generated password is different every time, it has to be
derived from a varying piece of starting information, also called a seed
value. For a server to be able to verify this password, it has to access
the same seed value the password was generated from. Ignoring all other
options again, a Unix timestamp fits the description. Using a Unix
timestamp is convenient, because it makes it easy to generate a
short-lived one-time password, which self-destructs not only when first
used but also after some short time since it was first issued, regardless
of whether it was used or not. This short time is generally less than a
minute and makes it really hard for the attacker to successfully use such
a volatile password, even when captured. This technique is called
Time-based One-Time Password, or TOTP.</p>
<p>Unfortunately, a Unix timestamp is public information the attacker has
access to as well, so it cannot be used as a password on its own. It has
to be mathematically combined with some secret that the attacker does not
know. This secret is generated on the server and transferred to your
device during the initial setup, for instance by scanning a QR code with
your phone. This secret is called the OTP secret.</p>
<h2 id="storing-otp-secrets">Storing OTP secrets</h2>
<p>Now that we know the OTP secret is combined with the timestamp to generate
a short-lived TOTP token, as it is also called, we can dig a little
deeper. We also know that passwords are stored in a way that does not
reveal anything when exposed. But what about the OTP secrets? Could they
be garbled the same way the passwords are? I was genuinely interested in
finding an answer to this question.</p>
<p>By searching the Internet I have concluded that unfortunately this
<a href="https://stackoverflow.com/questions/46055146/should-2fa-secret-codes-be-hashed-for-storage">would not work</a>,
also discussed
<a href="https://stackoverflow.com/questions/15962195/is-it-possible-to-salt-and-or-hash-hotp-totp-secret-on-the-server">here</a>,
because the server has to calculate the TOTP token from the secret and
compare it with the token provided by the user, rather than comparing
stored values directly. For this to work, the secret has to be stored in a
way that allows recovering the original.</p>
<p>The simplest way to fulfill this condition is to store the OTP secrets as
plain text. Imagine the scenario with the attacker getting access to the
database again, but now not just with unreadable passwords but also with
very readable OTP secrets. How does the situation change? Well, he can now
generate TOTP tokens at will that would appear as if they were from you,
effectively as if he had stolen the device you used to generate your
tokens (something you own). He still cannot log in to the service
impersonating you, because he cannot retrieve the password (something you
know).</p>
<p>The combination of something you know with something you own is the basis
of Multi-Factor Authentication, or MFA for short. When precisely two
factors are used, it is usually referred to as Two-Factor Authentication,
or 2FA.</p>
<h2 id="encryption">Encryption</h2>
<p>The obvious way to protect the OTP secrets is
<a href="https://security.stackexchange.com/questions/42795/storing-seed-for-totp">to encrypt them</a>,
also discussed
<a href="https://security.stackexchange.com/questions/125119/totp-storing-symmetrical-secrets">here</a>.
This, however, works only when the general key used to decrypt the OTP
secrets of all users was not exposed together with the database itself.
Not ideal, but better than nothing.</p>
<p>Searching the Internet more, it became clearer to me that I am definitely
not the only one thinking about protecting OTP secrets on the server, as
discussed
<a href="https://stackoverflow.com/questions/14271136/store-secret-key-for-totp">here</a>,
<a href="https://1password.community/discussion/101004/are-totp-secrets-stored-in-plaintext">here</a>
and
<a href="https://security.stackexchange.com/questions/52499/are-there-any-secure-ways-to-store-the-secret-key-used-in-a-totp-scheme">here</a>.
Well, it turns out that there is one solution.</p>
<p>Decrypting the OTP secret with a password provided by the user. Now, the
attacker needs the plain text password to get hold of the plain text OTP
secret to generate his own TOTP tokens. You can pause for a moment at this
point and try to think what the downside of this approach is.</p>
<h2 id="password-managers">Password managers</h2>
<p>Before I confirm your answer, let me talk a little bit about password
managers. A password manager is an app that protects all your passwords
with just a single one, called a master password. Your protected passwords
can be very strong and completely unique, and the only way to get to them
is to break your master password. If a strong one is used, that can be
quite hard.</p>
<p>Lately, password managers have started offering to generate the TOTP
token as well. Many argue that this boils down to Single Factor
Authentication. Breaking into the database where your passwords and your
OTP secrets are stored is all the attacker needs to get into any of your
services stored there. From the user's perspective, this is very
convenient, yet it leads to a false sense of security. To make it clear,
passwords and OTP secrets should be stored in databases protected by two
different means to be more secure.</p>
<p>To be honest, the concerns about storing the OTP secrets in a password
manager along with the actual passwords led me to the discovery that OTP
secrets can be exposed on either the user's side or the server's. That
fact alone is quite disturbing to me.</p>
<p>Yet as I said a few moments back, the OTP secret on the server side can be
successfully decrypted with the password provided by the user. But there
is a big but. If the user asks for a password reset, the server has to
either deactivate the 2FA or instruct the user to reinitialize the OTP
generation process
<a href="https://security.stackexchange.com/questions/181184/storing-totp-secret-in-database-plaintext-or-encrypted#comment351922_181184">every time the password changes</a>.
I am not really sure this is implemented on any service, though.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Managing security is quite a hassle. Sometimes I wish I had never started
and just used some simple weak password everywhere, or even no password at
all. But everyone should do his own research and decide what level of
privacy he wants to keep.</p>
<p>Weak, and even weak reused, passwords can be protected well enough for a
regular user by enabling 2FA. Using strong passwords with a password
manager is another approach. Combining both worlds brings the most
security. The tempting shortcut of protecting the OTP secrets with the
same master password that protects your other passwords is considered a
bad practice. It increases convenience while at the same time not
providing the benefits a true second factor provides. Or does it? And will
passwordless authentication come soon enough that all these questions are
rendered obsolete?</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://github.com/browserpass/browserpass-legacy/issues/322#issuecomment-483373017">https://github.com/browserpass/browserpass-legacy/issues/322#issuecomment-483373017</a></li>
<li><a href="https://www.freecodecamp.org/news/how-time-based-one-time-passwords-work-and-why-you-should-use-them-in-your-app-fdd2b9ed43c3/">https://www.freecodecamp.org/news/how-time-based-one-time-passwords-work-and-why-you-should-use-them-in-your-app-fdd2b9ed43c3/</a></li>
<li><a href="https://blog.securityevaluators.com/psa-dont-store-2fa-codes-in-password-managers-77d92608b062">https://blog.securityevaluators.com/psa-dont-store-2fa-codes-in-password-managers-77d92608b062</a></li>
<li><a href="https://safecontrols.blog/2019/02/25/storing-seeds-for-multifactor-authentication-tokens/">https://safecontrols.blog/2019/02/25/storing-seeds-for-multifactor-authentication-tokens/</a></li>
<li><a href="https://medium.com/@stuartschechter/before-you-turn-on-two-factor-authentication-27148cc5b9a1">https://medium.com/@stuartschechter/before-you-turn-on-two-factor-authentication-27148cc5b9a1</a></li>
<li><a href="https://www.reddit.com/r/security/comments/8mi5fe/is_it_a_bad_idea_to_store_totp_information_in/">https://www.reddit.com/r/security/comments/8mi5fe/is_it_a_bad_idea_to_store_totp_information_in/</a></li>
<li><a href="https://www.reddit.com/r/KeePass/comments/ff2rdf/keepass_otp/">https://www.reddit.com/r/KeePass/comments/ff2rdf/keepass_otp/</a></li>
<li><a href="https://blog.paranoidpenguin.net/2020/05/how-to-back-up-your-2fa-secret-keys-with-keepassxc/">https://blog.paranoidpenguin.net/2020/05/how-to-back-up-your-2fa-secret-keys-with-keepassxc/</a></li>
</ul>
Sync Keepass passwords between your computer and phone2020-11-09T00:00:00+00:002020-12-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/sync-keepass-passwords-between-computer-phone/<p>The release of KeepassDX v2.9 brings
<a href="https://github.com/Kunzisoft/KeePassDX/projects/20#card-46542303">working autofill</a>
in Chrome-based browsers in addition to Firefox-based ones. Here's how to
use it on an Android phone with passwords stored on your Arch Linux
computer.</p>
<ul>
<li>Start by installing required apps on the phone, for instance via F-Droid</li>
</ul>
<ol>
<li><a href="https://f-droid.org/en/packages/com.kunzisoft.keepass.libre/">KeepassDX</a></li>
<li><a href="https://f-droid.org/en/packages/com.nutomic.syncthingandroid/">Syncthing</a></li>
<li><a href="https://f-droid.org/en/packages/com.google.zxing.client.android/">Barcode scanner</a></li>
</ol>
<h2 id="keepassxc">KeepassXC</h2>
<ul>
<li>On the computer, the folder <code>~/Sync</code> is the default for Syncthing, so we
will use it</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">mkdir ~</span><span>/Sync
</span></code></pre>
<ul>
<li>Install KeepassXC if not already present</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> keepassxc
</span></code></pre>
<p>Create the database, choose a sufficiently strong master password that you
won't forget</p>
<ul>
<li>Place the KeepassXC database file to the sync folder</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">mv</span><span> /path/to/your/Passwords.kdbx </span><span style="color:#bf616a;">~</span><span>/Sync/Passwords.kdbx
</span></code></pre>
<p>Make sure the database contains at least one recent entry to verify the
sync functionality</p>
<h2 id="syncthing">Syncthing</h2>
<ul>
<li>Install Syncthing on the computer</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> syncthing
</span></code></pre>
<p>To make it run automatically, replace USERNAME with your user</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> systemctl enable syncthing@USERNAME.service</span><span style="color:#bf616a;"> --now
</span></code></pre>
<ul>
<li>When Syncthing has started, access the web GUI</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">xdg-open</span><span> http://localhost:8384/
</span></code></pre>
<p>Make sure the path is set to <code>~/Sync</code> under <strong>Folders</strong></p>
<blockquote>
<p><strong>Folder Path</strong> <code>/home/USERNAME/Sync</code></p>
</blockquote>
<ul>
<li>Display the QR code for pairing in the web GUI</li>
</ul>
<blockquote>
<p><strong>Actions</strong> > Show ID</p>
</blockquote>
<p>If you want autonomous synchronization, enable it as a service</p>
<blockquote>
<p><strong>Hamburger</strong> > <strong>Settings</strong> > <strong>Behaviour</strong> > <strong>Start service
automatically on boot</strong></p>
</blockquote>
<ul>
<li>Pair the devices, optionally fill in the device label before saving</li>
</ul>
<blockquote>
<p><strong>Devices</strong> > <strong>PLUS icon</strong> > <strong>QR icon</strong> > aim camera at computer screen</p>
</blockquote>
<ul>
<li>
<p>Back at the computer, close QR code dialog and wait for a device
announcement, click <strong>Accept</strong></p>
</li>
<li>
<p>Under <strong>Sharing</strong> tab, check the Sync folder and click <strong>Save</strong></p>
</li>
<li>
<p>Back at the phone, wait for the folder announcement, then click <strong>Accept</strong></p>
</li>
<li>
<p>Set the path to some newly created folder you can navigate to, e.g.
<code>/storage/emulated/0/Sync</code></p>
</li>
</ul>
<h2 id="keepassdx">KeepassDX</h2>
<ul>
<li>When the files are synchronized, find the database in KeepassDX in
the folder from the previous step</li>
</ul>
<blockquote>
<p><strong>Open existing database</strong> > <code>/storage/emulated/0/Sync/Passwords.kdbx</code></p>
</blockquote>
<ul>
<li>
<p>Open the database with your chosen Master password</p>
</li>
<li>
<p>Enable Magikeyboard</p>
</li>
</ul>
<blockquote>
<p><strong>...</strong> > <strong>Settings</strong> > <strong>Form filling</strong> > <strong>Device keyboard
settings</strong> > <strong>Magikeyboard (KeePassDX)</strong></p>
</blockquote>
<ul>
<li>Enable Autofill service</li>
</ul>
<blockquote>
<p><strong>...</strong> > <strong>Settings</strong> > <strong>Form filling</strong> > <strong>Set default autofill
service</strong> > <strong>KeePassDX form autofilling</strong></p>
</blockquote>
<p>Done!</p>
<p>You now have bi-directional, cross-browser, cross-device password
database synchronization that does not stand in your way.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://www.reddit.com/r/KeePass/comments/hu17rh/what_is_the_best_way_to_sync_a_keepass_database/">https://www.reddit.com/r/KeePass/comments/hu17rh/what_is_the_best_way_to_sync_a_keepass_database/</a></li>
<li><a href="https://wiki.archlinux.org/index.php/Syncthing#Autostarting_Syncthing">https://wiki.archlinux.org/index.php/Syncthing#Autostarting_Syncthing</a></li>
<li><a href="https://github.com/syncthing/syncthing">https://github.com/syncthing/syncthing</a></li>
<li><a href="https://github.com/keepassxreboot/keepassxc">https://github.com/keepassxreboot/keepassxc</a></li>
<li><a href="https://github.com/Kunzisoft/KeePassDX">https://github.com/Kunzisoft/KeePassDX</a></li>
<li><a href="https://news.ycombinator.com/item?id=24928088">https://news.ycombinator.com/item?id=24928088</a></li>
</ul>
Why I use losetup instead of udisksctl2020-10-28T00:00:00+00:002020-12-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/why-use-losetup-instead-udisksctl/<p>Accessing raw filesystem image partitions without the need to specify the
<strong>offset</strong> and <strong>size</strong> manually</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#65737e;"># udisksctl
</span><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> udisks2
</span><span style="color:#bf616a;">man</span><span> 1 udisksctl
</span><span>
</span><span style="color:#65737e;"># losetup
</span><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> util-linux
</span><span style="color:#bf616a;">man</span><span> 8 losetup
</span></code></pre>
<h3 id="udisksctl">udisksctl</h3>
<p>Root permissions are <strong>not</strong> required</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">udisksctl</span><span> loop-setup</span><span style="color:#bf616a;"> --read-only --file </span><span><raw-image>.img
</span><span style="color:#bf616a;">udisksctl</span><span> loop-setup</span><span style="color:#bf616a;"> -rf </span><span><raw-image>.img
</span></code></pre>
<blockquote>
<p>Mapped file raw-image.img as /dev/loop0.</p>
</blockquote>
<p>Image file location is specified with an <strong>option</strong></p>
<ul>
<li><code>loop-setup</code> is an argument, because udisksctl has other competences as
well</li>
<li><code>--read-only</code> or <code>-r</code> prevents accidental damage, can be omitted</li>
<li><code>--file</code> or <code>-f</code> takes a raw filesystem image file location</li>
</ul>
<p>The location of the first unused partitioned loop device the image was
associated with is printed <strong>automatically</strong></p>
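<p>Note that udisksctl prints a full sentence rather than a bare device
path, so a script has to extract the path itself. A minimal bash sketch,
using the sample output line from above:</p>

```shell
# The sample line stands in for: out=$(udisksctl loop-setup -rf raw-image.img)
out='Mapped file raw-image.img as /dev/loop0.'
dev=${out##* }   # keep only the last word: "/dev/loop0."
dev=${dev%.}     # strip the trailing period
echo "$dev"      # /dev/loop0
```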
<h3 id="losetup">losetup</h3>
<p>Requires root permissions</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> losetup</span><span style="color:#bf616a;"> --show --partscan --read-only --find </span><span><raw-image>.img
</span><span style="color:#bf616a;">sudo</span><span> losetup</span><span style="color:#bf616a;"> --show -Prf </span><span><raw-image>.img
</span></code></pre>
<blockquote>
<p>/dev/loop0</p>
</blockquote>
<p>Raw image file is specified with an <strong>argument</strong></p>
<ul>
<li><code>--show</code> print the loop device, used with <code>-f</code></li>
<li><code>--partscan</code> or <code>-P</code> creates a partitioned loop device</li>
<li><code>--read-only</code> or <code>-r</code> prevents accidental damage, can be omitted</li>
<li><code>--find</code> or <code>-f</code> find first unused device</li>
</ul>
<p>Control is more granular as most actions are not invoked automatically, but
rather <strong>explicitly</strong></p>
<h2 id="probing-and-listing">Probing and listing</h2>
<h3 id="udisksctl-1">udisksctl</h3>
<p>Provide information about a specific loop device</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">udisksctl</span><span> info</span><span style="color:#bf616a;"> -b</span><span> /dev/loop0
</span></code></pre>
<ul>
<li><code>--block-device</code> or <code>-b</code> specifies the loop device to probe</li>
</ul>
<p>An uninspiring output snippet, truncated</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>/org/freedesktop/UDisks2/block_devices/loop0:
</span><span> org.freedesktop.UDisks2.Block:
</span><span> ...
</span><span> PreferredDevice: /dev/loop0
</span><span> ReadOnly: false
</span><span> Size: 1845493760
</span><span> Symlinks:
</span><span> UserspaceMountOptions:
</span><span> org.freedesktop.UDisks2.Loop:
</span><span> Autoclear: false
</span><span> BackingFile: raw-image.img
</span><span> SetupByUID: 0
</span><span> org.freedesktop.UDisks2.PartitionTable:
</span><span> Partitions: /org/freedesktop/UDisks2/block_devices/loop0p1
</span><span> /org/freedesktop/UDisks2/block_devices/loop0p2
</span><span> Type: dos
</span></code></pre>
<p><strong>Caution:</strong> may make you dizzy</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">udisksctl</span><span> dump
</span></code></pre>
<h3 id="losetup-1">losetup</h3>
<p>Provide information about a specific loop device</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">losetup -l</span><span> /dev/loop0
</span></code></pre>
<ul>
<li><code>--list</code> or <code>-l</code> lists <strong>info</strong> about a <strong>specified</strong> loop device</li>
</ul>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>NAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE DIO LOG-SEC
</span><span>/dev/loop0 0 0 0 1 image-A.img 0 512
</span></code></pre>
<p>Provide information about all used loop devices</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">losetup -l
</span></code></pre>
<ul>
<li><code>--list</code> or <code>-l</code> lists <strong>info</strong> about <strong>all used</strong> loop devices without
specifying one</li>
</ul>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>NAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE DIO LOG-SEC
</span><span>/dev/loop0 0 0 0 1 image-A.img 0 512
</span><span>/dev/loop1 0 0 1 0 image-B.img 0 512
</span></code></pre>
<ul>
<li><code>--json</code> or <code>-J</code> formats the output as JSON, used with <code>-l</code></li>
</ul>
<pre data-lang="json" style="background-color:#2b303b;color:#c0c5ce;" class="language-json "><code class="language-json" data-lang="json"><span>{
</span><span> "</span><span style="color:#a3be8c;">loopdevices</span><span>": [
</span><span> {
</span><span> "</span><span style="color:#a3be8c;">name</span><span>": "</span><span style="color:#a3be8c;">/dev/loop0</span><span>",
</span><span> "</span><span style="color:#a3be8c;">sizelimit</span><span>": </span><span style="color:#d08770;">0</span><span>,
</span><span> "</span><span style="color:#a3be8c;">offset</span><span>": </span><span style="color:#d08770;">0</span><span>,
</span><span> "</span><span style="color:#a3be8c;">autoclear</span><span>": </span><span style="color:#d08770;">false</span><span>,
</span><span> "</span><span style="color:#a3be8c;">ro</span><span>": </span><span style="color:#d08770;">false</span><span>,
</span><span> "</span><span style="color:#a3be8c;">back-file</span><span>": "</span><span style="color:#a3be8c;">raw-image.img</span><span>",
</span><span> "</span><span style="color:#a3be8c;">dio</span><span>": </span><span style="color:#d08770;">false</span><span>,
</span><span> "</span><span style="color:#a3be8c;">log-sec</span><span>": </span><span style="color:#d08770;">512
</span><span> }
</span><span> ]
</span><span>}
</span></code></pre>
<ul>
<li><code>--raw</code> outputs the data without extra whitespaces</li>
</ul>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>NAME SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE DIO LOG-SEC
</span><span>/dev/loop0 0 0 1 0 image-A.img 0 512
</span><span>/dev/loop1 0 1 0 0 image-B.img 0 512
</span></code></pre>
<ul>
<li><code>--all</code> or <code>-a</code> lists all used loop devices (but why?)</li>
</ul>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>/dev/loop0: []: (image-A.img)
</span><span>/dev/loop1: []: (image-B.img)
</span></code></pre>
<h2 id="removal">Removal</h2>
<p>Sets the <strong>autoclear</strong> flag - the device will be released instantly
<a href="https://bugzilla.redhat.com/show_bug.cgi?id=1076166">when not needed</a>, or
e.g.
<a href="https://bugzilla.redhat.com/show_bug.cgi?id=1584079">after unmounting</a></p>
<h3 id="udisksctl-2">udisksctl</h3>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">udisksctl</span><span> loop-delete</span><span style="color:#bf616a;"> -b</span><span> /dev/loop0
</span></code></pre>
<p>The device is specified with an <strong>option</strong></p>
<ul>
<li><code>loop-delete</code> is an argument for a sub-command</li>
<li><code>--block-device</code> or <code>-b</code> specifies the loop device to flag</li>
</ul>
<h3 id="losetup-2">losetup</h3>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> losetup</span><span style="color:#bf616a;"> -d</span><span> /dev/loop0
</span></code></pre>
<ul>
<li><code>--detach</code> or <code>-d</code> specifies the loop device to detach</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> losetup</span><span style="color:#bf616a;"> -D
</span></code></pre>
<ul>
<li><code>--detach-all</code> or <code>-D</code> detaches all used devices</li>
</ul>
<h2 id="conclusion">Conclusion</h2>
<p>After trying both udisksctl and losetup for handling loop devices to
manipulate raw filesystem images, I personally like losetup much more.
It is a tool specialised exactly for this job and mostly nothing else,
following the UNIX philosophy. Udisksctl was designed for a somewhat
different role and covers broader use cases, one of which happens to be
handling loop devices.</p>
<p>I found only a single legitimate reason to use udisksctl for this
task: it was already installed on my system, because e.g. mintstick and
gnome-control-center depend on it.</p>
<p>If I wanted to keep my system lean, I would try to use packages already
installed to do the job, instead of installing new ones. But hey, the
packages that have brought udisks2 in (depend on it) are already rather
large UI tools, so speaking about efficiency here is not really cutting it.</p>
<p>Both mentioned tools can do the job pretty well and they differ only
slightly in syntax. Where they differ most is in how they output the
data. Losetup provides multiple output formatting options, even including
JSON, which feels much more modern.</p>
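<p>As a sketch of what the JSON output enables (assuming python3 is
available; jq would work equally well), the <code>losetup -lJ</code>
output can be consumed without any column scraping:</p>

```shell
# The sample string stands in for: json=$(sudo losetup -lJ)
json='{"loopdevices": [{"name": "/dev/loop0", "back-file": "raw-image.img"}]}'
echo "$json" | python3 -c 'import json, sys; [print(d["name"], d["back-file"]) for d in json.load(sys.stdin)["loopdevices"]]'
# /dev/loop0 raw-image.img
```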
<p>Building a Dockerfile around losetup's raw output feels faster when I
do not need to spend too much time parsing complex text outputs and can
rather spend the time building the actual thing. Although, I could not
think of any example using a web technology in conjunction with losetup
to utilize the JSON output formatting.</p>
<p>Please let me know about any real use cases for the JSON output here, I
am genuinely interested. Maybe it could be a glimpse into another
development perspective.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://en.wikipedia.org/wiki/Loop_device">https://en.wikipedia.org/wiki/Loop_device</a></li>
<li><a href="https://wiki.archlinux.org/index.php/udisks#Mount_loop_devices">https://wiki.archlinux.org/index.php/udisks#Mount_loop_devices</a></li>
<li><a href="https://github.com/karelzak/util-linux">https://github.com/karelzak/util-linux</a></li>
<li><a href="https://github.com/storaged-project/udisks">https://github.com/storaged-project/udisks</a></li>
<li><a href="https://stackoverflow.com/questions/5881134/cannot-delete-device-dev-loop0">https://stackoverflow.com/questions/5881134/cannot-delete-device-dev-loop0</a></li>
<li><a href="https://unix.stackexchange.com/questions/520286/why-is-udisksctl-loop-setup-so-slow">https://unix.stackexchange.com/questions/520286/why-is-udisksctl-loop-setup-so-slow</a></li>
</ul>
Cross package Node app for ARM using QEMU and Docker2020-10-26T00:00:00+00:002020-12-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/cross-package-node-app-arm-qemu-docker/<ul>
<li>Download <code>2020-08-20-raspios-buster-armhf-lite.zip</code> from the
<a href="http://downloads.raspberrypi.org/raspios_lite_armhf/images/raspios_lite_armhf-2020-08-24/">official site</a></li>
<li>Install required tools</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> unzip util-linux docker
</span></code></pre>
<ul>
<li>Start Docker service</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> systemctl start docker.service
</span></code></pre>
<p>Add yourself into the <code>docker</code> group, otherwise root permissions are needed for every invocation</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> usermod</span><span style="color:#bf616a;"> -aG</span><span> docker $(</span><span style="color:#bf616a;">whoami</span><span>)
</span><span style="color:#bf616a;">su</span><span> - $(</span><span style="color:#bf616a;">whoami</span><span>)
</span></code></pre>
<h2 id="qemu-setup">QEMU setup</h2>
<ul>
<li>Allow your computer to
<a href="https://wiki.archlinux.org/index.php/QEMU#Chrooting_into_arm/arm64_environment_from_x86_64">emulate ARM binaries permanently</a>:</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">yay -S</span><span> binfmt-qemu-static qemu-user-static
</span></code></pre>
<p>Verify emulation setup</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">grep</span><span> enabled /proc/sys/fs/binfmt_misc/qemu-arm
</span><span style="color:#65737e;"># enabled
</span></code></pre>
<p>Alternatively,
<a href="https://www.docker.com/blog/getting-started-with-docker-for-arm-on-linux/">a temporary solution</a>
for most distributions:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">docker</span><span> run</span><span style="color:#bf616a;"> --rm --privileged</span><span> docker/binfmt:820fdd95a9972a5308930a2bdfb8573dd4447ad3
</span></code></pre>
<p>Run privileged containers
<a href="https://www.trendmicro.com/en_us/research/19/l/why-running-a-privileged-container-in-docker-is-a-bad-idea.html">with <strong>caution</strong></a>,
at least
<a href="https://hub.docker.com/layers/docker/binfmt/820fdd95a9972a5308930a2bdfb8573dd4447ad3/images/sha256-4ed4ace8a54292345304ea270979ee6511e2465722ceeda373b17c4df1ebe658?context=explore">peek into the container's layers</a>
before running</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">yay -S</span><span> dive
</span><span style="color:#bf616a;">dive</span><span> docker/binfmt:820fdd95a9972a5308930a2bdfb8573dd4447ad3
</span></code></pre>
<p>The tool displays details about the files in each layer, shown here in
a narrow layout</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>─────── ┃ ● Current Layer Contents ┣━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
</span><span>Layer Permission UID:GID Size Filetree
</span><span> 0 drwxr-xr-x 0:0 1.1 kB ├── etc
</span><span> 1 drwxr-xr-x 0:0 1.1 kB │ └── binfmt.d
</span><span> 2 -rw-rw-r-- 0:0 1.1 kB │ └── 00_linuxkit.conf
</span><span> drwxr-xr-x 0:0 17 MB └── usr
</span><span> drwxr-xr-x 0:0 17 MB └── bin
</span><span> -rwxr-xr-x 0:0 2.2 MB ├── binfmt
</span><span> -rwxr-xr-x 0:0 4.1 MB ├── qemu-aarch64
</span><span> -rwxr-xr-x 0:0 3.6 MB ├── qemu-arm
</span><span> -rwxr-xr-x 0:0 3.9 MB ├── qemu-ppc64le
</span><span> -rwxr-xr-x 0:0 3.2 MB └── qemu-s390x
</span></code></pre>
<h2 id="mount">Mount</h2>
<ul>
<li>Extract the downloaded image from the archive</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">unzip </span><span><raspios-image>.zip
</span></code></pre>
<ul>
<li>Associate the image with a loop device</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> losetup</span><span style="color:#bf616a;"> --read-only --show -fP </span><span><raspios-image>.img
</span><span style="color:#65737e;"># /dev/loop0
</span></code></pre>
<p><a href="https://raspberrypi.stackexchange.com/a/109524/59436">Inspect image partitions</a>
if needed</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">lsblk -o</span><span> name,label /dev/loop0
</span></code></pre>
<p>Look for the rootfs label</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>NAME LABEL
</span><span>loop0
</span><span>├─loop0p1 boot
</span><span>└─loop0p2 rootfs
</span></code></pre>
<ul>
<li>Mount the root filesystem partition, if not done automatically by your
distribution</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> mkdir /tmp/raspios
</span><span style="color:#bf616a;">sudo</span><span> mount</span><span style="color:#bf616a;"> -o</span><span> ro /dev/loop0p2 /tmp/raspios
</span></code></pre>
<h2 id="import">Import</h2>
<ul>
<li>Create a Docker image from Raspios root filesystem</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> tar c</span><span style="color:#bf616a;"> -C</span><span> /tmp/raspios . | </span><span style="color:#bf616a;">docker</span><span> image import - raspios-lite-armhf:buster
</span></code></pre>
<ul>
<li>Create and run a Docker container from the image</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">docker</span><span> run</span><span style="color:#bf616a;"> -it --name</span><span> raspios_bare raspios-lite-armhf:buster /bin/bash
</span></code></pre>
<p>Enter the container again in case of an accidental exit</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">docker</span><span> start</span><span style="color:#bf616a;"> -ai</span><span> raspios_bare
</span></code></pre>
<ul>
<li>Clean</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> umount /tmp/raspios
</span><span style="color:#bf616a;">sudo</span><span> losetup</span><span style="color:#bf616a;"> -d</span><span> /dev/loop0
</span></code></pre>
<h2 id="container-image-manipulation">Container image manipulation</h2>
<ul>
<li>When inside,
<a href="https://computingforgeeks.com/install-node-js-14-on-ubuntu-debian-linux-mint/">install Node 14</a>
and pkg from vercel globally</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">wget -qO-</span><span> https://deb.nodesource.com/setup_14.x | </span><span style="color:#bf616a;">bash</span><span> -
</span><span style="color:#bf616a;">apt</span><span> install nodejs
</span><span style="color:#bf616a;">npm</span><span> i</span><span style="color:#bf616a;"> -g</span><span> pkg
</span></code></pre>
<ul>
<li>Fetch a pre-built binary of Node for armhf from a
<a href="https://github.com/yao-pkg/pkg-binaries/releases/tag/v1.0.0">repository</a></li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">wget</span><span> https://github.com/yao-pkg/pkg-binaries/releases/download/v1.0.0/fetched-v14.4.0-linux-armv6</span><span style="color:#bf616a;"> -P</span><span> /root/.pkg-cache/v2.6/
</span></code></pre>
<ul>
<li>Exit the container by pressing <code>Ctrl-D</code> or typing the <code>exit</code> command</li>
<li>Commit the changes to the image
<a href="https://phoenixnap.com/kb/how-to-commit-changes-to-docker-image">for a reuse</a></li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">docker</span><span> commit raspios_bare raspios_node_pkg
</span></code></pre>
<p>You can safely remove the bare container now</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">docker</span><span> rm raspios_bare
</span></code></pre>
<h2 id="packaging">Packaging</h2>
<ul>
<li>Create a sample script for packaging</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#96b5b4;">echo </span><span>'</span><span style="color:#a3be8c;">console.log("Hello World")</span><span>' > index.js
</span></code></pre>
<ul>
<li>Create a temporary container, mount a current folder as <code>/build</code> in it
and package it for linux</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">docker</span><span> run</span><span style="color:#bf616a;"> --rm -v </span><span>$</span><span style="color:#bf616a;">PWD</span><span>:/build raspios_node_pkg pkg</span><span style="color:#bf616a;"> -t</span><span> linux</span><span style="color:#bf616a;"> --out-dir</span><span> /build /build/index.js
</span></code></pre>
<ul>
<li>An armhf Node executable named <code>index</code> is created in a current directory</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">file</span><span> index
</span><span style="color:#65737e;"># index: ELF 32-bit LSB pie executable, ARM, EABI5 version 1 (GNU/Linux), ...
</span></code></pre>
<p>Make an executable script to streamline the process and run it e.g. as
<code>armpkg index.js</code></p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#65737e;">#!/bin/bash
</span><span style="color:#bf616a;">docker</span><span> run</span><span style="color:#bf616a;"> --rm -v </span><span>$</span><span style="color:#bf616a;">PWD</span><span>:/build raspios_node_pkg pkg</span><span style="color:#bf616a;"> -t</span><span> linux</span><span style="color:#bf616a;"> --out-dir</span><span> /build "</span><span style="color:#a3be8c;">/build/</span><span>$</span><span style="color:#bf616a;">1</span><span>"
</span></code></pre>
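<p>To round the wrapper off (a sketch; <code>~/.local/bin</code> being on
your <code>PATH</code> is an assumption, adjust to your setup), write it
to a file and install it:</p>

```shell
# Recreate the wrapper script and install it as an executable on PATH
# (-D creates missing parent directories, -m755 sets the executable bits)
cat > armpkg <<'EOF'
#!/bin/bash
docker run --rm -v $PWD:/build raspios_node_pkg pkg -t linux --out-dir /build "/build/$1"
EOF
install -Dm755 armpkg "$HOME/.local/bin/armpkg"
```

<p>After that, <code>armpkg index.js</code> works from any project
directory.</p>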
<p>Done!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://github.com/vercel/pkg-fetch/releases">https://github.com/vercel/pkg-fetch/releases</a></li>
<li><a href="https://github.com/lukechilds/dockerpi">https://github.com/lukechilds/dockerpi</a></li>
<li><a href="http://kmdouglass.github.io/posts/how-i-built-a-cross-compilation-workflow-for-the-raspberry-pi/">http://kmdouglass.github.io/posts/how-i-built-a-cross-compilation-workflow-for-the-raspberry-pi/</a></li>
<li><a href="http://modernhackers.com/virtualize-raspberry-pi-3-s-to-run-docker-swarm-cluster-on-it/">http://modernhackers.com/virtualize-raspberry-pi-3-s-to-run-docker-swarm-cluster-on-it/</a></li>
<li><a href="https://docs.docker.com/storage/volumes/#start-a-container-with-a-volume">https://docs.docker.com/storage/volumes/#start-a-container-with-a-volume</a></li>
<li><a href="https://hub.docker.com/r/docker/binfmt/tags">https://hub.docker.com/r/docker/binfmt/tags</a></li>
</ul>
How to emulate Raspios natively in QEMU2020-10-26T00:00:00+00:002020-12-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/how-emulate-raspios-natively-qemu/<ul>
<li>Download <code>2020-08-20-raspios-buster-armhf-lite.zip</code> from the
<a href="http://downloads.raspberrypi.org/raspios_lite_armhf/images/raspios_lite_armhf-2020-08-24/">official site</a></li>
<li>Install required tools</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> unzip util-linux qemu qemu-arch-extra
</span></code></pre>
<p>The minimum required QEMU version is 5.1</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">qemu-system-aarch64 --version
</span><span style="color:#65737e;"># QEMU emulator version 5.1.0
</span></code></pre>
<p>Ethernet is
<a href="https://raspberrypi.stackexchange.com/q/45130/59436">shared with USB controller on Raspberry Pi 3</a>,
but the <a href="https://wiki.qemu.org/ChangeLog/5.1#Arm">changelog</a> for QEMU 5.1
states:</p>
<blockquote>
<p>The Raspberry Pi boards now support the USB controller.</p>
</blockquote>
<ul>
<li><a href="https://raspberrypi.stackexchange.com/a/53993/59436">Associate the image with a loop device</a></li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">unzip</span><span> 2020-08-20-raspios-buster-armhf-lite.zip
</span><span style="color:#bf616a;">sudo</span><span> losetup</span><span style="color:#bf616a;"> --show -fP</span><span> 2020-08-20-raspios-buster-armhf-lite.img
</span><span style="color:#65737e;"># i.e. /dev/loop0
</span></code></pre>
<ul>
<li><a href="https://wiki.archlinux.org/index.php/QEMU#With_loop_module_autodetecting_partitions">Copy required files over</a></li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> mkdir /mnt/raspios
</span><span style="color:#bf616a;">sudo</span><span> mount /dev/loop0p1 /mnt/raspios
</span><span style="color:#bf616a;">cp</span><span> /mnt/raspios/kernel8.img /mnt/raspios/bcm2710-rpi-3-b.dtb .
</span><span style="color:#bf616a;">sudo</span><span> umount /mnt/raspios
</span><span style="color:#bf616a;">sudo</span><span> losetup</span><span style="color:#bf616a;"> -d</span><span> /dev/loop0
</span></code></pre>
<h2 id="run-with-qemu">Run with QEMU</h2>
<ul>
<li>Resize the raw image to a power of two (2, 4, 8, 16 ... GB), as the emulated SD card requires it</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">qemu-img</span><span> resize 2020-08-20-raspios-buster-armhf-lite.img 4GB
</span></code></pre>
<ul>
<li>Run the image</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> qemu-system-aarch64 \
</span><span style="color:#bf616a;"> -m</span><span> 1024 \
</span><span style="color:#bf616a;"> -M</span><span> raspi3 \
</span><span style="color:#bf616a;"> -kernel</span><span> kernel8.img \
</span><span style="color:#bf616a;"> -dtb</span><span> bcm2710-rpi-3-b.dtb \
</span><span style="color:#bf616a;"> -sd</span><span> 2020-08-20-raspios-buster-armhf-lite.img \
</span><span style="color:#bf616a;"> -append </span><span>"</span><span style="color:#a3be8c;">console=ttyAMA0 root=/dev/mmcblk0p2 rw rootwait rootfstype=ext4</span><span>" \
</span><span style="color:#bf616a;"> -nographic </span><span>\
</span><span style="color:#bf616a;"> -device</span><span> usb-net,netdev=net0 \
</span><span style="color:#bf616a;"> -netdev</span><span> user,id=net0,hostfwd=tcp::2222-:22
</span></code></pre>
<p>The guest is
<a href="https://stackoverflow.com/a/64420363/1972509">ARM64 with networking available</a></p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">uname -m
</span><span style="color:#65737e;"># aarch64
</span><span>
</span><span style="color:#bf616a;">lsusb
</span><span style="color:#65737e;"># Bus 001 Device 003: ID 0525:a4a2 Netchip Technology, Inc. Linux-USB Ethernet/RNDIS Gadget
</span><span>
</span><span style="color:#bf616a;">ip</span><span> addr
</span><span style="color:#65737e;">#2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
</span><span style="color:#65737e;"># link/ether 40:54:00:12:34:57 brd ff:ff:ff:ff:ff:ff
</span><span style="color:#65737e;"># inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic noprefixroute eth0
</span></code></pre>
<p>We are running in arm64 mode</p>
<ul>
<li>Enable the ssh daemon</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> systemctl enable ssh</span><span style="color:#bf616a;"> --now
</span></code></pre>
<h2 id="interact-with-the-image">Interact with the image</h2>
<p>Copy the ssh credentials over; the password is <code>raspberry</code></p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">ssh-copy-id -p</span><span> 2222 pi@localhost
</span></code></pre>
<ul>
<li><a href="https://github.com/wimvanderbauwhede/limited-systems/wiki/Debian-%22buster%22-for-Raspberry-Pi-3-on-QEMU">Login to the image</a></li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">ssh -p</span><span> 2222 pi@localhost
</span></code></pre>
<p>Done!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://askubuntu.com/questions/69363/mount-single-partition-from-image-of-entire-disk-device/496576#496576">https://askubuntu.com/questions/69363/mount-single-partition-from-image-of-entire-disk-device/496576#496576</a></li>
<li><a href="https://raspberrypi.stackexchange.com/questions/100384/running-raspbian-buster-with-qemu">https://raspberrypi.stackexchange.com/questions/100384/running-raspbian-buster-with-qemu</a></li>
<li><a href="https://github.com/raspberrypi/firmware">https://github.com/raspberrypi/firmware</a></li>
<li><a href="https://www.raspberrypi.org/forums/viewtopic.php?t=195565&start=50">https://www.raspberrypi.org/forums/viewtopic.php?t=195565&start=50</a></li>
<li><a href="https://bugs.launchpad.net/qemu/+bug/1772165">https://bugs.launchpad.net/qemu/+bug/1772165</a></li>
<li><a href="https://lore.kernel.org/qemu-devel/20200428022232.18875-1-pauldzim@gmail.com/">https://lore.kernel.org/qemu-devel/20200428022232.18875-1-pauldzim@gmail.com/</a></li>
<li><a href="https://metebalci.com/blog/bare-metal-rpi3-network-boot/">https://metebalci.com/blog/bare-metal-rpi3-network-boot/</a></li>
</ul>
How to run latest Node on an emulated RevPi2020-10-24T00:00:00+00:002020-12-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/how-run-latest-node-emulated-revpi/<ul>
<li>Download the latest RevPi Stretch image, based on Raspbian Buster
<a href="https://revolution.kunbus.de/forum/viewtopic.php?f=17&t=2155">kunbus_release</a>
from the <a href="https://revolution.kunbus.de/shop/en/stretch">official site</a></li>
<li>Download <code>kernel-qemu-4.19.50-buster</code> and <code>versatile-pb-buster.dtb</code> from
<a href="https://github.com/dhruvvyas90/qemu-rpi-kernel">dhruvvyas90/qemu-rpi-kernel</a></li>
<li>Install required tools</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> qemu unzip qemu-arch-extra
</span></code></pre>
<ul>
<li>Extract the image</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">unzip </span><span><revpi-image>.zip
</span></code></pre>
<ul>
<li>Convert .img to .qcow2</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">qemu-img</span><span> convert</span><span style="color:#bf616a;"> -f</span><span> raw</span><span style="color:#bf616a;"> -O</span><span> qcow2 <revpi-image>.img <revpi-image>.qcow2
</span></code></pre>
<p>Adjust the size of the image as needed</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">qemu-img</span><span> resize <revpi-image>.qcow2 4GB
</span></code></pre>
<ul>
<li>Boot it up with QEMU</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> qemu-system-arm</span><span style="color:#bf616a;"> -kernel</span><span> kernel-qemu-4.19.50-buster \
</span><span style="color:#bf616a;"> -dtb</span><span> versatile-pb-buster.dtb \
</span><span style="color:#bf616a;"> -m</span><span> 256</span><span style="color:#bf616a;"> -cpu</span><span> arm1176 \
</span><span style="color:#bf616a;"> -machine</span><span> versatilepb \
</span><span style="color:#bf616a;"> -hda</span><span> 2020-06-25-revpi-stretch.qcow2 \
</span><span style="color:#bf616a;"> -append </span><span>"</span><span style="color:#a3be8c;">root=/dev/sda2</span><span>"
</span></code></pre>
<p>As a side note,
<a href="https://wiki.qemu.org/Documentation/Platforms/ARM#Generic_ARM_system_emulation_with_the_virt_machine">versatilepb machine only allows for 256 MB of RAM</a>.</p>
<ul>
<li>Remove the
<a href="https://revolution.kunbus.com/forum/viewtopic.php?f=6&t=2044">prompt for inserting serial number</a>:</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">touch</span><span> /home/pi/.revpi-factory-reset
</span></code></pre>
<p>You can learn more about the process by observing the <code>piserial</code> package</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">dpkg -l</span><span> piserial
</span></code></pre>
<ul>
<li>Change the hostname</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> hostnamectl set-hostname revpi
</span></code></pre>
<ul>
<li>Edit <code>/etc/hosts</code> manually and add the hostname there</li>
</ul>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>127.0.0.1 localhost
</span><span>::1 localhost ip6-localhost ip6-loopback
</span><span>ff02::1 ip6-allnodes
</span><span>ff02::2 ip6-allrouters
</span><span>
</span><span># set local hostname here
</span><span>127.0.1.1 revpi
</span></code></pre>
<h2 id="interacting-with-the-guest">Interacting with the guest</h2>
<ul>
<li>Back on the host, install virt-manager</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> virt-manager ebtables dnsmasq bridge-utils openbsd-netcat
</span></code></pre>
<ul>
<li>Add yourself to the necessary groups</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> usermod</span><span style="color:#bf616a;"> -aG</span><span> kvm,libvirt username
</span></code></pre>
<ul>
<li>Edit
<a href="https://superuser.com/questions/298426/kvm-image-failed-to-start-with-virsh-permission-denied">kvm permissions</a>
in <code>/etc/libvirt/qemu.conf</code>:</li>
</ul>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>user = "username"
</span><span>group = "kvm"
</span></code></pre>
<ul>
<li>Define a
<a href="https://wiki.libvirt.org/page/Networking#NAT_forwarding_.28aka_.22virtual_networks.22.29">default network</a>:</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> virsh net-define /etc/libvirt/qemu/networks/default.xml
</span><span style="color:#bf616a;">sudo</span><span> virsh net-autostart default
</span><span style="color:#bf616a;">sudo</span><span> virsh net-start default
</span></code></pre>
<p>Check that the network virbr0 is present</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">brctl</span><span> show </span><span style="color:#65737e;"># virbr0
</span></code></pre>
<ul>
<li>Log out and log back in, then start the daemon</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> systemctl start libvirtd.service
</span></code></pre>
<ul>
<li>Create a domain file <code>/tmp/revpi.xml</code>, courtesy of
<a href="https://kimtinh.gitlab.io/post/tech/2019_05_20_qemu_rpi/#gsc.tab=0">kim tinh</a></li>
</ul>
<pre data-lang="xml" style="background-color:#2b303b;color:#c0c5ce;" class="language-xml "><code class="language-xml" data-lang="xml"><span><</span><span style="color:#bf616a;">domain </span><span style="color:#d08770;">type</span><span>='</span><span style="color:#a3be8c;">qemu</span><span>'>
</span><span> <</span><span style="color:#bf616a;">name</span><span>>rpi</</span><span style="color:#bf616a;">name</span><span>>
</span><span> <</span><span style="color:#bf616a;">uuid</span><span>>da70087f-7142-42dc-9975-00b7fa5c8435</</span><span style="color:#bf616a;">uuid</span><span>>
</span><span> <</span><span style="color:#bf616a;">memory </span><span style="color:#d08770;">unit</span><span>='</span><span style="color:#a3be8c;">KiB</span><span>'>262144</</span><span style="color:#bf616a;">memory</span><span>>
</span><span> <</span><span style="color:#bf616a;">currentMemory </span><span style="color:#d08770;">unit</span><span>='</span><span style="color:#a3be8c;">KiB</span><span>'>262144</</span><span style="color:#bf616a;">currentMemory</span><span>>
</span><span> <</span><span style="color:#bf616a;">os</span><span>>
</span><span> <</span><span style="color:#bf616a;">type </span><span style="color:#d08770;">arch</span><span>='</span><span style="color:#a3be8c;">armv6l</span><span>' </span><span style="color:#d08770;">machine</span><span>='</span><span style="color:#a3be8c;">versatilepb</span><span>'>hvm</</span><span style="color:#bf616a;">type</span><span>>
</span><span> <</span><span style="color:#bf616a;">kernel</span><span>>/path/to/kernel-qemu-4.19.50-buster</</span><span style="color:#bf616a;">kernel</span><span>> </span><span style="color:#65737e;"><!--update path here-->
</span><span> <</span><span style="color:#bf616a;">cmdline</span><span>>root=/dev/sda2</</span><span style="color:#bf616a;">cmdline</span><span>>
</span><span> <</span><span style="color:#bf616a;">dtb</span><span>>/path/to/versatile-pb-buster.dtb</</span><span style="color:#bf616a;">dtb</span><span>> </span><span style="color:#65737e;"><!--update path here-->
</span><span> <</span><span style="color:#bf616a;">boot </span><span style="color:#d08770;">dev</span><span>='</span><span style="color:#a3be8c;">hd</span><span>'/>
</span><span> </</span><span style="color:#bf616a;">os</span><span>>
</span><span> <</span><span style="color:#bf616a;">cpu </span><span style="color:#d08770;">mode</span><span>='</span><span style="color:#a3be8c;">custom</span><span>' </span><span style="color:#d08770;">match</span><span>='</span><span style="color:#a3be8c;">exact</span><span>' </span><span style="color:#d08770;">check</span><span>='</span><span style="color:#a3be8c;">none</span><span>'>
</span><span> <</span><span style="color:#bf616a;">model </span><span style="color:#d08770;">fallback</span><span>='</span><span style="color:#a3be8c;">forbid</span><span>'>arm1176</</span><span style="color:#bf616a;">model</span><span>>
</span><span> </</span><span style="color:#bf616a;">cpu</span><span>>
</span><span> <</span><span style="color:#bf616a;">devices</span><span>>
</span><span> <</span><span style="color:#bf616a;">emulator</span><span>>/usr/bin/qemu-system-arm</</span><span style="color:#bf616a;">emulator</span><span>>
</span><span> <</span><span style="color:#bf616a;">disk </span><span style="color:#d08770;">type</span><span>='</span><span style="color:#a3be8c;">file</span><span>' </span><span style="color:#d08770;">device</span><span>='</span><span style="color:#a3be8c;">disk</span><span>'>
</span><span> <</span><span style="color:#bf616a;">driver </span><span style="color:#d08770;">name</span><span>='</span><span style="color:#a3be8c;">qemu</span><span>' </span><span style="color:#d08770;">type</span><span>='</span><span style="color:#a3be8c;">qcow2</span><span>'/>
</span><span> <</span><span style="color:#bf616a;">source </span><span style="color:#d08770;">file</span><span>='</span><span style="color:#a3be8c;">/path/to/.qcow2</span><span>'/> </span><span style="color:#65737e;"><!--update path here-->
</span><span> <</span><span style="color:#bf616a;">backingStore</span><span>/>
</span><span> <</span><span style="color:#bf616a;">target </span><span style="color:#d08770;">dev</span><span>='</span><span style="color:#a3be8c;">sda</span><span>' </span><span style="color:#d08770;">bus</span><span>='</span><span style="color:#a3be8c;">scsi</span><span>'/>
</span><span> <</span><span style="color:#bf616a;">address </span><span style="color:#d08770;">type</span><span>='</span><span style="color:#a3be8c;">drive</span><span>' </span><span style="color:#d08770;">controller</span><span>='</span><span style="color:#a3be8c;">0</span><span>' </span><span style="color:#d08770;">bus</span><span>='</span><span style="color:#a3be8c;">0</span><span>' </span><span style="color:#d08770;">target</span><span>='</span><span style="color:#a3be8c;">0</span><span>' </span><span style="color:#d08770;">unit</span><span>='</span><span style="color:#a3be8c;">0</span><span>'/>
</span><span> </</span><span style="color:#bf616a;">disk</span><span>>
</span><span> <</span><span style="color:#bf616a;">controller </span><span style="color:#d08770;">type</span><span>='</span><span style="color:#a3be8c;">pci</span><span>' </span><span style="color:#d08770;">index</span><span>='</span><span style="color:#a3be8c;">0</span><span>' </span><span style="color:#d08770;">model</span><span>='</span><span style="color:#a3be8c;">pci-root</span><span>'/>
</span><span> <</span><span style="color:#bf616a;">interface </span><span style="color:#d08770;">type</span><span>='</span><span style="color:#a3be8c;">bridge</span><span>'>
</span><span> <</span><span style="color:#bf616a;">mac </span><span style="color:#d08770;">address</span><span>='</span><span style="color:#a3be8c;">52:54:00:ed:eb:c7</span><span>'/>
</span><span> <</span><span style="color:#bf616a;">source </span><span style="color:#d08770;">bridge</span><span>='</span><span style="color:#a3be8c;">virbr0</span><span>'/>
</span><span> <</span><span style="color:#bf616a;">model </span><span style="color:#d08770;">type</span><span>='</span><span style="color:#a3be8c;">virtio</span><span>'/>
</span><span> <</span><span style="color:#bf616a;">address </span><span style="color:#d08770;">type</span><span>='</span><span style="color:#a3be8c;">pci</span><span>' </span><span style="color:#d08770;">domain</span><span>='</span><span style="color:#a3be8c;">0x0000</span><span>' </span><span style="color:#d08770;">bus</span><span>='</span><span style="color:#a3be8c;">0x00</span><span>' </span><span style="color:#d08770;">slot</span><span>='</span><span style="color:#a3be8c;">0x06</span><span>' </span><span style="color:#d08770;">function</span><span>='</span><span style="color:#a3be8c;">0x0</span><span>'/>
</span><span> </</span><span style="color:#bf616a;">interface</span><span>>
</span><span> <</span><span style="color:#bf616a;">graphics </span><span style="color:#d08770;">type</span><span>='</span><span style="color:#a3be8c;">spice</span><span>' </span><span style="color:#d08770;">autoport</span><span>='</span><span style="color:#a3be8c;">yes</span><span>'>
</span><span> <</span><span style="color:#bf616a;">listen </span><span style="color:#d08770;">type</span><span>='</span><span style="color:#a3be8c;">address</span><span>'/>
</span><span> <</span><span style="color:#bf616a;">image </span><span style="color:#d08770;">compression</span><span>='</span><span style="color:#a3be8c;">off</span><span>'/>
</span><span> <</span><span style="color:#bf616a;">gl </span><span style="color:#d08770;">enable</span><span>='</span><span style="color:#a3be8c;">no</span><span>' </span><span style="color:#d08770;">rendernode</span><span>='</span><span style="color:#a3be8c;">/dev/dri/by-path/pci-0000:00:02.0-render</span><span>'/>
</span><span> </</span><span style="color:#bf616a;">graphics</span><span>>
</span><span> <</span><span style="color:#bf616a;">video</span><span>>
</span><span> <</span><span style="color:#bf616a;">model </span><span style="color:#d08770;">type</span><span>='</span><span style="color:#a3be8c;">virtio</span><span>' </span><span style="color:#d08770;">heads</span><span>='</span><span style="color:#a3be8c;">1</span><span>' </span><span style="color:#d08770;">primary</span><span>='</span><span style="color:#a3be8c;">yes</span><span>'/>
</span><span> <</span><span style="color:#bf616a;">address </span><span style="color:#d08770;">type</span><span>='</span><span style="color:#a3be8c;">pci</span><span>' </span><span style="color:#d08770;">domain</span><span>='</span><span style="color:#a3be8c;">0x0000</span><span>' </span><span style="color:#d08770;">bus</span><span>='</span><span style="color:#a3be8c;">0x00</span><span>' </span><span style="color:#d08770;">slot</span><span>='</span><span style="color:#a3be8c;">0x05</span><span>' </span><span style="color:#d08770;">function</span><span>='</span><span style="color:#a3be8c;">0x0</span><span>'/>
</span><span> </</span><span style="color:#bf616a;">video</span><span>>
</span><span> </</span><span style="color:#bf616a;">devices</span><span>>
</span><span></</span><span style="color:#bf616a;">domain</span><span>>
</span></code></pre>
<p>Change the paths of <code>kernel-qemu-4.19.50-buster</code>, <code>versatile-pb-buster.dtb</code>
and the <code>.qcow2</code> file to the correct absolute paths</p>
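<p>The three substitutions can be scripted with <code>sed</code>; the snippet below is a sketch run against a stand-in line (the destination paths under <code>~/rpi/</code> are examples, not a required location):</p>

```shell
# Stand-in for one placeholder line from the domain XML above:
printf '<kernel>/path/to/kernel-qemu-4.19.50-buster</kernel>\n' > /tmp/revpi-demo.xml

# Replace the placeholder with an absolute path; repeat analogously for
# the dtb and the .qcow2 placeholders in the real /tmp/revpi.xml:
sed -i "s|/path/to/kernel-qemu-4.19.50-buster|$HOME/rpi/kernel-qemu-4.19.50-buster|" /tmp/revpi-demo.xml

cat /tmp/revpi-demo.xml
```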
<ul>
<li>Create a new domain for RevPi</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> virsh define /tmp/revpi.xml
</span></code></pre>
<ul>
<li>Start the domain</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> virsh start revpi
</span></code></pre>
<ul>
<li>Make sure the avahi-daemon is started</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> systemctl start avahi-daemon
</span></code></pre>
<ul>
<li>Copy ssh credentials over, the password is <code>raspberry</code></li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">ssh-copy-id</span><span> pi@revpi.local
</span></code></pre>
<ul>
<li>Interact with the emulated image</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">ssh</span><span> pi@revpi.local
</span></code></pre>
<h2 id="installing-latest-node">Installing latest Node</h2>
<p>The guest has node 10.19.0
<a href="https://revolution.kunbus.com/download/3135/">present via RevPi backports</a>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">node -v </span><span style="color:#65737e;"># 10.19.0
</span></code></pre>
<p>Check the architecture</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">uname -m </span><span style="color:#65737e;"># armv6l
</span></code></pre>
<p>The emulated arm1176 CPU is unfortunately an <strong>armv6l</strong> architecture, which is
not officially supported by node</p>
<ul>
<li>Install <a href="https://github.com/nvm-sh/nvm#installing-and-updating">nvm</a>:</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">curl -o-</span><span> https://raw.githubusercontent.com/nvm-sh/nvm/v0.36.0/install.sh | </span><span style="color:#bf616a;">bash
</span></code></pre>
<p>Make sure that installation was successful</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#96b5b4;">command </span><span style="color:#bf616a;">-v</span><span> nvm
</span></code></pre>
<ul>
<li>nvm does not support <strong>armv6l</strong> either, use the
<a href="https://github.com/nodejs/unofficial-builds/issues/4">unofficial Node.js builds mirror</a>:</li>
</ul>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">NVM_NODEJS_ORG_MIRROR</span><span>=</span><span style="color:#a3be8c;">https://unofficial-builds.nodejs.org/download/release </span><span style="color:#bf616a;">nvm</span><span> install 14.9
</span></code></pre>
<p>Done!</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://blog.agchapman.com/using-qemu-to-emulate-a-raspberry-pi/">https://blog.agchapman.com/using-qemu-to-emulate-a-raspberry-pi/</a></li>
<li><a href="https://azeria-labs.com/emulate-raspberry-pi-with-qemu/">https://azeria-labs.com/emulate-raspberry-pi-with-qemu/</a></li>
<li><a href="https://ownyourbits.com/2017/02/06/raspbian-on-qemu-with-network-access/">https://ownyourbits.com/2017/02/06/raspbian-on-qemu-with-network-access/</a></li>
<li><a href="http://pub.phyks.me/respawn/mypersonaldata/public/2014-05-20-11-08-01/">http://pub.phyks.me/respawn/mypersonaldata/public/2014-05-20-11-08-01/</a></li>
<li><a href="https://github.com/meadowface/raspbian-qemu">https://github.com/meadowface/raspbian-qemu</a></li>
<li><a href="https://wiki.debian.org/KVM#Troubleshooting">https://wiki.debian.org/KVM#Troubleshooting</a></li>
<li><a href="https://help.ubuntu.com/community/KVM/Installation">https://help.ubuntu.com/community/KVM/Installation</a></li>
</ul>
How not to create a Node executable for ARM2020-09-09T00:00:00+00:002020-12-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/how-not-create-node-executable-arm/<p>There is one thing I have wanted to do properly for quite some time:</p>
<p><strong>Create executable NodeJS application on an x64 machine for ARMv6</strong></p>
<p>Now I have mastered the process to some extent, with the following steps:</p>
<ol>
<li>Write an app on a host x64 machine</li>
<li><code>scp</code> sources to the target ARMv6 machine (i.e. Raspberry pi based
computer)</li>
<li>Make an executable with <a href="https://github.com/vercel/pkg">pkg</a> by sending
commands over <code>ssh</code></li>
<li>Delete the sources on the target machine</li>
<li><strong>OPTIONALLY:</strong> Run a native <code>systemd</code> or
<a href="https://github.com/Unitech/pm2">pm2</a> service that keeps the app running</li>
</ol>
<p>This approach is automated to a single <code>npm run</code> command at this point.
However, it has several drawbacks, which I'd like to remove over time:</p>
<ol>
<li>It requires the machine with the target architecture running and
ssh-able (this could be replaced with QEMU and/or Docker, but I have not
gotten so far yet)</li>
<li>Debugging is less straightforward</li>
<li>It unnecessarily transfers sources, some of which could be accidentally
left there</li>
<li>It requires <code>node_modules/</code> on the target machine - they can be several
tens of times larger than the actual app (which bundles the node executable
in a range of 35 ~ 70 MB, depending on version and architecture)</li>
</ol>
<p>Of course, it would be easier to just build the executable and transfer it
to the target machine. C, Go and Rust can do it without much hassle.</p>
<h2 id="preparation">Preparation</h2>
<p>Multiple people would like to do cross-compiling to other architectures
as well, looking at <a href="https://github.com/vercel/pkg/issues/136">#136</a>,
<a href="https://github.com/vercel/pkg/issues/145">#145</a>,
<a href="https://github.com/vercel/pkg/issues/363">#363</a>,
<a href="https://github.com/vercel/pkg/issues/605">#605</a>,
<a href="https://github.com/vercel/pkg/issues/784">#784</a> among other sources. One
solution is to obtain a
<a href="https://github.com/robertsLando/pkg-binaries/releases/tag/v1.0.0">binary</a>
for a target architecture. The repository also allows you to prepare the
binaries on your machine using a Docker image, although users reported that
the process takes 10 hours or more to complete.</p>
<p>I have built it in the past to see what happens, but currently the
repository already contains a lot of prebuilt binaries; the
<code>fetched-v14.4.0-linux-armv6</code> one should suit me well enough once saved in
the right location (currently <code>~/.pkg-cache/v2.6/</code>). Let's try:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pkg -t</span><span> arm64 app.js
</span><span>> pkg@4.4.9
</span><span>> Warning </span><span style="color:#bf616a;">Failed</span><span> to make bytecode node14-arm64 for file /snapshot/test/app.js
</span></code></pre>
<h2 id="what-does-not-work">What does not work</h2>
<p>The first solution suggested in the aforementioned issue threads is to use
<code>--no-bytecode</code>. The results are unsatisfactory:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pkg -t</span><span> arm64 app.js</span><span style="color:#bf616a;"> --no-bytecode
</span><span>> pkg@4.4.9
</span><span>> Error! </span><span style="color:#bf616a;">--no-bytecode</span><span> and no source breaks final executable
</span><span> </span><span style="color:#bf616a;">/home/peterbabic/app.js
</span><span> </span><span style="color:#bf616a;">Please</span><span> run with "</span><span style="color:#a3be8c;">-d</span><span>" and without "</span><span style="color:#a3be8c;">--no-bytecode</span><span>" first, and make
</span><span> </span><span style="color:#bf616a;">sure</span><span> that debug log does not contain "</span><span style="color:#a3be8c;">was included as bytecode</span><span>".
</span></code></pre>
<p>Does <code>-d</code>, which stands for <code>--debug</code>, output useful information? Well, we
will get to it in a minute.</p>
<h2 id="adding-an-architecture">Adding an architecture</h2>
<p>Digging deeper, Debian-based distributions have an apparent solution in
adding architecture libraries from a repository:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">dpkg --add-architecture</span><span> i386
</span><span style="color:#bf616a;">apt-get</span><span> update
</span><span style="color:#bf616a;">apt-get</span><span> install</span><span style="color:#bf616a;"> -y</span><span> libc6:i386 libstdc++6:i386
</span></code></pre>
<p>Unfortunately, there is no readily available equivalent command set on
Arch Linux. I am tempted to try it in a VM. But for now, I will explain the
steps that I went through trying to make cross-compilation run.</p>
<h2 id="building-arm-binaries-on-amdx64">Building ARM binaries on AMDx64</h2>
<p>The steps to actually compile an executable ARMx86 (ARMv6/ARMv7) or ARMx64
(ARMv8) binary on my laptop, a 64-bit Intel machine, start with
a toolchain.</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">yay -S</span><span> arm-linux-gnueabihf-gcc
</span></code></pre>
<p>As of writing, the package is flagged out of date and won't install. Not
good. There is also an x64 toolchain available, provided by a
compiler named <code>aarch64-linux-gnu-gcc</code>. I did not know which package it
belonged to, so I ran this command:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">pacman -Fq</span><span> aarch64-linux-gnu-gcc | </span><span style="color:#bf616a;">sudo</span><span> pacman</span><span style="color:#bf616a;"> -S</span><span> -
</span></code></pre>
<p>Yeah, the package has the same name as the command. What was I thinking?
Never mind, it was just a few unnecessary keystrokes, but now we can build a
Hello World! for ARM x64 on an x86_64 machine:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">aarch64-linux-gnu-gcc</span><span> hello.c</span><span style="color:#bf616a;"> -o</span><span> hello
</span></code></pre>
<h2 id="running-arm-binaries-on-amdx64">Running ARM binaries on AMDx64</h2>
<p>Running it straight away won't work:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> file hello
</span><span style="color:#bf616a;">hello:</span><span> ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV)</span><span style="color:#bf616a;">,</span><span> dynamically linked, interpreter /lib/ld-linux-aarch64.so.1, BuildID</span><span style="color:#b48ead;">[</span><span>sha1</span><span style="color:#b48ead;">]</span><span>=12eca1ab69cdf6c78169cb8a9c86cf21ea8c5873, for GNU/Linux 3.7.0, not stripped
</span><span>
</span><span style="color:#bf616a;">$</span><span> ./hello
</span><span style="color:#bf616a;">zsh:</span><span> exec format error: ./hello
</span></code></pre>
<p>We need <code>qemu-aarch64</code> to run an ARM aarch64 executable:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>pacman -Fq qemu-aarch64 | sudo pacman -S -
</span></code></pre>
<p>Run our cross-compiled hello executable:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">qemu-aarch64</span><span> hello
</span></code></pre>
<p>The executable should greet us, instead of displaying an error.</p>
<h2 id="running-a-pre-compiled-nodejs-arm-x64-executable">Running a pre-compiled NodeJS ARM x64 executable</h2>
<p>With our newly acquired knowledge, we can try to run the node executable
from the beginning that will be bundled with <code>pkg</code>. Since our toolchain
for ARM x86 is not currently working, we download the x64 one to try:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">wget</span><span> https://github.com/robertsLando/pkg-binaries/releases/download/v1.0.0/fetched-v14.4.0-linux-arm64</span><span style="color:#bf616a;"> -P ~</span><span>/.pkg-cache/v2.6
</span></code></pre>
<p>As a side note, it will probably take a little time until ARM x64
becomes widespread. The Raspberry Pi 4 already touches on that problem,
although I have not touched one yet. But being prepared for the future is
sometimes also worth it.</p>
<p>Changing into the download directory and examining the file gives us the
expected ARM aarch64:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> file fetched-v14.4.0-linux-arm64
</span><span style="color:#bf616a;">fetched-v14.4.0-linux-arm64:</span><span> ELF 64-bit LSB pie executable, ARM aarch64, version 1 (GNU/Linux)</span><span style="color:#bf616a;">,</span><span> dynamically linked, interpreter /lib/ld-linux-aarch64.so.1, for GNU/Linux 3.7.0, BuildID</span><span style="color:#b48ead;">[</span><span>sha1</span><span style="color:#b48ead;">]</span><span>=c80da3252b3b6bc0dedfa29f77b38de5f55e771e, with debug_info, not stripped
</span></code></pre>
<p>Running this binary should not work:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> ./fetched-v14.4.0-linux-arm64
</span><span style="color:#bf616a;">zsh:</span><span> exec format error: ./fetched-v14.4.0-linux-arm64
</span></code></pre>
<p>What was not expected is that running it with <code>qemu-aarch64</code>, which proved
fruitful before also fails:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> qemu-aarch64 fetched-v14.4.0-linux-arm64
</span><span style="color:#bf616a;">/lib/ld-linux-aarch64.so.1:</span><span> No such file or directory
</span></code></pre>
<p>What package provides such a file?</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> pacman</span><span style="color:#bf616a;"> -F</span><span> ld-linux-aarch64.so.1
</span><span style="color:#bf616a;">community/aarch64-linux-gnu-glibc</span><span> 2.32-1 </span><span style="color:#b48ead;">[</span><span>installed</span><span style="color:#b48ead;">]
</span><span> </span><span style="color:#bf616a;">usr/aarch64-linux-gnu/lib/ld-linux-aarch64.so.1
</span></code></pre>
<p>Package <code>aarch64-linux-gnu-glibc</code> was installed a few steps back alongside
the cross-compiler <code>aarch64-linux-gnu-gcc</code>, as seen here:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> pacman</span><span style="color:#bf616a;"> -Si</span><span> aarch64-linux-gnu-glibc | </span><span style="color:#bf616a;">rg</span><span> Required
</span><span style="color:#bf616a;">Required</span><span> By : aarch64-linux-gnu-gcc
</span></code></pre>
<p>A quick sanity check: the file really exists:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> file /usr/aarch64-linux-gnu/lib/ld-linux-aarch64.so.1
</span><span style="color:#bf616a;">/usr/aarch64-linux-gnu/lib/ld-linux-aarch64.so.1:</span><span> symbolic link to ld-2.32.so
</span></code></pre>
<p>And the one that the executable wanted does not:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> file /lib/ld-linux-aarch64.so.1
</span><span style="color:#bf616a;">/lib/ld-linux-aarch64.so.1:</span><span> cannot open `</span><span style="color:#bf616a;">/lib/ld-linux-aarch64.so.1</span><span>'</span><span style="color:#a3be8c;"> (No such file or directory)
</span></code></pre>
<p>The obvious dirty solution, which would pollute your machine's <code>/lib</code>
with libraries for different architectures and would probably fail later on
with more dependencies, would be to copy (worse) or symlink (better):</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">sudo</span><span> ln</span><span style="color:#bf616a;"> -s</span><span> /usr/aarch64-linux-gnu/lib/ld-linux-aarch64.so.1 /lib
</span></code></pre>
<p>The precompiled binary should run now. Remove the symlink if you tried
it. There is a better solution:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">qemu-aarch64 -L</span><span> /usr/aarch64-linux-gnu/ fetched-v14.4.0-linux-arm64
</span></code></pre>
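<p>To avoid retyping the sysroot path, a small wrapper can help (the function name is my own invention):</p>

```shell
# Hypothetical convenience wrapper around qemu-aarch64 with the sysroot preset
run_aarch64() {
    qemu-aarch64 -L /usr/aarch64-linux-gnu/ "$@"
}
```

Any aarch64 binary can then be run as <code>run_aarch64 hello</code>.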
<p>This way, qemu knows where to look for the libraries. You can make it an
alias to shorten it and call it done, if you would just like to run the
binaries via a command. This is however not our goal here, remember? We
need a more global way to tell the emulator where the required libraries
are located. One way to do that is to provide this information via the
environment variable <code>QEMU_LD_PREFIX</code>, which is equivalent to the
<code>-L</code> parameter:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">QEMU_LD_PREFIX</span><span>=</span><span style="color:#a3be8c;">/usr/aarch64-linux-gnu/ </span><span style="color:#bf616a;">qemu-aarch64</span><span> fetched-v14.4.0-linux-arm64
</span></code></pre>
<p>If we only use qemu for one architecture at a time, we can export the
variable:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#b48ead;">export </span><span style="color:#bf616a;">QEMU_LD_PREFIX</span><span>=</span><span style="color:#a3be8c;">/usr/aarch64-linux-gnu/
</span></code></pre>
<p>With the variable exported we can now run the pre-built binary:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> qemu-aarch64 fetched-v14.4.0-linux-arm64
</span><span style="color:#bf616a;">internal/validators.js:121
</span><span> </span><span style="color:#bf616a;">throw</span><span> new ERR_INVALID_ARG_TYPE(name, '</span><span style="color:#a3be8c;">string</span><span>', value);
</span><span> </span><span style="color:#bf616a;">^
</span><span>
</span><span style="color:#bf616a;">TypeError </span><span style="color:#b48ead;">[</span><span>ERR_INVALID_ARG_TYPE</span><span style="color:#b48ead;">]</span><span>: The "</span><span style="color:#a3be8c;">path</span><span>" argument must be of type string. Received undefined
</span><span> </span><span style="color:#bf616a;">at</span><span> validateString (internal/validators.js:121:11)
</span><span> </span><span style="color:#bf616a;">at</span><span> Object.resolve (path.js:980:7)
</span><span> </span><span style="color:#bf616a;">at</span><span> resolveMainPath (internal/modules/run_main.js:12:40)
</span><span> </span><span style="color:#bf616a;">at</span><span> Function.executeUserEntryPoint </span><span style="color:#b48ead;">[</span><span>as runMain</span><span style="color:#b48ead;">]</span><span> (internal/modules/run_main.js:65:24)
</span><span> </span><span style="color:#bf616a;">at</span><span> internal/main/run_main_module.js:17:47 {
</span><span> code: '</span><span style="color:#a3be8c;">ERR_INVALID_ARG_TYPE</span><span>'
</span><span>}
</span></code></pre>
<p>This sure looks familiar to NodeJS users, doesn't it? We have in fact
made it run. It probably failed because the binary is not bundled yet;
bundling would include the actual code to run, which it cannot find.</p>
<p>Sadly, even if we can now run this binary on our host machine, the <code>pkg</code>
command in a development directory would still fail:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> npx pkg</span><span style="color:#bf616a;"> -t</span><span> arm64 app.js</span><span style="color:#bf616a;"> -d
</span><span>
</span><span style="color:#bf616a;">...</span><span> long output omitted ...
</span><span>
</span><span style="color:#bf616a;">/home/peterbabic/.pkg-cache/v2.6/fetched-v14.4.0-linux-arm64:</span><span> /home/peterbabic/.pkg-cache/v2.6/fetched-v14.4.0-linux-arm64: cannot execute binary file
</span><span style="color:#bf616a;">/home/peterbabic/.pkg-cache/v2.6/fetched-v14.4.0-linux-arm64:</span><span> /home/peterbabic/.pkg-cache/v2.6/fetched-v14.4.0-linux-arm64: cannot execute binary file
</span><span>> Warning </span><span style="color:#bf616a;">Failed</span><span> to make bytecode node14-arm64 for file /snapshot/app.js
</span></code></pre>
<p>We see that it correctly calls the binary that we were able to run
separately just a few moments before, but now the problem is that
<code>pkg</code> has no way to know that it needs to call <code>qemu-aarch64</code>
to execute that binary transparently. For this, we need to set up <code>binfmt</code>.</p>
<h2 id="transparently-execute-an-alien-binary">Transparently execute an alien binary</h2>
<p>I have found that to run alien binaries natively, I can do this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">yay -S</span><span> binfmt-qemu-static
</span></code></pre>
<p>It also has an optional package worth noting, that I keep installed as well:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> yay</span><span style="color:#bf616a;"> -Si</span><span> binfmt-qemu-static | </span><span style="color:#bf616a;">rg</span><span> Optional
</span><span style="color:#bf616a;">Optional</span><span> Deps : qemu-user-static
</span></code></pre>
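<p>Under the hood, these packages register qemu with the kernel's <code>binfmt_misc</code> mechanism, keyed on the ELF magic bytes of aarch64 executables. The registration entry looks roughly like this (the interpreter path and trailing flags vary by distribution):</p>

```
:qemu-aarch64:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-aarch64-static:F
```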
<p>Now with <code>QEMU_LD_PREFIX</code> in place, we can run the pre-compiled binary
like this:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">~/.pkg-cache/v2.6/fetched-v14.4.0-linux-arm64
</span></code></pre>
<p>Yet, this still does not allow us to run <code>pkg</code>:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> npx pkg</span><span style="color:#bf616a;"> -t</span><span> arm64 app.js</span><span style="color:#bf616a;"> -d
</span><span>
</span><span style="color:#bf616a;">...</span><span> long output omitted ...
</span><span>
</span><span style="color:#bf616a;">/lib/ld-linux-aarch64.so.1:</span><span> No such file or directory
</span><span style="color:#bf616a;">/lib/ld-linux-aarch64.so.1:</span><span> No such file or directory
</span><span>> Warning </span><span style="color:#bf616a;">Failed</span><span> to make bytecode node14-arm64 for file /snapshot/app.js
</span></code></pre>
<p>Things start to get blurry for me around this point, because
<code>QEMU_LD_PREFIX</code> seems to be ignored when <code>pkg</code> needs it (it is being run
via <code>npm</code>/<code>npx</code>/<code>pkg</code>, none of which provides any info via the
<code>ldd</code> command).</p>
<p>I was able to move further by resorting to the symlinking of the library
into <code>/lib</code> mentioned before:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">$</span><span> npx pkg</span><span style="color:#bf616a;"> -t</span><span> arm64 app.js</span><span style="color:#bf616a;"> -d
</span><span>
</span><span style="color:#bf616a;">...</span><span> long output omitted ...
</span><span>
</span><span style="color:#bf616a;">/home/peterbabic/.pkg-cache/v2.6/fetched-v14.4.0-linux-arm64:</span><span> error while loading shared libraries: libdl.so.2: cannot open shared object file: No such file or directory
</span><span style="color:#bf616a;">/home/peterbabic/.pkg-cache/v2.6/fetched-v14.4.0-linux-arm64:</span><span> error while loading shared libraries: libdl.so.2: cannot open shared object file: No such file or directory
</span><span>> Warning </span><span style="color:#bf616a;">Failed</span><span> to make bytecode node14-arm64 for file /snapshot/v2.6/app.js
</span></code></pre>
<p>This is the dead end for me. No matter what symlink voodoo I tried, it
refused to find that file, sitting comfortably at
<code>/usr/aarch64-linux-gnu/lib/libdl.so.2</code>. My machine also has a
<code>/lib/libdl.so.2</code> file, which is of course compiled for x86_64, so
symlinking is definitely risky and there has to be another way. If you know
more, please let me know.</p>
<h2 id="side-note">Side note</h2>
<p>Also, it occasionally chokes on a missing <code>libstdc++.so.6</code>. This library is
present in <code>/usr/aarch64-linux-gnu/lib64</code>. Searching the Internets high and
low, I have found a hacky solution:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#b48ead;">export </span><span style="color:#bf616a;">LD_LIBRARY_PATH</span><span>=</span><span style="color:#a3be8c;">/usr/aarch64-linux-gnu/lib64
</span></code></pre>
<p>But this env variable should generally be avoided, for reasons I do not
fully comprehend yet.</p>
<h2 id="conclusion">Conclusion</h2>
<p>The process of preparing a working binary of a NodeJS application for
the ARMx86 architecture on an AMDx64 machine would allow me a faster build
cycle. Unfortunately, no matter what I have tried so far, the solution
seems to elude me.</p>
<p>The journey documented in this article served as a rich educational
course for me, so it is not all lost. Hopefully, you find something
interesting here as well.</p>
<h2 id="references">References</h2>
<ul>
<li><a href="https://github.com/robertsLando/pkg-binaries">https://github.com/robertsLando/pkg-binaries</a></li>
<li><a href="https://gist.github.com/bruce30262/e0f12eddea638efe7332">https://gist.github.com/bruce30262/e0f12eddea638efe7332</a></li>
<li><a href="https://gist.github.com/mikkeloscar/a85b08881c437795c1b9">https://gist.github.com/mikkeloscar/a85b08881c437795c1b9</a></li>
<li><a href="https://ownyourbits.com/author/cisquero_admin/">https://ownyourbits.com/author/cisquero_admin/</a></li>
<li><a href="https://wiki.archlinux.org/index.php/Binfmt_misc_for_Java">https://wiki.archlinux.org/index.php/Binfmt_misc_for_Java</a></li>
<li><a href="https://wiki.debian.org/QemuUserEmulation">https://wiki.debian.org/QemuUserEmulation</a></li>
</ul>
How to update Google Calendar with pre-push git hook2020-08-24T00:00:00+00:002020-12-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/how-update-gooogle-calendar-pre-push-hook/<p>With my online course building journey, I had to keep track of the slides
it has. My coach has set me a goal and it looked like a really hard one to
reach, but it seemed doable. I wanted it to be a challenge. From some
numbers I got from the previous month, I came to the conclusion that I can
build on average around three slides per day. I was quite comfortable with
that number, but the coach thought differently.</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>Make it four.
</span></code></pre>
<p>Those were his words. He personally went on a similar journey before me
and he knew how to push himself. Being a coach, he also knows how to push
others that need help. Jumping from three to four slides a day does not
seem like a lot, but it meant an increase from 90 to 120 slides. Doing 4
slides a day, I would be finished in around 23 days, which would leave me
with exactly 7 days for other projects and relaxation. And all 30 days of
the upcoming month were already occupied in my calendar.</p>
<h2 id="presentation-arrangement">Presentation arrangement</h2>
<p>I am building the slides using <a href="https://pandoc.org/MANUAL.html">pandoc</a>.
Pandoc is a great piece of software that allows me to transform markdown
text into a <a href="https://revealjs.com/">revealJS</a> presentation, which is just
another great tool; if you do not know about these, just take a look.
They both contain more features than I could count.</p>
<p>One concept that RevealJS provides which is not common among other
presentation engines I have tried (PowerPoint, LibreOffice Impress or LaTeX
Beamer) is a two-dimensional slide arrangement. It makes a
grid of vertical and horizontal slides. All of the others have only one
dimension, or at least that held true up until the time I tried them,
without much special configuration.</p>
<p>Markdown's level 1 heading (denoted by a single hashtag <code>#</code>) gets converted
to a horizontal slide. You can think of a horizontal slide as a chapter
name. It denotes a part of the presentation, but does not bear any content
itself. All level 2 headings (denoted by a double hashtag <code>##</code>) get
converted to vertical slides below the current horizontal one, up to the
next horizontal one. Only vertical slides show content (text, bullet lists,
images, ...). It feels really logical and neat to me. As a side bonus, when
you press the <code>o</code> key, it displays a whole 2D outline of your presentation,
which looks like a grid.</p>
<h2 id="counting-slides">Counting slides</h2>
<p>Since the whole course is one gigantic <code>.md</code> file, which of course is
stored as a text format, we can simply count all level 2 headings in that
file to get the number of slides programmatically. The syntax for <code>grep</code> is:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">grep </span><span>"</span><span style="color:#a3be8c;">##</span><span>" presentation.md | </span><span style="color:#bf616a;">wc -l
</span></code></pre>
<p>And for <code>the_silver_searcher</code> it is the same:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">ag </span><span>"</span><span style="color:#a3be8c;">##</span><span>" presentation.md | </span><span style="color:#bf616a;">wc -l
</span></code></pre>
<p>If you do not use
<a href="https://github.com/ggreer/the_silver_searcher">the_silver_searcher</a>, it is
also one of the tools worth looking into. It claims to be much
<a href="https://geoff.greer.fm/ag/speed/">faster</a> than grep. It can save you a few
milliseconds here and there. As a side note, <code>ag</code> is only two characters,
which is half compared to <code>grep</code>. Unless you have an alias that looks
something like <code>alias gr='grep'</code>, it also saves you keystrokes.</p>
<p>The result is a single number showing the count of lines containing a
level 2 markdown heading, which represents the number of vertical slides in
the presentation. Since only vertical slides have content, this is
precisely all we need.</p>
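<p>The counting step is easy to check on a tiny sample file (the content below is made up for illustration):</p>

```shell
# Build a throwaway sample presentation and count its level 2 headings
cat > presentation.md <<'EOF'
# Chapter 1
## Slide one
## Slide two
# Chapter 2
## Slide three
EOF

grep "##" presentation.md | wc -l    # prints 3
```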
<h2 id="google-calendar-api">Google Calendar API</h2>
<p>At first, I just thought updating an event in gCal would be really fast,
but I could not have been further from the truth. The HTTP API
<a href="https://developers.google.com/calendar/v3/reference/events/patch">documentation</a>
for the PATCH method shows a tidy URL. PATCH is the method usually used in
a REST API to update <em>part</em> of an entity.</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>PATCH https://www.googleapis.com/calendar/v3/calendars/calendarId/events/eventId
</span></code></pre>
<p>Yeah, nice! I just run a <code>curl</code> or its newer cousin,
<a href="https://httpie.org/">httpie</a>, request to update some information in the
Google Calendar event and I am done. Well, it requires three additional
pieces of information:</p>
<ol>
<li>calendarId</li>
<li>eventId</li>
<li>authorization</li>
</ol>
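<p>Assembled with placeholder values, such a curl call would look roughly like this (the IDs and token are dummies, so the command is only echoed as a dry run rather than executed):</p>

```shell
# Sketch of the PATCH request; calendarId, eventId and the token are
# placeholders, hence the echo dry run instead of a real network call
CALENDAR_ID="calendarId"
EVENT_ID="eventId"
ACCESS_TOKEN="dummy-token"

echo curl -X PATCH \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"summary": "Course slides: 42/120"}' \
  "https://www.googleapis.com/calendar/v3/calendars/$CALENDAR_ID/events/$EVENT_ID"
```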
<p>I had tried looking at the URL while browsing the gCal web UI, but I
could not find any IDs there, nor anywhere in the settings. The
<em>authorization</em> part was even more complicated. If you have never enabled
the Calendar API, it is a lot of steps. I will not explain them here,
because they are documented by Google itself, and as anything Google, they
are subject to (frequent) change. What I ended up doing was enabling it and
<strong>not</strong> using <code>curl</code>, nor any other plain request tool, because handling
OAuth2 <em>access</em> and <em>refresh</em> tokens in bash would be a hassle. There is
also an older API key based way, which could work easily, but it has some
limitations and it was
<a href="https://stackoverflow.com/questions/42015397/still-possible-to-use-google-calendar-api-with-api-key">not completely clear</a>
to me if it is still a feasible option, thus I went with the OAuth2 option.</p>
<h2 id="the-update-script">The update script</h2>
<p>Google Calendar API also supports multiple languages (more are being
added):</p>
<ul>
<li>Go</li>
<li>Java</li>
<li>JavaScript</li>
<li>node.js</li>
<li>PHP</li>
<li>Python</li>
<li>Ruby</li>
</ul>
<p>My course is made as a node.js project, so naturally I have followed the
provided quickstart
<a href="https://developers.google.com/calendar/quickstart/nodejs">example</a>.
Unfortunately, it only shows how to create an event, not how to update an
existing one. I have tried learning more about the node API for the
Calendar <code>patch</code> method
<a href="https://googleapis.dev/nodejs/googleapis/latest/calendar/classes/Resource$Events.html#patch">here</a>,
but somehow I could not understand a word there. Again, StackOverflow to
the rescue. The
<a href="https://stackoverflow.com/questions/42842459/update-event-in-google-calendar-api-javascript">post</a>
here shows the guidelines to updating the event. By combining these three
resources I was able to get a working script; you can have a look at the
<a href="https://github.com/peterbabic/sources-peterbabic.dev/tree/master/how-update-gooogle-calendar-pre-push-hook/hook.js">source</a>.</p>
<h2 id="bash-in-node">Bash in node</h2>
<p>If you dig through the script, you might find another oddity: the
function <code>lineCount</code>. It calls the bash command we figured out earlier from
within a node script. I could do it straight up in node, so it would not be
such a tangled mess, but hey, I had it already figured out and life is short.</p>
<p>A Medium
<a href="https://medium.com/stackfame/how-to-run-shell-script-file-or-command-using-nodejs-b9f2455cb6b7">article</a>
I followed provided not one but multiple already working options to do
this. I have chosen the one that uses promises. Even though the quickstart
article used callbacks, I wanted to progressively evolve the script to only
use <em>arrow functions</em> and <code>async/await</code> syntax in the end, if possible.</p>
<h2 id="git-hooks">Git hooks</h2>
<p>A git hook is a technique to call certain scripts during the git
lifecycle. There are a lot of them. You can learn more by studying the
sample files (<code>ls .git/hooks/*.sample</code>) in your repository or via
<code>man githooks</code>. Since there are so many, choosing the right one can be
difficult. Initially I thought I would use <code>post-push</code>, but as I later
learned, such a hook does not exist in git! The StackOverflow
<a href="https://stackoverflow.com/questions/9038616/git-post-push-hook">post</a> was
the first thing that popped up in the search results, explaining that this
hook would require the remote repository to execute the code, which is not
implemented.</p>
<p>The next in line was the <code>pre-push</code> hook. Usually, pre-hooks are used to
perform checks before executing an action (for example, checking the
wording of a commit message). I did not need any checks to be performed, I
just wanted to call a script together with <code>git push</code>. This time, it did
not matter whether it ran before or after the push, so I used it.</p>
<p>Another question was where to put hooks so they are tracked by git.
Putting hooks inside the <code>.git/hooks/</code> directory makes them work, but git
won't list them even as untracked files. If a hook is part of the overall
project, you have to put it elsewhere. This Medium
<a href="https://medium.com/@anandmohit7/improving-development-workflow-using-git-hooks-8498f5aa3345">article</a>
outlines a concept of putting them inside a <code>hooks/</code> directory and
symlinking them into the <code>.git</code> repository.</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">mkdir</span><span> hooks
</span><span style="color:#96b5b4;">cd</span><span> hooks
</span><span style="color:#bf616a;">touch</span><span> pre-push
</span><span style="color:#96b5b4;">cd</span><span> ..
</span><span style="color:#65737e;"># CAUTION: the -f parameter overwrites destination!
</span><span style="color:#bf616a;">ln -s -f</span><span> ../../hooks/pre-push .git/hooks/pre-push
</span></code></pre>
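<p>The whole arrangement can be rehearsed in a scratch directory; here the <code>.git/hooks</code> tree is created by hand just to watch the symlink resolve (no real repository involved, and the hook body is an assumed one-liner calling the article's <code>hook.js</code>):</p>

```shell
# Scratch rehearsal: tracked hook in hooks/, symlinked into .git/hooks/
mkdir -p scratch/hooks scratch/.git/hooks
cd scratch

printf '#!/bin/sh\nnode hook.js\n' > hooks/pre-push
chmod +x hooks/pre-push

# Relative target: from .git/hooks/, ../../ points back to the repo root
ln -s -f ../../hooks/pre-push .git/hooks/pre-push

cat .git/hooks/pre-push    # prints the two-line script above
```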
<h2 id="wrapping-up">Wrapping up</h2>
<p>By setting this working script as a <code>pre-push</code> git hook, I was able to
update my Google Calendar event automatically together with <code>git push</code>. Now
the advantages are threefold:</p>
<ol>
<li>I can see the slides count straight up</li>
<li>My coach can track my progress</li>
<li>My girlfriend can see if I am meeting my goals or if I need more help</li>
</ol>
<p>The third one is especially helpful if you have a committed, helpful
girlfriend like I do. It helps us plan out relaxation time much better this way.</p>
<p>Hopefully, this guide was helpful to you in some way, be it general
knowledge, task automation, git concepts or you are just curious. If you
have any questions, please contact me.</p>
Building on your previous work2020-08-07T00:00:00+00:002020-12-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/building-on-your-previous-work/<p>There was quite a long pause between my previous article and this one. I
had a lot of stir in my personal life, a lot of events happening
concurrently. I could not prepare for a lot of them, because I did not see
them coming. Yet, I am grateful for every single push life gives me,
because I feel much more alive when I am being pushed or even held back by
forces outside of my reach than when just sitting all day, spending my
time doing nothing at all.</p>
<p>I did not promise any time schedule for the articles I publish yet,
mostly because it has been just about 50 days since I started this journey
and I do not really know what my audience is, or even if there is any
audience to speak of in the first place.</p>
<p>Yet it does not mean I was slacking all the time I was not posting any
new articles. I have read that the times of publishing 500-word articles
and getting away with it are long gone. I have not been blogging long
enough to witness such a realization myself, but from the perspective I
gain by reading other people's work, I can confirm that longer articles
with a personal story are far more engaging. Showing vulnerability is even
more added value. Some people stick to the consistency rule, which I
roughly explain for myself as "doing some task every day".</p>
<p>I am being consistent, but unfortunately not in writing. My coach is
helping me a great deal to meet my goals; he himself went through a similar
journey a few years ago. Writing is something I am just starting to be
passionate about, and I enjoy it more and more the more I practice.
The freedom to express my thoughts is relieving. There are people who claim
that an article of 10k to 12k words is the output that brings results these
days, and I do stick with it, even though it takes me half a day to produce
such a long article.</p>
<p>It is hard for me to meet so many goals at the same time. One article a
week. One server update a week. Four slides for the course a day. Cook and
exercise. Freelance to not lose premium clients. Save all the screencasts
to produce extra content. Keep my operating system up to date. Cram in
enough time to read the books or articles I have in my reading list.
Spend quality time with my girlfriend as compensation for her being so
supportive in every possible way. There are a lot of goals and activities I
want to engage in, but only so little time. I believe you have been in the
same situation at least once in your life.</p>
<p>The thing is, however, that real productivity, if I can really call this
recent chaotic multitasking that, inevitably produces results. This is the
one single tip I want to talk to you about today.</p>
<h2 id="hard-work-brings-results">Hard work brings results</h2>
<p>It is true that working smarter is better than working harder, but you
have to try things, and fail a lot, to get there. Until you are comfortable
in your area of expertise, you are just putting in hours, days and months
of work to get there. That is hard work. Hard work inevitably brings
results. They might not be the results you are aiming for, but results
nevertheless.</p>
<p>For me, recently, every day is filled with building graphic materials. I
have tried to use some open libraries for graphics in the past, and I have
written about it, but they contain only generic images. They of course save
time, but if you are trying to find content you can reuse in a specialised
area, chances are you simply will not find an image at all. Even if you
find some paid content, and even though it usually costs around two euros,
it will not fit the overall style of your work. Everyone simply has a
different style, and history has proven again and again that a consistent
brand image is worth a great deal more than just sticking a bunch of
unrelated stuff together. It is also the case with the images you are using
in your work. If an image is in a different style, it simply does not fit,
even if it is cheap.</p>
<h2 id="know-your-content">Know your content</h2>
<p>If you have come down the path of producing content, you produce a lot of
...content. And unless someone else is consuming <strong>every</strong> single piece of
it, you are the only person in the world who knows all that content by
heart. It is like taking notes from a lecture in school. It takes just one
peek into your notebook to recall a whole half hour of the lecturer's
ramble.</p>
<p>I have read before about the fact that you can reuse some of your content,
but for most of my life so far I have not been publishing much, apart from
some personal Facebook posts or random Instagram photos that had no
coherence.</p>
<p>Today I was able to experience this effect <em>in persona</em>. I wanted to go
rollerblading before the sun went down, but I had not met my critical
tasks for the day yet and it was getting late. I needed to draw a few
more illustrations. Fortunately, they needed exactly the little pieces of
content I had made before, so I could just copy and paste them. That was it.
My larger illustration was mostly compiled from my previous smaller
illustrations. I met my goals for the day, even overshot them a little,
and could still move my body before sunset.</p>
<h2 id="reuse-to-save-time">Reuse to save time</h2>
<p>It would not be feasible to draw everything from scratch. I am grateful
for what I do, and that I can do it. Most of what I did in the past was
producing lines of code. I could copy and paste code as well, but it
feels different. Maybe because I am new in the field, copying smaller
illustrations into a larger one feels completely different than copying
parts of code from smaller projects into a bigger one. Maybe because I
usually rework most of the copied code to fit the destination project. Or
maybe code is not visual enough to experience this effect. I do not know.
But copying illustrations is a great relief for me. I hope you can reuse
your older content for new work and feel this too.</p>
<p>By being consistent and producing daily, I was able to finish some tasks
far sooner than I had expected, because of the non-linear nature of it.
Next time you feel overwhelmed by the amount of content you strive for,
just try to force yourself to do it. Chances are your mind will come up
with content you have made in the past that fits your current work greatly
and saves you a lot of time. I wish for you to be as surprised as I was to
experience this effect on your own, if you haven't yet. Stay creative and
the results are likely to follow.</p>
Three reasons why you should spend time in nature as a programmer2020-07-24T00:00:00+00:002020-12-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/three-reasons-spent-time-nature-programmer/<p>Do you spend considerable time staring at a screen? You probably do. You
are doing so right now, reading this. If you do some (or mostly) programming
during your time behind a monitor, you might be missing out on some great
sights out there. And not only that.</p>
<p>Relaxing in nature as a programmer can indirectly increase your
productivity. Here is a list of three reasons I have identified why
spending time in nature can boost your programming.</p>
<h2 id="1-relaxing-requires-effort">1. Relaxing requires effort</h2>
<p>In today's productivity-obsessed era, you might easily get caught in a
cycle of putting in as many hours as possible to get the job done. But this
approach usually leads to diminishing returns. When you are sleep deprived,
you make more mistakes. I even tend to choose sub-optimal solutions in this
state. There have been countless nights when I went to bed really late,
unable to fall asleep, my brain still in high revs, because I had been
cobbling together the first solution that appeared viable when I started.
It is not fun to realize the next day, when I finally manage to wake up,
that it could have been done in a faster, yet still efficient manner.
Sometimes experimenting with different solutions is essential, and you
should not hesitate to throw away unneeded code, especially when it is in
VCS, but realizing that you already knew the right solution and did not
choose it simply because your brain was tired is painful.</p>
<p>Relaxing is a form of art - the art of not working. Relaxing can overlap
with procrastination, and this might be the reason many people neglect it.
But for me, there is a difference. When I catch myself procrastinating, it
is usually something connected with a low-effort, high-reward activity.
Watching a YouTube video with a Warcraft 3 match. Eating a yoghurt with
added sugars, even though I am not hungry. Reading endless comments below
Hackaday's posts. You name it. Relaxing, on the other hand, requires some
effort from me. Apart from a few unidentified plants that could be present
in the space I usually work in, I need to make an effort to get into
nature. I currently live in the city. If time is short, I can choose a
bench in a park to sit on for a while. The effort required is usually
walking there. To get deeper into nature, I am required to ride a bike or
drive and hike to the place, all of which are considerably harder to pull
off than just opening a web page. Thus, for me, relaxing requires effort;
procrastination does not. Identifying when you relax is the first step to
actually relaxing. Relaxation helps you rest. Procrastination usually does
not.</p>
<h2 id="2-50-shades-of-green-for-your-eyes">2. 50 shades of Green for your eyes</h2>
<p>Nature, especially when blooming during the vegetation period, provides a
plethora of colors to look at. Yes, your newest screen might advertise
support for a zillion and one colors, and it might even be true, depending
on how you measure it. But there is a difference between looking at all
the colors on your monitor and out there. First, as a programmer you
usually choose colors that help differentiate text from the background, and
also give the words you read in the code some context; for instance,
<em>reserved words</em> in a programming language of your choice are a different
color than <em>variables</em> are. You might even have your favorite theme
already, for instance Solarized, GruvBox or even the less known, beautiful
<a href="https://github.com/arcticicestudio/nord">Nord</a> color palette, which you can
readily use in your next project. No matter which palette you use, most of
the time there is a discrete number of colors used. These coloring patterns
are helpful for work, but this is not what the eye evolved for.</p>
<p>In fact, the human eye, like everything alive, evolved to be the best fit
for survival in a given environment. The environment for the eye is shaped
by sunlight. Specifically the portion of it that we call
<em>visible light</em>. This specific portion of the electromagnetic spectrum is
the band of wavelengths that can pass through water. Yes, even though
we no longer need to see underwater to survive, the trait remains in
our genome. Now, most plants appear green to our eyes. This is
because chlorophyll requires wavelengths around red and blue to break bonds
in molecules to do its thing (creating energy for the plant). Chlorophyll
thus does not absorb green wavelengths, so it reflects them to our eyes.
Now, the wavelengths in sunlight peak around the green color, which is
near the middle of the spectrum. Since it is in the middle, the
sensitivity of the three types of receptors in our eyes also overlaps
mostly in the middle. This gives us the ability to distinguish more shades
of green than of any other color. Combining the fact that nature reflects a
lot of shades of green with the fact that our eyes can distinguish these
shades best, this provides some nice scenery that you can experience right
away. Yeah, I am advising you to go outside and stare at leaves. Look at
the individual plants and trees in the distance and relax. I find this to
be the best relief for the eyes, and it is free. Remember this next time
you seek some interruption from work.</p>
<h2 id="3-keeping-your-veins-healthy">3. Keeping your veins healthy</h2>
<p>As I have mentioned earlier, getting out, off the desk, requires effort.
Effort in the form of moving your body. Maybe you are aware of the fact
that your brain needs a steady flow of oxygen to produce results. Well, for
this to work properly, you need a healthy cardiovascular system. If your
veins are clogged, it becomes harder for blood to move around. If you
regularly visit your favorite places in nature by riding a bike, walking or
hiking, it helps keep your veins healthy, or even makes them more
efficient. The body adapts to a lot of stimuli. I am no specialist in this
topic, and I believe that if sitting at a desk all the time, just using
your brain and fingers, is all you do, the veins that supply your brain may
still remain healthy through their daily usage.</p>
<p>There is however still a good reason to keep the cardiovascular system
healthy for the other organs as well. For one, it helps you live longer,
which is, if you are living a good enough life, an important reason. When
I did not keep my body in good shape, it had a lot of other bad side
effects, namely obesity. Obesity made my low self-esteem even lower. This
made me believe even less in my soft skills, maybe passing on opportunities
that I could have easily handled. The day I started seeing myself as fit
and healthy, my perception of the world changed drastically, for the
better. And being great at programming requires a lot of other skills,
confidence being one. Confidence is required for almost everything when
delivering outstanding results. In programming, if you can confidently
persuade your client or your boss that what you are proposing will work, it
may let you work on that proposal, or keep that job in the first place.
Becoming fit and healthy was the biggest boost to my overall confidence I
can think of in my life, and I became fit by a lot of riding, walking and
hiking. If you are serious about programming, consider moving your body
regularly, if you are not already doing so.</p>
<h2 id="bonus-nature-can-provide-unexpected-solutions">Bonus: Nature can provide unexpected solutions</h2>
<p>Sometimes it happens that you work on some hard task and cannot find a
solution anywhere. You stop and go do something else; for instance, you
take a walk. As you walk, you stumble across ants moving a piece of food
here and there. You may not realize it right away, but your mind can bring
you back into the <em>working</em> state and you get a long-needed hint that can
bring you closer to the solution you are after. There is a lot of
engineering that is inspired by solutions that nature itself came up with.
Your solution, for instance, might imitate how ants pass pieces of building
material among themselves.</p>
<p>I know, working when you are trying to relax is not a good way to relax.
This is a bonus point, because it does not really fit into the <em>relaxing</em>
in nature topic. It does, however, relate to nature and programming. The
three main points I have presented are all about relaxing, about spending
time not working in order to deliver better results later. Relaxing more
deeply, <em>meditating</em>, having no thoughts, is a delicate art that only a few
master properly and one I myself still struggle with, so for now, relaxing
while having thoughts is sufficient for me.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Relaxing in nature as a programmer brings at least three positive effects.
I make use of all of them regularly. You can start doing all three right
away:</p>
<ol>
<li>Relaxing helps identify procrastination, because it requires effort</li>
<li>The colors that plants reflect help relax my eyes</li>
<li>Getting to the place to relax maintains my body's health</li>
</ol>
<p>Apart from that, changing the environment you spend time in, even for a
brief period, can help lead you towards unexpected solutions, or you can
even find inspiration in nature itself. The next time you feel like not
working, just go outside and relax.</p>
How enjoying the moment made me a positive person2020-07-12T00:00:00+00:002020-12-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/how-enjoying-moment-made-me-positive-person/<p>Have you ever got into a situation you did not want to end up in because
some part of your plan failed miserably? Chances are that you have, and
not just once. In fact, probably every time you have prepared a detailed
plan for a day, it has ended up not being fulfilled completely. The higher
the level of detail you introduced into your plan, the more the plan
deviated from reality afterwards.</p>
<p>If you strive to build something, you probably plan your steps daily. I was
able to observe that the effect of this plan deviation tends to get lower
the longer the time period it spans, and this makes complete sense to me.
Even though I am able to make a perfect plan down to hours or even minutes
for the next day, I am not able to create such a detailed plan for a
five-year time span. My plan for a year usually has just a few points, and
a five-year plan always contains just one ultimate goal.</p>
<p>Yesterday I identified one trait that helps me keep the right attitude
towards life when things do not go as planned. I had a plan for a whole
day. It included multiple people and multiple places, but it also included
some aspects that were out of my control.</p>
<h2 id="the-plan">The plan</h2>
<p>Together with my girlfriend, we went to a friend's cottage to meet some old
friends, in a car that I had borrowed from my father. Thus, let me present
the glorious wonder that I composed the morning before. A master plan in
which every single piece fits together like a puzzle. Just kidding, it is
just an ordered list of tasks for the next day:</p>
<ol>
<li>Wake up at 06:00</li>
<li>Do 50 pull-ups</li>
<li>Prepare breakfast</li>
<li>Leave the cottage at 08:00</li>
<li>Leave the car parked near the bus station, where my father would pick it
up</li>
<li>Take the bus to the major city at 09:00 and there take another bus to
the small town, arriving at 11:20</li>
<li>Wait there till 11:50 for another bus to take us to the destination at
12:57</li>
<li>Be picked up by a relative at the station to have a family lunch around
13:00</li>
<li>Spend some time with the family</li>
<li>Borrow a car from that relative and leave at 16:00</li>
<li>Enjoy the event in the city at 19:00</li>
<li>Sleep in a bed at home at 22:00</li>
</ol>
<p>I had this list prepared before we came to the party, so we could focus on
the activities with the company during the evening there. As you can see,
there was not much time left the next day, except for the morning. I was
not sure anyone else would be up so early in the morning anyway, so I did
not count on it.</p>
<p>It was a long, hot summer evening. When every piece of food was either
eaten or burnt, and most of the folks decided to go to bed, we two did the
same, leaving only the most persistent guys around the fire. I checked that
my alarm was set, brushed my teeth and laid down, prepared for the next
day.</p>
<h2 id="the-execution">The execution</h2>
<p>The list is pretty simple and straightforward, don't you think? I did. How
many of its points do you think went as planned? Just a few, to be honest.
Some came pretty close, but either the time or the place was different for
every single point on the list. Here is the breakdown of events:</p>
<ol>
<li>
<p>The alarm did wake me up. To my amazement, it was not the generic Motorola
alarm playing from my phone, the sun was not visible, as it should be at
06:00 at this time of the year, and my girlfriend seemed too scared. No,
the time was 03:33, we were still in the cottage and the alarm was a radio
receiver / MP3 player in the next room playing Englishman in New York at
full volume.</p>
<p>Half asleep, I just turned the volume down to the lowest setting, calmed
down my girlfriend and comforted myself under the blankets, falling
asleep soon after.</p>
<p>Another alarm. My phone was not making the sound this time either. The
time was 04:33. That damn thing in the next room was at full volume again.
I still do not know if it was a joke of some kind or whether someone
visiting before us really needed to make sure everyone would wake up. This
time I had to switch on the lights to end this nonsense once and for all.
I unplugged the device, shaped like a large brick, from the wall.
Strangely, it had an Ethernet port on the back. Why on Earth does an MP3
player in a cottage somewhere in the woods need a LAN connection? I was
still too sleepy to investigate. Maybe it has UPnP / DLNA functionality,
so Kodi can stream music there...</p>
<p>I woke up when the sun shining through the windows touched my face. My
phone was showing the time 07:13. I must have turned off the alarm in my
sleep. Well, this is going to be tight, I thought.</p>
</li>
<li>
<p>The pull-ups were skipped immediately. No time for that. I could still
prepare the breakfast in time with some corner-cutting.</p>
</li>
<li>
<p>It took me some time to find any kind of fat for cooking. As I
suspected, everyone was still sleeping. The only thing I could find
without waking anyone up was an almost empty bottle of vegetable oil in
the next cottage, but it served. Yet another alarm. 07:30, girlfriend's
phone. I woke her up gently, with the smell of the eggs my father had
given me as a gift before we left. These smelled really good. They were
laid by the four chickens he keeps in the yard himself. You won't buy eggs
like these in the supermarket.</p>
<p>The coffee machine had its fair share of surprises as well. I am always
amazed by the variety of machines that can make coffee. As is the case
most of the time, I wasn't familiar with this machine either. It took me a
few minutes to figure out how to fill up the water tank, how to insert the
capsule and how to start the brewing process itself, even though it only
had <em>one</em> button. I only realized that I had to push the same button again
to stop the process when I had a third, even larger cup filled with a
white creamy liquid. I expected actual coffee to appear at some point, but
it didn't. Just to be sure, I quickly took the Latte Macchiato capsule
apart. There was no brown substance in it. I should rather have tried to
find out about that LAN port on the radio.</p>
<p>I gave the first cup of coffee, which I thought contained most of what
the capsule held, to my GF and hoped for the best. She made some comments
about its color. Apparently, I was not alone in expecting coffee to not
look snow white. Overall, the breakfast was a success anyway, except that
it ended at 08:10.</p>
</li>
<li>
<p>We still had plenty of time to get to the bus station, I thought. Not
so fast. The last visitor to arrive had blocked the entrance road with his
car. Luckily, he was sleeping in the hallway in his own sleeping bag, the
keys of the car beside him. I should not have just taken someone else's
car and driven it without asking. But I wanted to catch that bus, so I
just swapped the cars and put the keys back where I had picked them up.</p>
</li>
<li>
<p>We arrived at the station at 08:42, father not being there. This is
not the kind of country where you can leave your property unchecked
wherever you please, sure that you will find it in the same place and in
the same condition every time. To make matters worse, the station was
frequented by people notorious for borrowing stuff, the same way I did
half an hour earlier, with one exception: they generally do not politely
return it. I believe there will be a point in the future when Slovakia
will no longer be known for this kind of behavior, but for now it sadly
still holds true.</p>
<p>Should I have left the keys inside the car, hoping father would get
there sooner than some curious, unwelcome visitor? And what bus station
platform were we supposed to wait on? I did not have time to really
think. It was not a large bus station, but still, there are 14 platforms
and you can in fact miss the bus by not standing on the right platform.
Not easy, but possible. Fortunately, father came in time and my GF found
out which platform to use.</p>
</li>
<li>
<p>The bus was new, with magnificent details, such as matching curtains of
alternating colors, leather headrests and reading lights in the ceiling
that resembled sports car headlights. All this was pleasing compensation
for the slightly higher price the bus tickets cost us. We left the station
at 09:07 instead of 09:00. It would be no problem; in the major city we
were headed to, we would wait 25 minutes until the next bus we wanted to
take would leave. A 7-minute delay was not a problem at all.</p>
<p>The problem was that the day before, we had learned that the road we
were going to take had been half destroyed less than two weeks earlier.
Heavy rain had caused a nearby river to go wild, taking man-made objects,
as well as small rodents succumbing to inevitable death, with it, without
asking for permission. Even before I knew this, during the planning phase,
I had been considering an alternative route that had one less hop, but
relied on the buses being on time to the minute. This route offered us the
comfort of the shiny new bus we were sitting in for a full two hours, not
just some 20 minutes.</p>
<p>We took the risk of not being able to catch the connecting bus if this
one was delayed for too long. For this purpose, I had even emailed the bus
dispatcher responsible for the connecting bus on this alternate route with
a gentle request to wait for us. He replied that the bus driver had been
informed and would do his best, promising some 4 minutes of waiting.</p>
<p>We had already decided that we were taking that alternative route. I
could not find reliable information on whether the route we wanted to take
in the first place was currently even available. If so, it probably used
some other roads, going right through the heart of the national park. The
bus driver would need to pull off some nice zig-zagging maneuvers to get
us through, considering how hard it was to navigate the now destroyed
road even when it was functional.</p>
<p>Remember that 7-minute delay? I was nervous. This bus was a
long-distance one. It had a planned 15-minute stop in some of the major
cities it goes through, this being one of them. Passengers can use this
stop to refresh themselves. The driver can also use it to reset the delay
accumulated since the last such stop. And although the driver removed the
delay even before we got to the station, and left the station exactly on
time, I knew that when such a timetable shift happens near the station
where we were supposed to take the connecting bus, the 4 minutes we were
promised would not save us.</p>
<p>Long story short, we got to the station at 11:12. Both this bus and the
connecting bus were supposed to leave this city's station exactly at
11:00. We did wonder if the bus driver had waited for us. I felt a little
pity for the people on that bus, spending more time sitting there, not
knowing why the bus was not moving. We may never find out.</p>
</li>
<li>
<p>Suddenly, we were in a city we did not really want to be in, knowing
there was no other connection for another two hours. All the other plans
we had were now broken.</p>
<p>But it was not the end of the world, not at all. It was around
lunchtime, the sun was shining, and there was just enough wind to be
refreshing. We still had enough food and water with us. Two nature-loving,
backpacker spirits that we are, without much thinking we headed into the
nearby park marked on the phone maps.</p>
<p>To our great surprise, it had not one but two different sets of workout
grounds, as well as free-of-charge toilet booths. These booths are
commonly present on construction sites or at summer festivals, yet here
they were in the small park. It also had a playground for children and a
driving school simulation for the kids as well.</p>
<p>I took a few moments to look around, just to realize my girlfriend was
already inspecting one of the workout fields. Somehow we had both packed
swimsuits, so we put them on to avoid making our clothes all sweaty,
turned on some music on a reliable JBL GO and got to work.</p>
<p>Exercising at noon is exhausting. But I had the 50 pull-ups I could
not do in the morning finished before lunch, and I was happy. We still
had plenty of time left. Already in our swimsuits, we dived straight into
the river flowing through the city. The stream was strong and crystal
clear, fed by a spring somewhere in the mountains surrounding the city.
The locals looked upon us with a strange expression on their faces, but we
both felt great. It was so relieving. If it was a health hazard, we were
about to find out.</p>
<p>This is the point where I had the insight which led me to writing this
post down. <strong>I will now focus on the good little details that life
provides even in a bad situation.</strong> Even though our plans were further
disrupted down the road, our mood stayed positive. I had a call with my
father, who was curious whether we had caught that bus. I explained to him
that, fortunately, we had not. He reminded me of similar situations he had
gone through, and how he had tried to pass this winner's attitude on to me
all along, with me only realizing it now. He is my hero!</p>
</li>
</ol>
<p>I will omit the details from the rest of that day. Yeah, we were not
picked up in time to have a family lunch. It was more like a family dinner.
We also did not attend the planned event in the evening. We were trying to
get there to at least show up, but it was already getting late. Heavy rain
together with strong wind swayed us to stop in a small village somewhere
along the road, as we were both really tired. We did get to bed at around
21:45, thus falling asleep around 22:00. Yeah, just as planned, yet the
place was different.</p>
<h2 id="wrapping-up">Wrapping up</h2>
<p>This is a short story describing how I learned to become more positive in
my life. It is not an exceptional story. It might not be that inspirational
to you either. Yet it might be relatable. If you ever feel that nothing is
going according to plan, you might just be too hard on yourself. Give
yourself a little space to enjoy the moment, and you may realize that
failing to hit all the small little daily goals does not mean you are not
on track. Circumstances change all the time. People can behave
irresponsibly. Technology can become unresponsive. Whatever happens, focus
more on your long-term goals. They provide the greater picture. Overall,
if you note your small wins and check them against your long-term goals,
it can surely give you a needed boost when times are bleak. Stay focused
on the positives and the negatives will suddenly become less of an
obstacle. With this attitude, even the most ordinary people can achieve
great results. Stay positive, enjoy the moment.</p>
How your commit history tells you when your post was published2020-07-10T00:00:00+00:002020-12-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/how-commit-history-tells-when-post-published/<p>Have you ever wondered about the sheer amount of problems you encounter
daily? Some of them disappear over time. Some of them may be ignored. Some
of them might be delegated to someone else, if you can afford it. If none
of these options are available, your time and your attention are required.
Usually, these are the kinds of problems that you need to solve personally
to move further in what you are doing.</p>
<p>If you are, for instance, mowing a lawn with your new shiny weed-eater and
you run out of fuel, in order to finish you must refill the tank. If you
are painting a bench and someone sits on it during your break, well, you
have to find a way to remove the freshly created artwork from the paint.
Hopefully the one responsible for the two rounded shapes you are left with
is not sitting there still, otherwise you have multiple problems to solve.
In my recent quest to own a blog filled with glorious posts, I have also
received my fair share of interesting problems to solve.</p>
<h2 id="sapper">Sapper</h2>
<p>About half a year ago, I discovered <a href="https://svelte.dev/">Svelte</a> and
immediately fell in love with it. If you haven't heard about it yet, it is
yet another JavaScript framework. This one falls into the category of
<em>disappearing</em> frameworks. A disappearing framework <em>disappears</em> before
deployment, because there is no virtual DOM. You are left with a lean
bundle of code that modifies the DOM with just enough lines to work. Its
main competitors in the field are of course React, Angular and Vue, but it
has its own set of advantages and disadvantages over them.</p>
<p>These three giants also have their extended cousins that enable more useful
features like SSR. For React it is Next, for Vue it is Nuxt, to name the
most relevant. Similarly, Svelte offers a solution called
<a href="https://sapper.svelte.dev/">Sapper</a>. Sapper is heavily inspired by the
former two, so if you have ever used them before, it will feel familiar.</p>
<p>As you may already suspect from the hints, this blog is also powered by
Sapper. Interestingly, it is pre-built in a way that complies with another
trend in the industry I have been able to identify, called JAMstack. The
name stands for the JavaScript-APIs-Markup stack. With Sapper, you can
also deploy JAMstack apps without much effort, although I am not sure if
there is any mention of it in the docs. Hopefully, you will see the
connection too, once you are familiar with both.</p>
<p>Well, this is not JAMstack in the complete sense yet, because at the time
of writing I still do not use any API here, but bear with me. The
JavaScript + Markdown combo is present. Specifically, the posts themselves
are each stored as a file with the <code>.md</code> extension. My motivation for
choosing this approach is twofold. Firstly, it requires almost no overhead
to set up. There is no database yet. And secondly, you simply open your
editor of choice and write. In my case it is <code>vim</code>, which I use for almost
every kind of text. When you save the file, it is done. No
<code><textarea></code> or copy-and-paste required, as would be the case with
other web-based solutions.</p>
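<p>The "posts as plain files" idea can be sketched in a few lines of Node.
This is not the blog's actual code; the <code>posts</code> directory name and the
returned object shape are assumptions for illustration only:</p>
<pre><code class="language-javascript">// Minimal sketch: each post is just a Markdown file on disk,
// so listing the blog means reading a directory.
const fs = require("fs")
const path = require("path")

const listPosts = (dir = "posts") =>
  fs
    .readdirSync(dir)
    .filter((name) => name.endsWith(".md"))
    .map((name) => ({
      // the file name doubles as the post slug
      slug: path.basename(name, ".md"),
      markdown: fs.readFileSync(path.join(dir, name), "utf8"),
    }))
</code></pre>
<p>A server route can then render these objects however it likes; the point
is that no database is involved.</p>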
<p>But this setup, as efficient as it is for me, also has some significant
drawbacks. Since Sapper is not specifically targeted at blog creators, it
has no built-in functionality for this task. Every feature you need, you
have to bake in yourself. There are no fancy plugins, like WordPress would
offer, for instance. Even though I have used PHP extensively in the past,
for some reason I did not end up deploying WordPress much. It was maybe
two or three times at most, for some side projects that are no longer
alive anyway. I have decided on small incremental changes to mold the
final product into what I envision for the future.</p>
<h2 id="sorting-the-posts-by-date">Sorting the posts by date</h2>
<p>Over time I have come to the conclusion that deployed solutions
generally fall into two categories: monolithic and modular. Monolithic
solutions tend to offer a rich set of features at the cost of harder
customizability. Modular solutions promote minimalism and interoperability
at the cost of the overhead needed to set them up.</p>
<p>As I have already mentioned, with Sapper being my blog driver, I believe I
have chosen the latter. One of the first problems I needed to address to
move further was sorting the posts by the date they were first published.
Currently, since they are just files, they are sorted by their names,
which is definitely not optimal for a blog. At first I did not notice,
because by pure chance the posts mostly sorted themselves correctly, but
as they started to pile up, it became obvious that I needed to fix it.</p>
<p>Sorting posts that are queried from a database tends to be easy.
However, the whole blog is stored in a git repository. This means that all
the dates of creation and modification are stored in it as well.</p>
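<p>A minimal sketch of the idea (the demo repository below is throwaway
scaffolding so the commands have some history to read; the file name is
made up): the oldest commit touching a post yields its published date, the
newest its updated date.</p>

```shell
# Throwaway demo repo with two commits to a post (scaffolding only)
demo=$(mktemp -d) && cd "$demo" && git init -q
echo "first draft" > my-post.md
git add my-post.md
git -c user.name=demo -c user.email=demo@example.com commit -qm "publish post"
echo "an edit" >> my-post.md
git add my-post.md
git -c user.name=demo -c user.email=demo@example.com commit -qm "update post"

# %aI prints the author date in ISO 8601; --follow keeps tracking the
# file across renames. Oldest commit = published, newest = updated.
published=$(git log --follow --format=%aI -- my-post.md | tail -n 1)
updated=$(git log -n 1 --format=%aI -- my-post.md)
echo "published: $published"
echo "updated:   $updated"
```
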
<p>The proof of concept with sources and more technical details can be found
in the
<a href="https://github.com/peterbabic/sources-peterbabic.dev/tree/master/how-commit-history-tells-when-post-published">repository</a>.</p>
<h2 id="conclusion">Conclusion</h2>
<p>The learning-by-doing technique sometimes nudges you towards situations
you do not expect beforehand. And I mean, this happens more often than not
for me. I have demonstrated that you can reliably provide your blog posts
with a created (published) date along with an updated (modified) date by
utilizing the git commit history.</p>
<p>Stay tuned to find out if I am able to go databaseless with this blog. If
you have any comments or thoughts, please let me know.</p>
<p>Relevant sources are available in the
<a href="https://github.com/peterbabic/sources-peterbabic.dev/tree/master/how-commit-history-tells-when-post-published">repository</a>.</p>
You support open-source without knowing it2020-06-30T00:00:00+00:002020-12-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/you-support-open-source-without-knowing-it/<p>In a recent <a href="/blog/becoming-better-presentation-creator/">article</a> I
explained how you can reduce the time needed to download pictures for your
content by eliminating page loads, going from three down to just one. In
that article I mentioned that I currently create presentations in
LibreOffice Impress, namely because it is free open-source software.</p>
<p>Depending on how you look at it, one could say that just by merely using
open-source software, you are supporting it. I would say this reasoning is
not too far-fetched, and I would like to explain why.</p>
<p>Let's consider a situation where you use, for instance, said Impress to
create a file. In case you do <em>not</em> run the suite off a USB stick in some
live environment residing purely in RAM, you probably run it off some
writable non-volatile memory, for instance an SSD. Running programs in such
environments is very common. Your device probably also contains a hard disk
from which the operating system you are using to read this post right now
was loaded. Even if you have some kind of cloud-attached device which just
displays the output, chances are that the remote device contains its own
hard disk.</p>
<p>This was not always the case, however. In the past, there was no such
thing as non-volatile electronic memory. As a side note, back then
virtually all software was open-source anyway, since the most common way to
distribute it was on paper, either as punch cards or later in text form.</p>
<p>Thus, for the sake of the argument, let's assume that all the programs we
run can write to the disk, and that what they write stays there for an
arbitrarily long time. Except for the age of computing infancy and the
current age of live USB environments without persistence enabled, I cannot
think of any other similar cases right now, but for now, these two are
enough.</p>
<h2 id="subtle-traces">Subtle traces</h2>
<p>Now, LibreOffice is doing some work in the background. One of the
non-obvious things it does is related to recovering files in case of an
error, making it harder for you to lose your work. Look for
<code>backup and temporary files</code> in the settings or in a web search. This
functionality is anything but uncommon these days, and not having it built
into the software in one way or another is becoming more the exception than
the rule. The vi editor family does it in the form of <code>swap files</code>.
Surely you can easily find more examples in whatever software you use to do
work.</p>
<p>Even when you do not press the <code>Save</code> button or a hotkey that does it
manually, modern software does this for you, automatically. It usually does
it by default and usually without asking for permission. A lot of times it
does it without you ever knowing, not showing an icon or a notification of
any kind, hiding away unimportant details. The swap file mentioned earlier
is usually a hidden file itself, for instance, which is in my experience
one of the more common conventions. Don't get me wrong - I love it when
software does this! It helps me keep my mental health and sanity when using
technology, because technology might fail for whatever reason. You can
always replace a piece of technology with a newer or better one, but you
cannot easily mitigate the damage done by time spent on work that did not
end up saved.</p>
<p>These subtle traces are precisely the core of the argument I am trying to
express here - that by merely using a piece of software to do work, you are
supporting it, maybe without even knowing. Since what matters to me in this
post is free open-source software specifically, we will stick to this side
of the coin, but obviously it applies to all work-related software. You
see, when a piece of software you use creates such hidden artifacts on the
disk, someone else not familiar with it might start poking around them and
thus learn about the original software that created them, possibly becoming
a supporter or even a contributor in the process. This can happen even
after the software has been removed from the machine.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Take this example with a grain of salt. I am not trying to convince you
of something using what could also be described as the most unrealistic
scenario possible. I also made a distinction between the terms
<code>supporter</code> and <code>contributor</code>. I consider myself a contributor when I
contribute to a project consciously. This could be by creating an issue in
the issue tracker, opting in to usage statistics, updating the
documentation or submitting a pull request with actual code.</p>
<p>By learning about a software project and spreading the word, you also
become a supporter, albeit a conscious one. This post explains how you
might simultaneously become a supporter the instant you become a
<code>user</code>.</p>
<p>If you have any thoughts about this that support, or even better, challenge
my view, do not hesitate to let me know. The bottom line is: do not fear
using open-source software even if it scares you, because you might even do
more good than harm, without even knowing!</p>
Becoming a faster presentation creator2020-06-28T00:00:00+00:002020-12-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/becoming-better-presentation-creator/<p>What is the cost of doing a task that always takes roughly the same
amount of time, over and over? Well, on the one hand, if it is your job and
you do it right, you might even end up being promoted. But even then, it
might be reasonable to think about speeding the task up. I know, sometimes
it is not entirely possible to change the way you do the job you have
landed, for instance in a very strictly controlled work environment, but in
my experience such environments are becoming less common. On the other
hand, if your pay depends on the number of times you complete that task and
you do not do it the fastest way possible, you might end up being paid less
for the time you spend.</p>
<p>If this seems obvious to you, that is alright. A lot of people understand
this. What might not be obvious, however, is the way you optimize the
task.</p>
<h2 id="creating-presentations">Creating presentations</h2>
<p>Lately, I have been creating a bunch of presentations. When I have the
option, I choose free open-source software to do the work. I have heard
many proponents of open source say that one of its advantages is the
community. Let me agree with this opinion by describing my latest
experience.</p>
<p>I started making the presentation by adding some pictures to it. If you
do a presentation that you only show to your colleagues once, after which
it starts gathering dust, never to be opened again by anyone, you might use
any images you find and be done with it. When the presentation has the
potential to be seen by a greater audience, you might start considering
intellectual property. Yes, you can draw the picture yourself, but the peak
of my hand-drawn artistic output is a pig made of simple geometric shapes,
or maybe a house drawn without touching the paper twice (a trick I learned
in kindergarten).</p>
<p>If you are anything like me, chances are drawing beautiful images is not
among your greatest strengths either, and anyone should be focusing on
their strengths. I went for an easier option - sourcing the images online.
There is this site called Wikipedia that points to millions of ready-to-use
images hosted on Wikimedia.</p>
<p>It did not take long for me to realize, there are two problems with this
approach:</p>
<ol>
<li>You have to study the license</li>
<li>The images are rasterized</li>
</ol>
<h2 id="attributing-the-author">Attributing the author</h2>
<p>Yes, generally speaking, it is true that most of the images on Wikimedia
are free to use. The problem is, most of them require you to attribute the
author, unless you have a really terrific lawyer and you are dying to pay
him big time. In other words, you do not need to pay the author, but you
are required to at least give him credit for his work. If I ever needed to
choose between paying the lawyer and paying the author, I would definitely
not think about the lawyer, but your mileage may vary. However, since you
are reading this, chances are we are in the same boat and you would also
rather support the author. The problem I encountered is that there are a
lot of licenses and no single agreed-upon standard for how the attribution
should be done. This is further complicated by the fact that there exist
more types of media (text, video, audio or combinations of them), which,
especially the combinations, require different approaches.</p>
<p>The two most common ways of attributing the author seem to be putting
the credit at the end of your video or presentation for every image you
have used, or putting it right below every single image. I will not go into
detail about which one is better or worse, both of them have advantages and
disadvantages. You can for instance read more
<a href="https://www.impactpresentations.com.au/display-image-attribution-presentation-slides/">here</a>.</p>
<p>After studying some licenses, such as the popular
<a href="http://www.gnu.org/licenses/fdl-1.3.html">GFDL</a>, I concluded that the
credit would not fit below the image, as most of the time it requires more
information than just the author's name for the attribution to be fully
compliant, most notably a link to the original work. I have seen some
content creators on YouTube use a hybrid approach: the image author's name
right below it and the link later in the video description. It seems to
work well, and it is what I intended to do with my presentation as well.</p>
<h2 id="resizing-the-images">Resizing the images</h2>
<p>The first three images on my very first real slide had really pretty
author's names below them, with the links and other details precisely
stored in a separate file for later. I thought that this would also be good
for version control and searching.</p>
<p>But it did not take me long to be overwhelmed by this tactic, because it
is a tedious and laborious process. There is also the other problem I have
already mentioned: the fact that you cannot easily scale the images up. I
am sure you already know how bad it looks when your creation contains
pixelated images. It sort of ruins the whole work, even if it is just a
single image.</p>
<p>I wasn't in a perfect situation. I could find the perfect picture I
needed pretty easily, but I had to keep track of the licensing information
and also could not simply resize it. Another thing that came to my mind was
that all the other aspects of the presentation are in a vector format, for
instance the chosen font or things like bullets in lists. They can be
resized without loss. I do not know what devices my audience uses, but I
thought it would be nice if they could resize it without a thought.</p>
<h2 id="killing-two-birds-with-one-shot">Killing two birds with one shot</h2>
<p>Searching the internet, I found a glorious little project called
<a href="https://openclipart.org">Openclipart</a>. And by little I mean gigantic! It
is a gallery that contains thousands of vectorized images. What is even
better, all of them are free even for commercial use. This means no more
attribution needed. The project has been alive since 2004, but it was
recently down, due to a DoS attack, for at least a year. It is kind of a
coincidence that it is working again just as I need it, because as I later
found out, it went back up fairly recently, only a month before the time of
writing. Let me get this straight - I was hooked immediately! Most of the
images I need I can easily find there. Vectorized and with the most
permissive license.</p>
<p>Immediately, I streamlined the process of obtaining the images. The very
first step was to add a search keyword shortcut to the browser. If your
workflow is browser-oriented, it really saves a ton of time. Using short
keywords for search also helps save keystrokes. I had the letter
<code>o</code> already used by something else, so I went for <code>oc</code>. This way, if you
are already in the browser URL bar, you can just write <code>oc cat</code>, hit enter
and you are brought to yet another internet page containing pictures of
cats. These cats are however vectorized, but I believe it serves the
Internet's purpose. It definitely does for me.</p>
<p>I know, there are even faster ways to get to the search results of a page
of your liking, but they can depend on your desktop environment. If you
ever wondered what features KDE has that you have never tried, using
<a href="https://userbase.kde.org/Plasma/Krunner/en">KRunner</a> to directly
invoke the search might be among them.</p>
<p>As I have already lightly hinted, my office suite of choice was
LibreOffice, making presentations with Impress. It has a Gallery that
contains images you can use for whatever you want. In fact, this is
precisely how I found Openclipart in the first place - trying to find out
what license the LibreOffice gallery images are published under.</p>
<p>There is also a LibreOffice
<a href="https://extensions.libreoffice.org/en/extensions/show/openclipart-org-integration">extension</a>
and a <a href="http://www.youtube.com/watch?v=UJ4WLATXE4M">video</a> for Openclipart. My
heart was in awe! Sadly, it has not been updated since 2016 and does not
work with recent versions of LibreOffice. If you install it, it only shows
up in LibreOffice Writer, not in Impress. Even there, it always returns
exactly zero images as a search result. I did not study why, and went on
improving my browser-centered workflow instead.</p>
<h2 id="my-first-browser-extension">My first browser extension</h2>
<p>Downloading the images to a folder was easy. Search the images, find the
one that fits your needs, click it, wait for the image details page to
load, click download as SVG. Could you improve a process like this? I am
pretty sure you could. There are dozens of ways. The one I settled on was
to eliminate one more required page load. Right-clicking on the image and
choosing the <code>Save</code> option does reduce the unnecessary page loads, but it
downloads the rasterized image, which is not what I wanted. This did,
however, point me in the right direction - adding another context menu
item that would download the vectorized image in SVG format straight
away.</p>
<p>Never having created a browser extension before, I started with the
<a href="https://developer.chrome.com/extensions/samples#search:contextmenus">examples</a>
from the documentation, and it turned out that one did almost exactly what
I needed, so I just modified it and ended up with about 10 lines of
code.</p>
<p>This solution was almost perfect. There was but one caveat: I had to choose
between saving straight into my Downloads folder, thus saving one more
click on the <code>Save as</code> dialog, or utilizing said dialog to keep my last
save location. Not wanting to move files around once saved, I have settled
for the latter. You can find more details in the extension
<a href="https://github.com/peterbabic/openclipart-svg">repository</a>.</p>
<h2 id="conclusion">Conclusion</h2>
<p>For me, this was a nice experience. Reducing the number of page loads
from three to just one for a repetitive task can be a significant time
saver. The more time it takes for the page to load and the higher the
frequency of the task, the more time wasted. In fact, the two are in a
multiplicative relation, so the savings add up quickly. Do the math
yourself.</p>
<p>I could streamline the process even more, for instance by reducing the
time it takes to get into the browser to start the search. As I have noted,
such a solution might depend on other factors, such as the chosen desktop
environment or a particular window manager. If you distro-hop from time to
time, as many people including me do, to fish for features you did not know
about and incorporate them into your workflow, it might not be easily
transferable.</p>
<p>Getting work done sometimes brings curious situations. I hope you could
find yourself in this story, or have learned something. The bottom line is
that you should not be afraid of trying new things, such as building a
browser extension to solve your problem. There might be no one else in the
entire world who will find your solution useful, but you can never know
beforehand.</p>
<h2 id="links">Links</h2>
<ul>
<li><a href="https://lizenzhinweisgenerator.de/">https://lizenzhinweisgenerator.de/</a></li>
<li><a href="https://en.wikiversity.org/wiki/The_GFDL_and_you">https://en.wikiversity.org/wiki/The_GFDL_and_you</a></li>
<li><a href="http://www.gnu.org/licenses/fdl-1.3.html">http://www.gnu.org/licenses/fdl-1.3.html</a></li>
<li><a href="https://commons.wikimedia.org/wiki/Commons:Reusing_content_outside_Wikimedia">https://commons.wikimedia.org/wiki/Commons:Reusing_content_outside_Wikimedia</a></li>
<li><a href="https://www.impactpresentations.com.au/display-image-attribution-presentation-slides/">https://www.impactpresentations.com.au/display-image-attribution-presentation-slides/</a></li>
</ul>
How to migrate BitBucket repositories to Gitea2020-06-18T00:00:00+00:002021-01-25T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/how-to-migrate-bitbucket-to-gitea/<p>Have you ever wondered why people run self-hosted solutions, even though
they copy or imitate services or tools that are battle-hardened, proven to
work well and mostly free?</p>
<p>There is something really powerful behind such an endeavour. It always
takes some time to set up. It does not have that many users to create
interactions with. It may even cost money to run. Sometimes, after you
think "I have finally made it", you discover that there are unresolved
<a href="https://github.com/go-gitea/gitea/issues/3658#">issues</a> that might really
cross your path.</p>
<p>When you finally make it run, another question arises: what do you put
inside?</p>
<h2 id="moving-out">Moving out!</h2>
<p>In this article I will explain a simple process for migrating existing
BitBucket repositories to a gitea server.</p>
<p>Before we begin, make sure that you set gitea up the right way, otherwise
it will get in your way and slow you down.</p>
<p>The functionality I am referring to is "Push to create repository". This
feature is also present in
<a href="https://docs.gitlab.com/ee/gitlab-basics/create-project.html#push-to-create-a-new-project">GitLab</a>,
and if I am not mistaken, it landed there first. I find it super useful,
especially for migrating. You just add the origin to the local repo and
push it. No more need for page loading, signing in or clicking!</p>
<p>Also note that I could not find this feature in BitBucket, so the reverse
process will probably not work the same way, and I am not sure about
GitHub either.</p>
<p>The way to enable it in gitea is to walk straight into your <code>app.ini</code> and
edit everything furiously. I am just kidding. Just add or edit the
<code>[repository]</code> section. The config variables you need are
<code>ENABLE_PUSH_CREATE_USER</code> and / or <code>ENABLE_PUSH_CREATE_ORG</code>, depending on
your use case. Set them to <strong>true</strong>.</p>
<p>This feature is documented, so you can read more about it in the gitea
config
<a href="https://docs.gitea.io/en-us/config-cheat-sheet/#repository-repository">cheat sheet</a>.</p>
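<p>For reference, the relevant <code>app.ini</code> fragment could look like the sketch
below (the file location varies by install; treat the exact path on your
system as an assumption to verify):</p>

```ini
; app.ini - enable push-to-create, then restart gitea
[repository]
ENABLE_PUSH_CREATE_USER = true
ENABLE_PUSH_CREATE_ORG  = true
```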
<h2 id="the-getting-part">The getting part</h2>
<p>Now that the needed settings are in place, the next step is to clone the
repository from BitBucket. This is the easy part, you think. We can use
gitea's migration functionality. Well, at the time of writing, the stable
version of gitea is 1.11.6, which has a note in the migration interface
stating:</p>
<p><strong>Mirroring LFS objects is not supported - use <code>git lfs fetch --all</code> and
<code>git lfs push --all</code> instead.</strong></p>
<p>There is also no change to this in sight. Even though the release
candidate for 1.12 does include an enhancement related to migrations, it is
just logging, enabled by <a href="https://github.com/go-gitea/gitea/pull/11647">#11647</a>. So we
have to do it the hard way. Let's start by cloning:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> clone git@bitbucket.org:alexilaiho/guitar-repository.git
</span></code></pre>
<p>Step back for a moment. Alexi would not be happy to find out that his
guitar strings are actually a submodule and he has his guitar, but without
them. Let's fix it:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> clone</span><span style="color:#bf616a;"> --recurse-submodules</span><span> git@bitbucket.org:alexilaiho/guitar-repository.git
</span></code></pre>
<p>This is the one I have taken from the oh-my-zsh
<a href="https://github.com/ohmyzsh/ohmyzsh/tree/master/plugins/git">git plugin</a>,
but I believe this is quite well known. Look for the <code>gcl</code> alias.</p>
<p>If Alexi's guitar somehow had a development branch, he would have missed it
as well. The solution I have found is to subsequently run the pull command
with the <code>--all</code> argument:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> pull</span><span style="color:#bf616a;"> --all --tags
</span></code></pre>
<p>We expect Alexi would also responsibly tag his guitar releases, so he
would like to have all relevant information pulled in as well, thus the
<code>--tags</code> argument.</p>
<p>Then again, the neck of his guitar is quite a large part and could even be
distributed in a binary format, for instance as a picture in a really huge
resolution. As a professional, he would obviously want his guitar to be
crisp and sharp in every aspect possible. If he had multiple versions of
this guitar neck blob in the repository, all but the latest would not be
present in his cloned repository.</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> lfs fetch</span><span style="color:#bf616a;"> --all
</span></code></pre>
<p>Now his guitar is as complete as in the remote.</p>
<h2 id="the-cleaning-part">The cleaning part</h2>
<p>The repository now has to be prepared to be pushed to its new home. Let's
exchange the origin from the old one to the new one:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> remote remove origin
</span><span style="color:#bf616a;">git</span><span> remote add origin git@alexilaiho-gitea.dev/guitar-repository.git
</span></code></pre>
<p>Note that the repository URI does not need to be pre-existing. Remember,
the repository will be created on push!</p>
<h2 id="the-putting-part">The putting part</h2>
<p>So let's try it:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">git</span><span> push</span><span style="color:#bf616a;"> --all</span><span> origin
</span><span style="color:#bf616a;">git</span><span> push</span><span style="color:#bf616a;"> --tags</span><span> origin
</span><span style="color:#bf616a;">git</span><span> lfs push --all origin master
</span></code></pre>
<p>Now Alexi will find his lovely polished guitar in its new home. Note that
this requires having SSH access working for both repositories.</p>
<h2 id="wrapping-it-up">Wrapping it up</h2>
<p>If the plan is to do this multiple times, it is handy to put it inside a
script. I have made one that looks like this, but feel free to tweak it for
your needs.</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#65737e;">#!/bin/bash
</span><span>
</span><span style="color:#bf616a;">host</span><span>="</span><span style="color:#a3be8c;">git@bitbucket.org:YOUR-BITBUCKET-USER</span><span>"
</span><span style="color:#bf616a;">target</span><span>="</span><span style="color:#a3be8c;">git@YOUR-GITEA-SERVER:you</span><span>"
</span><span>
</span><span style="color:#96b5b4;">echo </span><span>"</span><span style="color:#a3be8c;">Migrating </span><span>$</span><span style="color:#bf616a;">1</span><span>"
</span><span>
</span><span style="color:#bf616a;">git</span><span> clone</span><span style="color:#bf616a;"> --recurse-submodules </span><span>"$</span><span style="color:#bf616a;">host</span><span style="color:#a3be8c;">/</span><span>$</span><span style="color:#bf616a;">1</span><span style="color:#a3be8c;">.git</span><span>"
</span><span style="color:#96b5b4;">cd </span><span>$</span><span style="color:#bf616a;">1
</span><span style="color:#bf616a;">git</span><span> pull</span><span style="color:#bf616a;"> --all --tags
</span><span style="color:#bf616a;">git</span><span> lfs fetch</span><span style="color:#bf616a;"> --all
</span><span style="color:#bf616a;">git</span><span> remote remove origin
</span><span style="color:#bf616a;">git</span><span> remote add origin "$</span><span style="color:#bf616a;">target</span><span style="color:#a3be8c;">/</span><span>$</span><span style="color:#bf616a;">1</span><span style="color:#a3be8c;">.git</span><span>"
</span><span style="color:#bf616a;">git</span><span> push</span><span style="color:#bf616a;"> --all</span><span> origin
</span><span style="color:#bf616a;">git</span><span> push</span><span style="color:#bf616a;"> --tags</span><span> origin
</span><span style="color:#bf616a;">git</span><span> lfs push</span><span style="color:#bf616a;"> --all</span><span> origin master
</span><span style="color:#96b5b4;">cd</span><span> ..
</span></code></pre>
<p>Hopefully it will help you. I could not find an easy way to download
multiple repositories from BitBucket, so this still has some room for
improvement. Please, let me know your opinions.</p>
<h2 id="conclusion">Conclusion</h2>
<p>At the beginning of the article I asked why people like to self-host
their services. We did not find that out here, but we made our lives a
little bit easier with an automated solution to migrate a repository from
BitBucket to a gitea server, saving a little bit of time and nerves.</p>
<p>If you happen to be in a similar position to me and find this post
useful, I am curious why you chose gitea among the alternatives. I
find it to be a pleasant experience so far. If you enjoy it too, spread the
word and help open source software to thrive!</p>
How to enable Git LFS on gitea over nginx reverse proxy2020-06-13T00:00:00+00:002020-12-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/how-enable-lfs-gitea-nginx-reverse-proxy/<p>When setting up a gitea server behind an nginx reverse proxy, you might
run into a problem when pushing files tracked by LFS that presents itself
as the following error:</p>
<p><code>HTTP/1.1 413 Request Entity Too Large</code></p>
<p>The error itself does not specifically hint at which component might be
causing the trouble. Searching the internet proved to be fruitful; however,
the
<a href="https://confluence.atlassian.com/jirakb/attaching-a-file-results-in-request-entity-too-large-320602682.html">solution</a>
I found did not seem related to the problem at first.</p>
<p>It boils down to the fact that, by default (without setting the size
limit explicitly), the limit is 1MB, as can be seen in the
<a href="http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size">docs</a>.
The steps from the solution provided by Atlassian are as follows:</p>
<p>Edit your nginx configuration - the location may be different in your
setup:</p>
<p><code># vi /etc/nginx/nginx.conf</code></p>
<p>Set the size limit, in my case under <code>http</code>:</p>
<p><code>client_max_body_size 100M;</code></p>
<p>Reload nginx:</p>
<p><code>nginx -s reload</code></p>
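<p>Put together, the nginx side of the fix is a one-line addition; a sketch
of the relevant <code>nginx.conf</code> fragment (the surrounding directives are
illustrative only):</p>

```nginx
http {
    # raise the default 1MB request body limit so large LFS objects
    # are not rejected with 413; also valid inside a server or
    # location block for a narrower scope
    client_max_body_size 100M;
}
```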
<p>Make sure that the size is greater than <code>LFS_MAX_FILE_SIZE</code> in your
<code>app.ini</code> gitea config, if it is set to anything other than 0 (no limit).
You can read more about the gitea config in the
<a href="https://docs.gitea.io/en-us/config-cheat-sheet/">cheat sheet</a>.</p>
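<p>On the gitea side, the matching knob lives in <code>app.ini</code>; a sketch,
assuming the <code>[server]</code> section holds the LFS settings as in the cheat
sheet (the value is in bytes):</p>

```ini
; app.ini - 0 means no per-file LFS limit; if you do set a limit,
; keep it below nginx's client_max_body_size
[server]
LFS_MAX_FILE_SIZE = 0
```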
<p>I later found that there is also a closed
<a href="https://github.com/go-gitea/gitea/issues/5805">issue</a> related to a
maximum size, but it does not concern LFS pushes, and the suggested config
also differs from mine.</p>
Why to use labels in docker-compose2020-06-13T00:00:00+00:002020-12-22T00:00:00+00:00
Unknown
https://peterbabic.dev/blog/why-use-labels-docker-compose/<p>Recently I faced an apparently easy-to-solve problem that however became
a little bit trickier in the end. Imagine an application that connects to a
database. Nothing super fancy here. The <code>docker-compose.yml</code>
file could for instance look something like this:</p>
<pre data-lang="yaml" style="background-color:#2b303b;color:#c0c5ce;" class="language-yaml "><code class="language-yaml" data-lang="yaml"><span style="color:#bf616a;">version</span><span>: '</span><span style="color:#a3be8c;">3.2</span><span>'
</span><span style="color:#bf616a;">services</span><span>:
</span><span> </span><span style="color:#bf616a;">db</span><span>:
</span><span>    </span><span style="color:#bf616a;">image</span><span>: </span><span style="color:#a3be8c;">postgres:12
</span><span> </span><span style="color:#bf616a;">environment</span><span>:
</span><span> </span><span style="color:#bf616a;">POSTGRES_DB</span><span>: </span><span style="color:#a3be8c;">${POSTGRES_DB}
</span><span> </span><span style="color:#bf616a;">POSTGRES_PASSWORD</span><span>: </span><span style="color:#a3be8c;">${POSTGRES_PASSWORD}
</span><span> </span><span style="color:#bf616a;">POSTGRES_NON_ROOT_USER</span><span>: </span><span style="color:#a3be8c;">${POSTGRES_USER}
</span><span> </span><span style="color:#bf616a;">POSTGRES_NON_ROOT_USER_PASSWORD</span><span>: </span><span style="color:#a3be8c;">${POSTGRES_PASSWORD}
</span><span> </span><span style="color:#bf616a;">volumes</span><span>:
</span><span> - </span><span style="color:#a3be8c;">${STORAGE_PATH}/${INSTANCE_NAME}/db:/var/lib/postgresql/data
</span><span> </span><span style="color:#bf616a;">app</span><span>:
</span><span> </span><span style="color:#bf616a;">image</span><span>: </span><span style="color:#a3be8c;">myapp:1
</span><span> </span><span style="color:#bf616a;">depends_on</span><span>:
</span><span> - </span><span style="color:#a3be8c;">db
</span><span> </span><span style="color:#bf616a;">environment</span><span>:
</span><span> </span><span style="color:#bf616a;">INSTANCE_NAME</span><span>: </span><span style="color:#a3be8c;">${INSTANCE_NAME}
</span><span> </span><span style="color:#bf616a;">DB_HOST</span><span>: </span><span style="color:#a3be8c;">db
</span><span> </span><span style="color:#bf616a;">DB_NAME</span><span>: </span><span style="color:#a3be8c;">${POSTGRES_DB}
</span><span> </span><span style="color:#bf616a;">DB_USER</span><span>: </span><span style="color:#a3be8c;">${POSTGRES_USER}
</span><span> </span><span style="color:#bf616a;">DB_PASSWORD</span><span>: </span><span style="color:#a3be8c;">${POSTGRES_PASSWORD}
</span><span> </span><span style="color:#bf616a;">links</span><span>:
</span><span> - </span><span style="color:#a3be8c;">db:db
</span><span> </span><span style="color:#bf616a;">volumes</span><span>:
</span><span> - </span><span style="color:#a3be8c;">${STORAGE_PATH}/${INSTANCE_NAME}/data:/srv/web/data
</span><span> </span><span style="color:#bf616a;">ports</span><span>:
</span><span> - "</span><span style="color:#a3be8c;">${INSTANCE_PORT}:8080</span><span>"
</span></code></pre>
<p>We see that the <code>app</code> container runs alongside the <code>db</code> one. After
starting the service with <code>docker-compose up -d</code>, we would like to verify
that both containers are running, along with the ports they expose (or, in the
case of the database, keep hidden). One of the ways to achieve this is to utilize the
<code>--format</code> argument of the <code>docker ps</code> command as follows:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>docker ps --format "table {{.Names}}\t{{.Ports}}"
</span></code></pre>
<p>The result of this command could look like this:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>NAMES PORTS
</span><span>app_1 0.0.0.0:8080->8080/tcp
</span><span>db_1 5432/tcp
</span></code></pre>
<h2 id="adding-more-daemons">Adding more daemons</h2>
<p>One of the advantages of the Docker ecosystem is that you can scale
services horizontally. So if we wanted to run another instance of the
service on the same host, we could just copy the service directory and
tweak the variables in the <code>.env</code> file. Two variables that must be changed
are <code>INSTANCE_NAME</code> and especially <code>INSTANCE_PORT</code>. Changing the port is
needed because otherwise both containers would try to bind to the same port,
which is obviously not what we want.</p>
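<p>To illustrate, the second instance's <code>.env</code> file could then differ from the first one in just these two values (all the names and values here are made up for the example):</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span># the first instance uses INSTANCE_NAME=red and INSTANCE_PORT=8080
</span><span>INSTANCE_NAME=blue
</span><span>INSTANCE_PORT=8081
</span><span># the remaining variables can stay identical
</span><span>STORAGE_PATH=/srv/storage
</span><span>POSTGRES_DB=myapp
</span><span>POSTGRES_USER=myapp
</span><span>POSTGRES_PASSWORD=changeme
</span></code></pre>
<p>Because the volume paths combine <code>STORAGE_PATH</code> with <code>INSTANCE_NAME</code>, the two instances also get separate data directories automatically.</p>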
<p>This daemon can be started the same way as the previous one. To observe
the running containers, we also tweak our <code>docker ps</code> command a little bit,
so that we can differentiate between the containers belonging to the different
services:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">docker</span><span> ps</span><span style="color:#bf616a;"> --format </span><span>"</span><span style="color:#a3be8c;">table {{.ID}}\t{{.Names}}\t{{.Ports}}</span><span>"
</span></code></pre>
<p>Output could be similar to this:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>CONTAINER ID NAMES PORTS
</span><span>0cd27fa1c363 app_1 0.0.0.0:8080->8080/tcp
</span><span>bd2832c8f6fc app_1 0.0.0.0:8081->8080/tcp
</span><span>057a4002ce66 db_1 5432/tcp
</span><span>347a51256c16 db_1 5432/tcp
</span></code></pre>
<p>Note that the apps are bound to ports 8080 and 8081 on the host
system. There is one big problem with this approach, though. Apart from the
container ID, depending on other aspects of the container
configuration, there could be nothing easily accessible in the whole
<code>docker ps</code> output that would help us tell, in a human
readable form, what those containers are all about.</p>
<h2 id="labels-to-the-rescue">Labels to the rescue!</h2>
<p>One way around this problem is to use so called
<a href="https://docs.docker.com/config/labels-custom-metadata/">labels</a>.
Labels allow you to attach metadata to most Docker objects, including:</p>
<ul>
<li>Images</li>
<li>Containers</li>
<li>Local daemons</li>
<li>Volumes</li>
<li>Networks</li>
<li>Swarm nodes</li>
<li>Swarm services</li>
</ul>
<p>Citing the documentation, labels can be used to organize your images,
record licensing information, annotate relationships between containers,
volumes, and networks, or in any way that makes sense for your business or
application.</p>
<p>You can easily specify labels at runtime, but to make them more
persistent, set them inside the Dockerfile via the <code>LABEL</code> instruction:</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#65737e;"># or instance=blue; Dockerfile comments must start at the beginning of a line
</span><span style="color:#bf616a;">LABEL</span><span> instance=red
</span></code></pre>
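<p>After rebuilding the images with their respective labels, a single container's label can also be read back with <code>docker inspect</code>, here assuming the container is named <code>app_1</code>:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span># prints the value of the instance label, e.g. red
</span><span>docker inspect --format '{{ index .Config.Labels "instance" }}' app_1
</span></code></pre>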
<p>After assigning the labels and altering our format parameter to include the
labels (which are hidden by default):</p>
<pre data-lang="bash" style="background-color:#2b303b;color:#c0c5ce;" class="language-bash "><code class="language-bash" data-lang="bash"><span style="color:#bf616a;">docker</span><span> ps</span><span style="color:#bf616a;"> --format </span><span>"</span><span style="color:#a3be8c;">table {{.ID}}\t{{.Names}}\t{{.Ports}}\t{{.Labels}}</span><span>"
</span></code></pre>
<p>The output can look like this:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span>CONTAINER ID NAMES PORTS LABELS
</span><span>0cd27fa1c363 app_1 0.0.0.0:8080->8080/tcp instance=red
</span><span>bd2832c8f6fc app_1 0.0.0.0:8081->8080/tcp instance=blue
</span><span>057a4002ce66 db_1 5432/tcp
</span><span>347a51256c16 db_1 5432/tcp
</span></code></pre>
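<p>Besides being displayed, labels can also be used for filtering, which comes in handy in scripts:</p>
<pre style="background-color:#2b303b;color:#c0c5ce;"><code><span># list only the containers belonging to the red instance
</span><span>docker ps --filter "label=instance=red"
</span></code></pre>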
<p>Now, this is a nice approach when the Dockerfile we use is under our
control, but that is not always the case. I would argue it is more
of an exception than the norm.</p>
<h2 id="dockerfile-is-not-my-own">Dockerfile is not my own</h2>
<p>This issue was solved in Compose file version 3.3. You can look up the
details in its
<a href="https://docs.docker.com/compose/compose-file/compose-file-v3/#labels">documentation</a>.
Adding labels in the Compose file is more convenient when the service is
distributed to you through it.</p>
<p>For completeness, the edited Compose file could look like this:</p>
<pre data-lang="yaml" style="background-color:#2b303b;color:#c0c5ce;" class="language-yaml "><code class="language-yaml" data-lang="yaml"><span style="color:#bf616a;">version</span><span>: '</span><span style="color:#a3be8c;">3.3</span><span>' </span><span style="color:#65737e;"># version bumped to 3.3 or higher
</span><span style="color:#bf616a;">services</span><span>:
</span><span> </span><span style="color:#bf616a;">db</span><span>:
</span><span>    </span><span style="color:#bf616a;">image</span><span>: </span><span style="color:#a3be8c;">postgres:12
</span><span> </span><span style="color:#bf616a;">environment</span><span>:
</span><span> </span><span style="color:#bf616a;">POSTGRES_DB</span><span>: </span><span style="color:#a3be8c;">${POSTGRES_DB}
</span><span> </span><span style="color:#bf616a;">POSTGRES_PASSWORD</span><span>: </span><span style="color:#a3be8c;">${POSTGRES_PASSWORD}
</span><span> </span><span style="color:#bf616a;">POSTGRES_NON_ROOT_USER</span><span>: </span><span style="color:#a3be8c;">${POSTGRES_USER}
</span><span> </span><span style="color:#bf616a;">POSTGRES_NON_ROOT_USER_PASSWORD</span><span>: </span><span style="color:#a3be8c;">${POSTGRES_PASSWORD}
</span><span> </span><span style="color:#bf616a;">volumes</span><span>:
</span><span> - </span><span style="color:#a3be8c;">${STORAGE_PATH}/${INSTANCE_NAME}/db:/var/lib/postgresql/data
</span><span> </span><span style="color:#bf616a;">app</span><span>:
</span><span> </span><span style="color:#bf616a;">image</span><span>: </span><span style="color:#a3be8c;">myapp:1
</span><span> </span><span style="color:#bf616a;">depends_on</span><span>:
</span><span> - </span><span style="color:#a3be8c;">db
</span><span> </span><span style="color:#bf616a;">environment</span><span>:
</span><span> </span><span style="color:#bf616a;">INSTANCE_NAME</span><span>: </span><span style="color:#a3be8c;">${INSTANCE_NAME}
</span><span> </span><span style="color:#bf616a;">DB_HOST</span><span>: </span><span style="color:#a3be8c;">db
</span><span> </span><span style="color:#bf616a;">DB_NAME</span><span>: </span><span style="color:#a3be8c;">${POSTGRES_DB}
</span><span> </span><span style="color:#bf616a;">DB_USER</span><span>: </span><span style="color:#a3be8c;">${POSTGRES_USER}
</span><span> </span><span style="color:#bf616a;">DB_PASSWORD</span><span>: </span><span style="color:#a3be8c;">${POSTGRES_PASSWORD}
</span><span> </span><span style="color:#bf616a;">links</span><span>:
</span><span> - </span><span style="color:#a3be8c;">db:db
</span><span> </span><span style="color:#bf616a;">volumes</span><span>:
</span><span> - </span><span style="color:#a3be8c;">${STORAGE_PATH}/${INSTANCE_NAME}/data:/srv/web/data
</span><span> </span><span style="color:#bf616a;">ports</span><span>:
</span><span> - "</span><span style="color:#a3be8c;">${INSTANCE_PORT}:8080</span><span>"
</span><span> </span><span style="color:#bf616a;">labels</span><span>: </span><span style="color:#65737e;"># labels in Compose file instead of Dockerfile
</span><span>      - "</span><span style="color:#a3be8c;">instance-name=${INSTANCE_NAME}</span><span>"
</span></code></pre>
<p>This way, when displaying labels via the <code>docker ps</code> command from
the previous section, the instance name is also visible.</p>