<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Devops on dev.endevour</title><link>https://devendevour.iankulin.com/tags/devops/</link><description>Recent content in Devops on dev.endevour</description><generator>Hugo</generator><language>en-AU</language><lastBuildDate>Mon, 03 Feb 2025 00:00:00 +0000</lastBuildDate><atom:link href="https://devendevour.iankulin.com/tags/devops/index.xml" rel="self" type="application/rss+xml"/><item><title>Command chaining with NTFY for long running commands</title><link>https://devendevour.iankulin.com/command-chaining-with-ntfy-for-long-running-commands/</link><pubDate>Mon, 03 Feb 2025 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/command-chaining-with-ntfy-for-long-running-commands/</guid><description>&lt;p&gt;&lt;a href="https://ntfy.sh/" target="_blank" rel="noopener"&gt;NTFY&lt;/a&gt; is a great open-source push notification service that&amp;rsquo;s self-hostable or free to use (although I suggest you &lt;a href="https://liberapay.com/ntfy" target="_blank" rel="noopener"&gt;pay for it&lt;/a&gt; as I do). I&amp;rsquo;ve written before about how I use it with &lt;a href="https://devendevour.iankulin.com/uptime-kuma-nfty/"&gt;UptimeKuma&lt;/a&gt; for my uptime monitoring, but another common use is getting notified when long-running, backgrounded commands finish.&lt;/p&gt;
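&lt;p&gt;The &amp;lsquo;chaining&amp;rsquo; of the title is just the shell&amp;rsquo;s &lt;code&gt;&amp;amp;&amp;amp;&lt;/code&gt; and &lt;code&gt;||&lt;/code&gt; operators - something like this sketch, where the backup command is a stand-in but the &amp;ldquo;blog_demo&amp;rdquo; topic is real:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;./long_backup.sh &amp;amp;&amp;amp; curl -d &amp;#34;✅ backup finished&amp;#34; ntfy.sh/blog_demo || curl -d &amp;#34;❌ backup failed&amp;#34; ntfy.sh/blog_demo
&lt;/code&gt;&lt;/pre&gt;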
&lt;p&gt;This magic is possible because we can send an NTFY notification with a simple &lt;code&gt;curl&lt;/code&gt;. For example:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;curl -d &amp;#34;😀 demo push message via NTFY&amp;#34; ntfy.sh/blog_demo
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Since I&amp;rsquo;m subscribed to the &amp;ldquo;blog_demo&amp;rdquo; topic in NTFY, this message will be pushed to my phone and watch:&lt;/p&gt;</description></item><item><title>Moving a Docker image as a file</title><link>https://devendevour.iankulin.com/moving-a-docker-image-as-a-file/</link><pubDate>Mon, 20 Jan 2025 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/moving-a-docker-image-as-a-file/</guid><description>&lt;p&gt;I&amp;rsquo;m having a super annoying problem at the moment: I can&amp;rsquo;t pull images down from DockerHub. If I hotspot my laptop off my phone it works fine, so it&amp;rsquo;s some drama with the home internet connection that rebooting the router does not fix.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve had a couple of different errors including &lt;code&gt;Error response from daemon: Get &amp;quot;https://registry-1.docker.io/v2/&amp;quot;: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)&lt;/code&gt; and &lt;code&gt;Error response from daemon: Get &amp;quot;https://registry-1.docker.io/v2/&amp;quot;: dial tcp: lookup registry-1.docker.io&lt;/code&gt;. I can&amp;rsquo;t actually ping &lt;code&gt;registry-1.docker.io&lt;/code&gt; or &lt;code&gt;hub.docker.com&lt;/code&gt;, although I can open hub.docker.com in a browser, so TCP on ports 80 and 443 is fine - the failures look like ICMP and, judging by the failed DNS lookup, UDP port 53 being blocked.&lt;/p&gt;</description></item><item><title>Updating a deployment on fly.io</title><link>https://devendevour.iankulin.com/updating-a-deployment-on-fly-io/</link><pubDate>Mon, 16 Dec 2024 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/updating-a-deployment-on-fly-io/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/flyio_picture.png" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve had my external UptimeKuma chugging away on &lt;a href="https://fly.io/" target="_blank" rel="noopener"&gt;fly.io&lt;/a&gt;, for free, for months now, and the container image it was based on was a bit out of date, so I wanted to update it. I hadn&amp;rsquo;t looked at fly.io for months, and couldn&amp;rsquo;t really recall what I&amp;rsquo;d done to create it.&lt;/p&gt;
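&lt;p&gt;For anyone following along, a minimal &lt;code&gt;fly.toml&lt;/code&gt; sketch for an app like this - every value below is a made-up example, not from my actual deployment:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;# fly.toml - hypothetical values throughout
app = &amp;#34;my-uptime-kuma&amp;#34;
primary_region = &amp;#34;syd&amp;#34;

[build]
  image = &amp;#34;louislam/uptime-kuma:1&amp;#34;

[http_service]
  internal_port = 3001
  force_https = true

[mounts]
  source = &amp;#34;kuma_data&amp;#34;
  destination = &amp;#34;/app/data&amp;#34;
&lt;/code&gt;&lt;/pre&gt;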
&lt;p&gt;The way this works is that you create a fly.toml file that sets out the details of your app. From memory I think I used the one from the docs and gave it a unique name, the name of the Docker image, the port, the datacentre location, and the directory for the persisted data. Then you run &lt;code&gt;fly deploy&lt;/code&gt; from the directory with the toml file (having already installed the CLI tool and logged in) and you&amp;rsquo;re in business.&lt;/p&gt;</description></item><item><title>Controlling Docker container startup order</title><link>https://devendevour.iankulin.com/controlling-docker-container-startup-order/</link><pubDate>Mon, 02 Dec 2024 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/controlling-docker-container-startup-order/</guid><description>&lt;p&gt;A very common scenario when running services in Docker containers is that one service is going to depend on another. The most common example is going to be if you have a service that needs a database - you&amp;rsquo;re going to want the container running the database to be ready for business before the service that needs it starts. 
And conversely, when you shut things down, you want to stop the service before you kill the database or you may lose some data.&lt;/p&gt;</description></item><item><title>Fixing TLS for wget in BusyBox</title><link>https://devendevour.iankulin.com/fixing-tls-for-wget-in-busybox/</link><pubDate>Mon, 25 Nov 2024 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/fixing-tls-for-wget-in-busybox/</guid><description>&lt;p&gt;I&amp;rsquo;ve been containerising my static websites with BusyBox (because it&amp;rsquo;s small), and in &lt;a href="https://devendevour.iankulin.com/fancier-website-in-a-docker-container/"&gt;an earlier post&lt;/a&gt; showed how to get the container to update parts of the site by reaching out with &lt;code&gt;wget&lt;/code&gt; to download resources from elsewhere and save them inside the container where the &amp;lsquo;static&amp;rsquo; site is served from. I&amp;rsquo;d done this by including a bash script in the container with the &lt;code&gt;wget&lt;/code&gt; in a loop with a &lt;code&gt;sleep&lt;/code&gt;, then starting both the script and the httpd server from the CMD line of the &lt;code&gt;dockerfile&lt;/code&gt;.&lt;/p&gt;
&lt;h3 id="but"&gt;But&amp;hellip;&lt;/h3&gt; &lt;p&gt;A couple of my websites had very minor &amp;lsquo;dynamic&amp;rsquo; content. One was pulling down the local temperature from OpenWeather, then exposing a cut-down version of that as a REST endpoint so all my servers could grab it without me being rate-limited by OpenWeather for abusing my free API key. Another one re-hosted an image that changes a couple of times a day from an unreliable service.&lt;/p&gt;</description></item><item><title>Website in a Docker Container</title><link>https://devendevour.iankulin.com/website-in-a-docker-container/</link><pubDate>Mon, 11 Nov 2024 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/website-in-a-docker-container/</guid><description>&lt;p&gt;Having figured out how to use the GitHub package registry, I was a bit inspired by &lt;a href="https://lipanski.com/posts/smallest-docker-image-static-website" target="_blank" rel="noopener"&gt;this blog post&lt;/a&gt; from Florin Lipan to deliver all my little static websites as Docker containers. I&amp;rsquo;m not as focused as he is about making them tiny, but I did steal the idea of using &lt;a href="https://busybox.net/about.html" target="_blank" rel="noopener"&gt;BusyBox&lt;/a&gt; httpd for serving them, resulting in about 4MB containers. That&amp;rsquo;s small enough for me, and since they are all very similar, there&amp;rsquo;s a fair bit of layer reuse going on.&lt;/p&gt;</description></item><item><title>Using the GitHub Container Registry</title><link>https://devendevour.iankulin.com/using-the-github-container-registry/</link><pubDate>Mon, 04 Nov 2024 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/using-the-github-container-registry/</guid><description>&lt;p&gt;As the number of little projects I&amp;rsquo;m running on VPSs grows, I need to have a regimented system for managing all that. 
I could be using something like &lt;a href="https://coolify.io/" target="_blank" rel="noopener"&gt;Coolify&lt;/a&gt;, but, at least for the moment, I&amp;rsquo;d rather build my own system.&lt;/p&gt;
&lt;p&gt;Currently my system is Nginx Proxy Manager (dockerised) in front of each app. If it&amp;rsquo;s a static website, that&amp;rsquo;s another dockerised Nginx, started with a compose file and with &lt;code&gt;www&lt;/code&gt; and &lt;code&gt;conf&lt;/code&gt; sub-directories that I&amp;rsquo;ve &lt;code&gt;git pull&lt;/code&gt;ed from the project. It&amp;rsquo;s not pretty.&lt;/p&gt;</description></item><item><title>rsync between Synology NAS</title><link>https://devendevour.iankulin.com/rsync-between-synology-nas/</link><pubDate>Mon, 30 Sep 2024 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/rsync-between-synology-nas/</guid><description>&lt;p&gt;A while ago, I devised a complicated system where I could drop files in a web interface running on an LXD container and the files would then magically appear in a directory on a remote NAS in the morning. It turned out to not be very robust, and I gave up on it after a while.&lt;/p&gt;
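&lt;p&gt;All that machinery reduced to roughly one command - a sketch, with made-up paths and hostname:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;rsync -avz --delete /volume1/dropbox/ admin@backup-nas:/volume1/dropbox/
&lt;/code&gt;&lt;/pre&gt;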
&lt;p&gt;Also, really there should be no need for it - underneath, it was just using &lt;code&gt;rsync&lt;/code&gt; to move the files, so why not just do that direct from one NAS to another? Well, mainly because my NASs are all Synology - which I love, and they&amp;rsquo;ve been great, but in an effort to make them usable by muggles, Synology tend to somewhat complicate things for Linux command line wizards.&lt;/p&gt;</description></item><item><title>Containerised NGINX Proxy Manager &amp;amp; the 502 error</title><link>https://devendevour.iankulin.com/containerised-nginx-proxy-manager-the-502-error/</link><pubDate>Mon, 16 Sep 2024 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/containerised-nginx-proxy-manager-the-502-error/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/screen-shot-2024-08-24-at-6.46.49-am.png" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;If you&amp;rsquo;re used to running NGINX Proxy Manager in front of your web apps, and switch to running it in a container, you&amp;rsquo;re going to need to learn a little about Docker networks to get everything connected. If you just do your regular setup, and direct the proxy for an address to &lt;code&gt;127.0.0.1:&amp;lt;some port&amp;gt;&lt;/code&gt;, that loopback address refers to the proxy&amp;rsquo;s own container rather than your host, and you&amp;rsquo;ll visit your page to find the &amp;ldquo;502 Bad Gateway openresty&amp;rdquo; message.&lt;/p&gt;</description></item><item><title>Moving from Docker volumes to bind mounts</title><link>https://devendevour.iankulin.com/moving-from-docker-volumes-to-bind-mounts/</link><pubDate>Mon, 05 Aug 2024 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/moving-from-docker-volumes-to-bind-mounts/</guid><description>&lt;p&gt;&lt;a href="https://placesjournal.org/article/all-is-lost-notes-on-broken-world-design/" target="_blank" rel="noopener"&gt;&lt;img src="https://devendevour.iankulin.com/images/friedman-moe-lost-6.jpg" alt="" class="img-responsive"&gt; &lt;/a&gt; &lt;/p&gt;
&lt;p&gt;When I started with Docker, the docs seemed to suggest that using Docker volumes was a good thing. With a Docker volume, you just create the volume and Docker manages the rest. You don&amp;rsquo;t have to worry about where it is, or really ever think about it.&lt;/p&gt;
&lt;p&gt;Here&amp;rsquo;s a docker-compose for &lt;a href="https://github.com/louislam/uptime-kuma/wiki" target="_blank" rel="noopener"&gt;Uptime Kuma&lt;/a&gt; using a volume.&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    volumes:
      - kuma_data:/app/data
    ports:
      - 80:3001
    restart: unless-stopped

volumes:
  kuma_data:
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This is telling Docker we want to create a volume called &amp;ldquo;kuma_data&amp;rdquo; and then map it into the container file system at &lt;code&gt;/app/data&lt;/code&gt;.&lt;/p&gt;</description></item><item><title>dockerfile - CMD vs ENTRYPOINT</title><link>https://devendevour.iankulin.com/dockerfile-cmd-vs-entrypoint/</link><pubDate>Mon, 22 Jul 2024 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/dockerfile-cmd-vs-entrypoint/</guid><description>&lt;p&gt;There are two entries we often have at the end of a &lt;code&gt;dockerfile&lt;/code&gt; (which is the file that tells Docker how an image is to be built).&lt;/p&gt;
&lt;p&gt;They are similar in that when the container is launched from an image, these commands will be executed. For example, both of the dockerfiles below will print &amp;ldquo;Hello World&amp;rdquo; when run.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;doc-entry&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;FROM debian:stable-slim
ENTRYPOINT [&amp;#34;echo&amp;#34;, &amp;#34;Hello World from ENTRYPOINT&amp;#34;]
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;code&gt;doc-cmd&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;FROM debian:stable-slim
CMD [&amp;#34;echo&amp;#34;, &amp;#34;Hello World&amp;#34;]
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/screen-shot-2024-07-03-at-1.45.26-pm.png" alt="" class="img-responsive"&gt; &lt;/p&gt;</description></item><item><title>User environment variables are not available in cron</title><link>https://devendevour.iankulin.com/user-environment-variables-are-not-available-in-cron/</link><pubDate>Mon, 15 Jul 2024 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/user-environment-variables-are-not-available-in-cron/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/screen-shot-2024-07-02-at-4.13.13-pm.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m used to using the &lt;code&gt;docker-compose.yaml&lt;/code&gt; or &lt;code&gt;dockerfile&lt;/code&gt; to set environment variables for containers running my apps, but ran into an issue recently where the variable seemed to be set some of the time, but at others it didn&amp;rsquo;t appear to exist.&lt;/p&gt;
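&lt;p&gt;One common fix - not necessarily the one I landed on - is to declare the variables in the crontab itself, since most &lt;code&gt;cron&lt;/code&gt; implementations accept plain &lt;code&gt;NAME=value&lt;/code&gt; lines at the top. A sketch, with hypothetical names and values:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;# crontab - variables declared here are visible to the jobs below
API_KEY=not-a-real-key
*/5 * * * * /usr/src/app/fetch-weather.sh &amp;gt;&amp;gt; /var/log/fetch.log 2&amp;gt;&amp;amp;1
&lt;/code&gt;&lt;/pre&gt;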
&lt;p&gt;I had a script set to run by &lt;code&gt;cron&lt;/code&gt; inside the container, and it turns out that the environment variables set for the container are available in the user space, but not in &lt;code&gt;cron&lt;/code&gt;, even if running with that user&amp;rsquo;s permissions. This is probably old news to established Linux users but it threw me for a while. I&amp;rsquo;d &lt;code&gt;exec&lt;/code&gt; into the container and the script would work perfectly, then wait another minute for &lt;code&gt;cron&lt;/code&gt; to run it and it would fail 🤦‍♀️. It was exacerbated by my discovery that I didn&amp;rsquo;t know how to console.log debug from inside a container cron job as well - the subject of an earlier post.&lt;/p&gt;</description></item><item><title>SSH login notification</title><link>https://devendevour.iankulin.com/ssh-login-notification/</link><pubDate>Mon, 13 May 2024 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/ssh-login-notification/</guid><description>&lt;p&gt;&lt;a href="https://unsplash.com/photos/brown-bell-on-white-concrete-wall-4VRzuA4UxSY?utm_content=creditShareLink&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash" target="_blank" rel="noopener"&gt;&lt;img src="https://devendevour.iankulin.com/images/nick-fewings-4vrzua4uxsy-unsplash.jpg" alt="Photo by Nick Fewings Unsplash
" class="img-responsive"&gt; &lt;/a&gt; &lt;/p&gt;
&lt;p&gt;My VPSs are usually locked down so just ports 80 &amp;amp; 443 (for the web server) and 22 (for ssh) are open. That&amp;rsquo;s great for reducing the attack surface, but having ssh open is a potentially disastrous vulnerability. For this reason I often close port 22 at the cloud firewall level as well, but it has to be open when I&amp;rsquo;m making changes or running the weekly ansible update/cleanup playbooks.&lt;/p&gt;</description></item><item><title>Upgrading to Forgejo 7.0.1</title><link>https://devendevour.iankulin.com/upgrading-to-forgejo-7-0-1/</link><pubDate>Mon, 06 May 2024 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/upgrading-to-forgejo-7-0-1/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/screen-shot-2024-04-28-at-1.08.21-pm.png" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s not that long ago that &lt;a href="https://devendevour.iankulin.com/my-web-app-update-process/"&gt;I wrote about&lt;/a&gt; doing routine upgrades on containerised web apps, using Forgejo (my git repository manager) as the example as I upgraded it between patch versions of 1.21. Then, a few days later, they dropped 7.0.0.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://forgejo.org/2024-04-release-v7-0/" target="_blank" rel="noopener"&gt;They say&lt;/a&gt; the major version jump is due to it being an LTS (long term support) release, and changing to &lt;a href="https://semver.org/spec/v2.0.0.html" target="_blank" rel="noopener"&gt;semantic versioning 2.0.0&lt;/a&gt; , but that doesn&amp;rsquo;t quite explain it to me, and I assume this is partly signifying the fork&amp;rsquo;s drift away from the gitea codebase. In any case, the upgrade to 7.0.0 it does involve some breaking changes, and signifies to me that a lot has been on, which makes me keen to wait for a patch release (I&amp;rsquo;m always keen for other people to debug these things) which has now landed.&lt;/p&gt;</description></item><item><title>Peek inside a Docker image</title><link>https://devendevour.iankulin.com/peek-inside-a-docker-image/</link><pubDate>Mon, 29 Apr 2024 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/peek-inside-a-docker-image/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/screen-shot-2024-04-25-at-10.20.28-am.png" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;A &amp;lsquo;dockerfile&amp;rsquo; contains all the instructions to build a Docker image. Here&amp;rsquo;s my first draft for a project I&amp;rsquo;m working on:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;FROM node:20
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [&amp;#34;node&amp;#34;, &amp;#34;server.js&amp;#34;]
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;code&gt;COPY . .&lt;/code&gt; is copying all of the files in my project into the working directory of the image so they can be run. Of course we don&amp;rsquo;t need them all for the app - for example the &lt;code&gt;node_modules&lt;/code&gt; directory will be created when we &lt;code&gt;npm install&lt;/code&gt;, so there&amp;rsquo;s no need to copy that, and I don&amp;rsquo;t need all my dot files in the container.&lt;/p&gt;</description></item><item><title>NGINX Proxy Manager</title><link>https://devendevour.iankulin.com/nginx-proxy-manager/</link><pubDate>Mon, 15 Apr 2024 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/nginx-proxy-manager/</guid><description>&lt;p&gt;I&amp;rsquo;ve mentioned using NGINX as an &lt;a href="https://devendevour.iankulin.com/nginx-in-front-of-a-node-js-app/"&gt;interface between the internet and a service&lt;/a&gt; a while ago. This works by all incoming traffic coming to NGINX, and NGINX determining which service the traffic should go to (from the NGINX config files), then acting as a middleman. This functionality is generally referred to as a &amp;lsquo;reverse proxy&amp;rsquo;.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/nginx.png" alt="Terrible drawing of NGINX proxying requests off to different services." class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;This is nice for a few reasons:&lt;/p&gt;</description></item><item><title>My Web App Update Process</title><link>https://devendevour.iankulin.com/my-web-app-update-process/</link><pubDate>Mon, 01 Apr 2024 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/my-web-app-update-process/</guid><description>&lt;p&gt;I&amp;rsquo;ve settled on a very standard, reproducible setup for services in my homelab. This post looks at that, then runs through the update I did today to Forgejo which only took a few minutes and felt relatively risk free.&lt;/p&gt;
&lt;h3 id="standard-setups"&gt;Standard Setups&lt;/h3&gt; &lt;p&gt;My system is based around Proxmox. I have three physical machines - one for production apps, a production spare, and a development/testbed machine. A Synology NAS serves for backups. Moving a VM or LXC between the machines is trivial; but it&amp;rsquo;s done manually - the machines are not clustered for high availability.&lt;/p&gt;</description></item><item><title>Deploying a Node app in Docker</title><link>https://devendevour.iankulin.com/deploying-a-node-app-in-docker/</link><pubDate>Sun, 31 Mar 2024 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/deploying-a-node-app-in-docker/</guid><description>&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Cargo_ship#/media/File:Cargo_Ship_Puerto_Cortes.jpg" target="_blank" rel="noopener"&gt;&lt;img src="https://devendevour.iankulin.com/images/cargo_ship_puerto_cortes.jpg" alt="" class="img-responsive"&gt; &lt;/a&gt; &lt;/p&gt;
&lt;p&gt;When I wrote the install instructions for mdserver (a little Markdown server Node app) on its &lt;a href="https://github.com/IanKulin/mdserver" target="_blank" rel="noopener"&gt;GitHub page&lt;/a&gt; it was something like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Have node.js installed and working&lt;/li&gt;
&lt;li&gt;Clone the repo&lt;/li&gt;
&lt;li&gt;Start with &lt;code&gt;npm start&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Which is great if you know &lt;a href="https://devendevour.iankulin.com/installing-a-node-app-on-a-server/"&gt;how to do those things&lt;/a&gt; (they are bread and butter to a web dev) but not if you&amp;rsquo;re a self-hoster who just wants a web server that converts markdown to HTML on the fly. For any situation where you just want to use the app, what you probably want is a Docker image of the app.&lt;/p&gt;</description></item><item><title>Hosting Your Own Docker Registry</title><link>https://devendevour.iankulin.com/hosting-your-own-docker-registry/</link><pubDate>Mon, 25 Mar 2024 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/hosting-your-own-docker-registry/</guid><description>&lt;p&gt;&lt;a href="https://unsplash.com/photos/architectural-photography-of-cargo-containers-stack-hP4ZiN1_kdk?utm_content=creditShareLink&amp;amp;utm_medium=referral&amp;amp;utm_source=unsplash" target="_blank" rel="noopener"&gt;&lt;img src="https://devendevour.iankulin.com/images/tri-eptaroka-mardiana-hp4zin1_kdk-unsplash.jpg" alt="Photo by Tri Eptaroka Mardianam on Unsplash
" class="img-responsive"&gt; &lt;/a&gt; &lt;/p&gt;
&lt;p&gt;The Docker &lt;a href="https://docs.docker.com/subscription/core-subscription/details/" target="_blank" rel="noopener"&gt;Personal (ie free tier) plan&lt;/a&gt; currently allows one private repository, but even if you want to pay for the next level where you can have unlimited repositories, you may still want to host your own private registry - it&amp;rsquo;s going to be quicker inside your network, and you won&amp;rsquo;t run up against Docker&amp;rsquo;s pull/push limits if you are hammering it with your CI/CD system.&lt;/p&gt;</description></item><item><title>Beginning Node App Security</title><link>https://devendevour.iankulin.com/beginning-node-app-security/</link><pubDate>Fri, 16 Feb 2024 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/beginning-node-app-security/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/sciacqualani_digital_paint_illustration_of_padlock_in_a_cyber_w_6a902b1c-29a3-4f98-9f6b-411d9594550c.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;Since I&amp;rsquo;m using Tailscale to painlessly manage all my networking on the homeserver here and my remotes, I&amp;rsquo;ve had the luxury of being a bit casual about the security of my internal apps and self hosted dev tools. I&amp;rsquo;m currently iterating on a web app that requires public access, and is therefore up on a VPS and exposed to all the evils of the open internet.&lt;/p&gt;
&lt;p&gt;I am in no way a security expert, but here&amp;rsquo;s a few of the (reasonably simple) steps I&amp;rsquo;ve taken to secure my node app.&lt;/p&gt;</description></item><item><title>Fly.io, Uptime Kuma &amp;amp; scraping a status page</title><link>https://devendevour.iankulin.com/fly-io-uptime-kuma-scraping-a-status-page/</link><pubDate>Fri, 02 Feb 2024 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/fly-io-uptime-kuma-scraping-a-status-page/</guid><description>&lt;p&gt;&lt;a href="https://dribbble.com/shots/5657880-Fly-io-Logo" target="_blank" rel="noopener"&gt;&lt;img src="https://devendevour.iankulin.com/images/c1fef772e2dca5e1ab8c812f465c95a8.png" alt="" class="img-responsive"&gt; &lt;/a&gt; &lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve been aware, since I set up &lt;a href="https://devendevour.iankulin.com/uptime-kuma-nfty/"&gt;Uptime Kuma&lt;/a&gt; for my monitoring, that having an instance on my local network monitoring my VPS websites wasn&amp;rsquo;t ideal. The main reason is that the flakiest part of my infrastructure is my 4G home internet, so if that goes down I have no website monitoring, and even if I did, the notifications couldn&amp;rsquo;t get out.&lt;/p&gt;
&lt;p&gt;Of course, it would also be a simple matter to run an instance on the VPS that I host the sites on, but that has a similar problem in that if the VPS goes down, so does my monitoring of the VPS. What I really need is a third, independent space to run an instance.&lt;/p&gt;</description></item><item><title>Getting Your Vite React App to Work on Github Pages</title><link>https://devendevour.iankulin.com/getting-your-vite-react-app-to-work-on-github-pages/</link><pubDate>Fri, 26 Jan 2024 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/getting-your-vite-react-app-to-work-on-github-pages/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/combined.png" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;One of the many cool things about GitHub is &lt;a href="https://pages.github.com" target="_blank" rel="noopener"&gt;GitHub Pages&lt;/a&gt; - the free web hosting Microsoft gives you while they vacuum up &lt;a href="https://docs.github.com/en/copilot/overview-of-github-copilot/about-github-copilot-individual" target="_blank" rel="noopener"&gt;your code for Copilot&lt;/a&gt; training. Each repository you keep there can have pages at &lt;code&gt;&amp;lt;your-github-username&amp;gt;.github.io/&amp;lt;repo-name&amp;gt;&lt;/code&gt;.&lt;/p&gt;
&lt;h3 id="github"&gt;GitHub&lt;/h3&gt; &lt;p&gt;To enable this, you need to go into the settings for the repository - look down the left for &amp;ldquo;Pages&amp;rdquo;.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/screen-shot-2023-12-31-at-1.58.05-pm.png" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s possible to have it based on a complicated GitHub action (where your build step happens on GitHub when you push your code), but the easiest thing is just to have it deployed from a branch. To do this you choose which branch (usually main) and whereabouts in the main branch your HTML is. The choices are in the root of your project, or in the &lt;code&gt;/docs&lt;/code&gt; directory. I&amp;rsquo;ve chosen the &lt;code&gt;/docs&lt;/code&gt; directory in the screenshot above, since my messy React project is in the root.&lt;/p&gt;</description></item><item><title>Using LXC templates in Proxmox</title><link>https://devendevour.iankulin.com/using-lxc-templates-in-proxmox/</link><pubDate>Sun, 24 Dec 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/using-lxc-templates-in-proxmox/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/unagi911_identical_female_triplets_sit_in_three_large_silver_do_d51d8006-cd33-4934-b7ab-988aecc5da7d.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;I wrote a couple of weeks ago about a &lt;a href="https://devendevour.iankulin.com/new-self-hosted-service-workflow/"&gt;standard workflow&lt;/a&gt; I use to spin up a web service in an LXC container to add to my self-hosted collection of services. It went a bit like: do this, and then this, then this other thing. Whenever you find yourself repeating a set of steps like this, it&amp;rsquo;s usually a sign that you should be automating it. Not just to save time (although this is a key benefit) but also to improve repeatability and to avoid introducing errors.&lt;/p&gt;</description></item><item><title>Gogs, Gitea, Forgejo</title><link>https://devendevour.iankulin.com/gogs-gitea-forgejo/</link><pubDate>Mon, 18 Dec 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/gogs-gitea-forgejo/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/img_7071-1.png" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve been really pleased with &lt;a href="https://devendevour.iankulin.com/tags/gogs/"&gt;Gogs&lt;/a&gt; - it&amp;rsquo;s lightweight, was simple to spin up, and has worked perfectly. But then this morning on Mastodon, there&amp;rsquo;s a &lt;a href="https://mastodon.social/@Codeberg@social.anoxinon.de/111471407276450348" target="_blank" rel="noopener"&gt;post from @Codeberg.org&lt;/a&gt; describing a security vulnerability in their Git hosting project Forgejo. This issue also apparently affects Gitea and Gogs - what&amp;rsquo;s up with that?&lt;/p&gt;
&lt;p&gt;I had actually spent a bit of time comparing Gogs and Gitea before deciding on Gogs. I&amp;rsquo;d heard of people running Gitea over the past year or so, but a Lemmy post I&amp;rsquo;d read suggested Gogs was the one popular with self-hosters. My first impression was that Gitea was more focused on CI/CD and seemed to have a more complicated install process.&lt;/p&gt;</description></item><item><title>Git - pushing to two remotes</title><link>https://devendevour.iankulin.com/git-pushing-to-two-remotes/</link><pubDate>Fri, 15 Dec 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/git-pushing-to-two-remotes/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/tanjian1998_an_ai_humanoid_pushing_a_shopping_cart_with_that_ha_5eceff04-704f-403d-af6d-46fd9ba57909.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;I am loving running a local Gogs instance - it&amp;rsquo;s nice pushing my git repos to a totally private hub that I know is backed up with all my other self-hosted infrastructure.&lt;/p&gt;
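&lt;p&gt;The trick the title refers to is that git happily accepts multiple push URLs on one remote, so a single &lt;code&gt;git push&lt;/code&gt; can update both hubs. A sketch - both URLs here are placeholders, not my real repos:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;git remote set-url --add --push origin git@gogs.local:ian/project.git
git remote set-url --add --push origin git@github.com:IanKulin/project.git
&lt;/code&gt;&lt;/pre&gt;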
&lt;p&gt;Of course, there are good reasons to have code in GitHub as well - my build-in-public philosophy, the vague possibility that some of it might be useful to someone, my contribution to our future AI overlords, and when I need to make some code linkable - for example from one of these posts. And of course there&amp;rsquo;s this bit of social-engineering which I assume was inspired by the bathroom decor in &lt;a href="https://i.pinimg.com/originals/94/23/85/9423854153f55938c454a061ad5462fe.gif" target="_blank" rel="noopener"&gt;Veronica Mars&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>New Self-Hosted Service Workflow</title><link>https://devendevour.iankulin.com/new-self-hosted-service-workflow/</link><pubDate>Sun, 03 Dec 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/new-self-hosted-service-workflow/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/es047_illustration_of_a_workflow_with_only_four_text_boxes_with_b026526e-30b7-45c7-9491-080adc1594ce.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve developed a bit of a workflow for setting up a new service of some type on the homelab. Installing it is the obvious thing, but I also have a few quality of life things I do to make it a full production-quality part of my installation. I thought it might be helpful to run through those things using a recent example of adding &lt;a href="https://www.audiobookshelf.org/" target="_blank" rel="noopener"&gt;audiobookshelf&lt;/a&gt; .&lt;/p&gt;
&lt;h3 id="audiobookshelf"&gt;audiobookshelf&lt;/h3&gt; &lt;p&gt;&lt;a href="https://www.audiobookshelf.org/" target="_blank" rel="noopener"&gt;audiobookshelf&lt;/a&gt; is a web based system for viewing, playing, downloading and/or generally managing your audio books. I&amp;rsquo;ve been an &lt;a href="https://www.audible.com.au/" target="_blank" rel="noopener"&gt;Audible&lt;/a&gt; user/subscriber, but recently got grumpy at them about something - I think I had paused my subscription, and my downloaded books were still available on my phone. I was halfway through one, upgraded the app, and then wasn&amp;rsquo;t able to play the book without re-subscribing. That might not be exactly right, but it was some type of frustrating carry on like that.&lt;/p&gt;</description></item><item><title>Ansible - Importing a Playbook</title><link>https://devendevour.iankulin.com/ansible-importing-a-playbook/</link><pubDate>Thu, 30 Nov 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/ansible-importing-a-playbook/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/billyoblivion_intricate_and_highly_detailed_portable_ansible_la_c7e1c515-a2e6-4fef-b3c5-2d35e04ba09e.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;&lt;a href="https://devendevour.iankulin.com/tags/ansible/"&gt;Ansible&lt;/a&gt; is a system for automating server tasks, and these tasks are written in a special yaml file called a playbook. I had need to call one playbook from another today and learned a couple of things.&lt;/p&gt;
&lt;h3 id="plays-vs-tasks"&gt;Plays vs Tasks&lt;/h3&gt; &lt;p&gt;In Ansible we run &lt;em&gt;tasks&lt;/em&gt;. A group of tasks run against one particular sets of hosts is called a &lt;em&gt;play&lt;/em&gt;. Here is a playbook with one play, and two tasks:&lt;/p&gt;</description></item><item><title>Building Docker images for multiple architectures</title><link>https://devendevour.iankulin.com/building-docker-images-for-multiple-architectures/</link><pubDate>Mon, 20 Nov 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/building-docker-images-for-multiple-architectures/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/featured-image-shipping-containers.jpeg.webp" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;My little mdserver app has been a good way for me to start experimenting with the devops side of things, especially building for Docker. Since I wanted to make the Docker image available for ARM Linux &amp;amp; x86 Linux I had a janky shell script that looked like this:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;#!/bin/bash

# Extract the version number from package.json using jq
VERSION=$(jq -r .version package.json)

docker build --platform linux/amd64 -t iankulin/mdserver:$VERSION -t iankulin/mdserver:latest .
docker build --platform linux/arm64 -t iankulin/mdserver:arm64-$VERSION -t iankulin/mdserver:arm64-latest .

docker push iankulin/mdserver:arm64-$VERSION 
docker push iankulin/mdserver:arm64-latest 

docker push iankulin/mdserver:$VERSION
docker push iankulin/mdserver:latest 
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;So I&amp;rsquo;d build two different versions, and use the tags to separate them. In the registry it&amp;rsquo;d look like this:&lt;/p&gt;</description></item><item><title>Docker volume backup is more complicated than it should be</title><link>https://devendevour.iankulin.com/docker-volume-backup-is-more-complicated-than-it-should-be/</link><pubDate>Fri, 17 Nov 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/docker-volume-backup-is-more-complicated-than-it-should-be/</guid><description>&lt;p&gt;&lt;a href="https://unccelearn.org/course/view.php?id=128&amp;amp;page=overview&amp;amp;lang=en" target="_blank" rel="noopener"&gt;&lt;img src="https://devendevour.iankulin.com/images/big.jpg" alt="" class="img-responsive"&gt; &lt;/a&gt; &lt;/p&gt;
&lt;p&gt;When I set up my first Docker container (I think for &lt;a href="https://devendevour.iankulin.com/uptime-kuma-nfty/"&gt;Uptime Kuma&lt;/a&gt;), I had read around and understood there were two choices for persistent storage: &lt;em&gt;bind mounts&lt;/em&gt; (where the data inside the container is effectively a symlink to a location on the local file system) or &lt;em&gt;named volumes&lt;/em&gt;, where Docker abstracted that away a bit, so you didn&amp;rsquo;t have to worry where it was - I sort of understood Docker &amp;lsquo;managed&amp;rsquo; it.&lt;/p&gt;</description></item><item><title>Ansible playbook to start Proxmox hosts</title><link>https://devendevour.iankulin.com/ansible-playbook-to-start-proxmox-hosts/</link><pubDate>Sun, 05 Nov 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/ansible-playbook-to-start-proxmox-hosts/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/mick-jagger-start-me-up-video-the-rolling-stones-far-out-magazine-copy.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;&lt;a href="https://devendevour.iankulin.com/proxmox-tags-to-solve-a-problem/"&gt;In my last post&lt;/a&gt; , I talked about tagging guests in a Proxmox node so I could easily see which VMs and LXCs I needed to manually start before I ran an Ansible script to run all my &lt;code&gt;apt updates&lt;/code&gt;. It would have been reasonable to wonder why I didn&amp;rsquo;t just add things to my playbook to magically do that.&lt;/p&gt;
&lt;p&gt;The answer would be, I haven&amp;rsquo;t gotten around to it yet, so here goes:&lt;/p&gt;</description></item><item><title>Proxmox tags to solve a problem</title><link>https://devendevour.iankulin.com/proxmox-tags-to-solve-a-problem/</link><pubDate>Thu, 02 Nov 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/proxmox-tags-to-solve-a-problem/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/slacroix_save_bookmark_flat_icon_vector_online_single_social_me_113006e0-eb8e-4cff-8692-20eb0573f35d.png" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;Each weekend I run an Ansible script that updates all my apt based VMs and containers. For the production machines, that&amp;rsquo;s everything, but my dev Proxmox is full of half-finished projects. Some of these have IP addresses reserved and are in the Ansible hosts file (because whatever service they are running is almost ready to move to the production server); others do not.&lt;/p&gt;
&lt;p&gt;Long story short, the dev server has some containers and VM&amp;rsquo;s that need to be turned on before I run the updates, and some that don&amp;rsquo;t. I could just start them all up, for the ten minutes the updates usually take, but that seems wasteful somehow. If only there were some way to mark the ones I need to turn on in the Proxmox webgui! Well, there is. We can add tags to machines in Proxmox.&lt;/p&gt;</description></item><item><title>apt update - BADSIG 871920D1991BC93C</title><link>https://devendevour.iankulin.com/apt-update-badsig-871920d1991bc93c/</link><pubDate>Mon, 30 Oct 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/apt-update-badsig-871920d1991bc93c/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/thdgown_there_was_a_huge_dragon_guarding_the_treasure_in_the_wo_5bbc5295-9c5c-4e04-805a-912552832900.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;I have an Ansible script that runs each weekend which basically does an &lt;code&gt;apt update &amp;amp;&amp;amp; apt upgrade -y&lt;/code&gt; on every Debian based instance. This weekend it failed on one Ubuntu host. When I went in to try it manually, this was the output:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;Hit:1 http://au.archive.ubuntu.com/ubuntu jammy InRelease
Hit:2 https://download.docker.com/linux/ubuntu jammy InRelease 
Hit:3 http://au.archive.ubuntu.com/ubuntu jammy-backports InRelease 
Hit:4 http://au.archive.ubuntu.com/ubuntu jammy-security InRelease 
Get:5 http://au.archive.ubuntu.com/ubuntu jammy-updates InRelease [119 kB] 
Err:5 http://au.archive.ubuntu.com/ubuntu jammy-updates InRelease 
 The following signatures were invalid: BADSIG 871920D1991BC93C Ubuntu Archive Automatic Signing Key (2018) &amp;lt;ftpmaster@ubuntu.com&amp;gt;
Get:6 https://pkgs.tailscale.com/stable/ubuntu jammy InRelease
Fetched 125 kB in 1s (125 kB/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
11 packages can be upgraded. Run &amp;#39;apt list --upgradable&amp;#39; to see them.
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://au.archive.ubuntu.com/ubuntu jammy-updates InRelease: The following signatures were invalid: BADSIG 871920D1991BC93C Ubuntu Archive Automatic Signing Key (2018) &amp;lt;ftpmaster@ubuntu.com&amp;gt;
W: Failed to fetch http://au.archive.ubuntu.com/ubuntu/dists/jammy-updates/InRelease The following signatures were invalid: BADSIG 871920D1991BC93C Ubuntu Archive Automatic Signing Key (2018) &amp;lt;ftpmaster@ubuntu.com&amp;gt;
W: Some index files failed to download. They have been ignored, or old ones used instead.
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id="solved"&gt;Solved&lt;/h3&gt; &lt;p&gt;The first &lt;a href="https://ubuntuforums.org/showthread.php?t=2484710" target="_blank" rel="noopener"&gt;google result&lt;/a&gt; mentions apt-cache - which &lt;a href="https://devendevour.iankulin.com/caching-apt-updates/"&gt;I also run&lt;/a&gt; , so a first level debug step is to delete the &lt;code&gt;/etc/apt/apt.conf.d/00aptproxy&lt;/code&gt; file that redirects apt requests to the cache I run in an LXC container. After that, if I re-run the &lt;code&gt;apt update&lt;/code&gt; it works perfectly. Seems like a problem with the cache then. I&amp;rsquo;m not sure why it would only affect this host though - I have other Ubuntu VM&amp;rsquo;s in the fleet that are not getting the original error.&lt;/p&gt;</description></item><item><title>Certbot - adding more virtual hosts</title><link>https://devendevour.iankulin.com/certbot-adding-more-virtual-hosts/</link><pubDate>Sun, 15 Oct 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/certbot-adding-more-virtual-hosts/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/dangling_pointer._a_central_neural_network_bathed_in_teal_and_m_9563eacf-6a8a-481d-a9e5-7fa72cabb4ea.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve got a domain that&amp;rsquo;s not currently used, so I&amp;rsquo;m going to set it up as a virtual host under NGINX. This server is already serving two domains set up with Certbot for SSL. Is it going to be possible to add another site and have Certbot manage the certificates for it after I&amp;rsquo;ve run Certbot once?&lt;/p&gt;
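&lt;p&gt;For context, the usual way this is handled is simply re-running Certbot with the new name once the http version of the site is serving. A sketch with a made-up domain - treat the exact invocation as my guess at a typical setup rather than gospel:&lt;/p&gt;

```shell
# See what certbot already manages on this server:
certbot certificates

# Request and install a certificate for the new virtual host only;
# the existing certificates are untouched:
certbot --nginx -d newsite.example.com
```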
&lt;p&gt;When I googled around to find out, I didn&amp;rsquo;t find anything - which is usually a sign I&amp;rsquo;m either asking a wrong question, or it&amp;rsquo;s so little drama that no one ever mentions it. I decided just to move the site, check it was all working for the http version, then run Certbot and see what it said.&lt;/p&gt;</description></item><item><title>Certbot &amp;amp; Let's Encrypt are great</title><link>https://devendevour.iankulin.com/certbot-lets-encrypt-are-great/</link><pubDate>Thu, 12 Oct 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/certbot-lets-encrypt-are-great/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/certbot.png" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve been managing SSL certificates for my domains purchased from &lt;a href="https://porkbun.com/" target="_blank" rel="noopener"&gt;PorkBun&lt;/a&gt; by going there every 90 days, downloading the certificates, &lt;a href="https://devendevour.iankulin.com/installing-ssl-certificates-with-nginx-on-docker/"&gt;joining them together&lt;/a&gt; to make the &lt;code&gt;fullchain.pem&lt;/code&gt;, then &lt;code&gt;scp&lt;/code&gt;-ing them to my servers. That&amp;rsquo;s been sort of manageable, but less than ideal.&lt;/p&gt;
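&lt;p&gt;For anyone following along, the &amp;ldquo;joining&amp;rdquo; step is nothing more exotic than concatenation - the leaf certificate followed by the intermediate chain. File names here are illustrative, and the contents are stand-ins so the snippet can run anywhere:&lt;/p&gt;

```shell
cd "$(mktemp -d)"

# Stand-in file contents; in real life these are the PEM files
# downloaded from the registrar.
printf 'LEAF\n'  > domain.cert.pem
printf 'CHAIN\n' > intermediate.cert.pem

# nginx wants the leaf certificate first, then the chain:
cat domain.cert.pem intermediate.cert.pem > fullchain.pem
cat fullchain.pem
```

&lt;p&gt;Then &lt;code&gt;scp&lt;/code&gt; the result to the server and reload nginx.&lt;/p&gt;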
&lt;p&gt;It also doesn&amp;rsquo;t work for my Australian domains. Since there&amp;rsquo;s strict rules about who can own a domain in the &lt;code&gt;.au&lt;/code&gt; space (&lt;em&gt;you have to have some sort of right to the name - a random person can&amp;rsquo;t obtain the &lt;code&gt;coke.com.au&lt;/code&gt; domain unless that&amp;rsquo;s a trading name, a trademark, or something similar&lt;/em&gt;), they have to be managed by one of about eight organisations, and the offerings are much simpler.&lt;/p&gt;</description></item><item><title>Solved DNS Issues - Proxmox, LXC, Ubuntu, Tailscale</title><link>https://devendevour.iankulin.com/solved-dns-issues-proxmox-lxc-ubuntu-tailscale/</link><pubDate>Fri, 06 Oct 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/solved-dns-issues-proxmox-lxc-ubuntu-tailscale/</guid><description>&lt;p&gt;&lt;a href="https://i.imgur.com/WmRbmf5.png" target="_blank" rel="noopener"&gt;&lt;img src="https://devendevour.iankulin.com/images/wmrbmf5.jpg" alt="" class="img-responsive"&gt; &lt;/a&gt; &lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve picked up a new TP-Link WAP with Omada, so I wanted to spin up an Ubuntu 20.04 LXC to run the controller software in, and ended up spending a couple of hours figuring out why things were not working.&lt;/p&gt;
&lt;p&gt;The initial problem was I was having connectivity issues pulling down the updates for all the packages required. I went down a bit of a tangent because I installed an apt cache the other day, so I was looking for problems there. Eventually I narrowed it down to DNS not working and started A/B testing like this:&lt;/p&gt;</description></item><item><title>Caching APT updates</title><link>https://devendevour.iankulin.com/caching-apt-updates/</link><pubDate>Tue, 03 Oct 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/caching-apt-updates/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/quangpham2576_realistic_red_hen_that_is_serving_a_plate_of_soft_b56bccf5-82c1-4bf9-9936-edd7606ab70a.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;It&amp;rsquo;s bothered me for a while that all these VM&amp;rsquo;s are pulling down a lot of the same updates. As well as needlessly using some bandwidth, I&amp;rsquo;m hammering the update servers (that I don&amp;rsquo;t pay for) with the same requests over and over. I did briefly consider running my own mirror, but that&amp;rsquo;s not simple, plus I&amp;rsquo;d then be mirroring a heap of files in a complete repository that I&amp;rsquo;d never use. What I really needed was some sort of cache so once I&amp;rsquo;d pulled down an update, it would hang around for a few days, available to other machines on the local network. Luckily, that exact thing exists - &lt;a href="https://www.unix-ag.uni-kl.de/~bloch/acng/html/index.html" target="_blank" rel="noopener"&gt;APT Cacher NG&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Installing service with Ansible</title><link>https://devendevour.iankulin.com/installing-service-with-ansible/</link><pubDate>Sat, 30 Sep 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/installing-service-with-ansible/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/mlguy_synthetic_woman_is_installing_her_robotic_arm_ac961357-5997-4b2a-9b50-6f91ae9a4bf7.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;Having written my little monitoring endpoint in Go, I need to push it out to all my servers and VM&amp;rsquo;s. Clearly this is a job for Ansible, which I&amp;rsquo;ve already &lt;a href="https://devendevour.iankulin.com/ansible-with-secrets/"&gt;dipped my toes into&lt;/a&gt;. Before we get onto doing that though, we need to have a think about how to make it a service.&lt;/p&gt;
&lt;h3 id="linux-services"&gt;Linux Services&lt;/h3&gt; &lt;p&gt;A service in Linux is just a program, but one that&amp;rsquo;s usually required to be running all the time to provide some piece of functionality. The &amp;ldquo;program&amp;rdquo; can be any executable, but to allow systemd to manage it, we need to tell it a bit about what we want in a &lt;code&gt;.service&lt;/code&gt; file. This file is used by &lt;code&gt;systemd&lt;/code&gt; to know how to manage the service. They can get quite complex, but here&amp;rsquo;s the simple one for &lt;code&gt;vitals-glimpse&lt;/code&gt; - my little monitoring API endpoint.&lt;/p&gt;</description></item><item><title>Simple API endpoint in Go</title><link>https://devendevour.iankulin.com/simple-api-endpoint-in-go/</link><pubDate>Wed, 27 Sep 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/simple-api-endpoint-in-go/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/gopher.png" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;I&amp;rsquo;d like a small, quick, low load endpoint on all my nodes and VM&amp;rsquo;s that exposes a text keyword indicating if that machine is okay for RAM and disk space. I&amp;rsquo;m currently using &lt;a href="https://devendevour.iankulin.com/tags/uptime-kuma/"&gt;Uptime Kuma&lt;/a&gt; to monitor if these machines are pingable, but I&amp;rsquo;d love a tiny bit more information from them so I&amp;rsquo;d get a &lt;a href="https://devendevour.iankulin.com/uptime-kuma-nfty/"&gt;Ntfy&lt;/a&gt; buzz on my phone if a machine is in trouble.&lt;/p&gt;
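&lt;p&gt;The check itself is simple enough that the idea fits in a few lines of shell - this is my sketch of it rather than the actual Go code, and the 90% thresholds are numbers picked for illustration:&lt;/p&gt;

```shell
# Report a single keyword: OK unless root disk or RAM usage is high.
disk_pct=$(df --output=pcent / | tr -dc '0-9')
mem_pct=$(free | awk '/^Mem:/ {printf "%d", $3 / $2 * 100}')

status=OK
if [ "$disk_pct" -ge 90 ]; then status=DISK_LOW; fi
if [ "$mem_pct" -ge 90 ]; then status=MEM_LOW; fi
echo "$status"
```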
&lt;p&gt;I mentioned a couple of weeks ago that the benefit of doing it in C rather than Node.js was probably not worth the trouble, but then, being a fickle developer, I decided to write it in Go.&lt;/p&gt;</description></item><item><title>Problems backing up LXC to NFS in Proxmox</title><link>https://devendevour.iankulin.com/problems-backing-up-lxc-to-nfs-in-proxmox/</link><pubDate>Sun, 24 Sep 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/problems-backing-up-lxc-to-nfs-in-proxmox/</guid><description>&lt;p&gt;If you create an unprivileged LXC container on Proxmox, then try to back it up to an NFS share, for example on a NAS, you&amp;rsquo;ll get an error when it tries to build the temporary file.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/screen-shot-2023-08-14-at-9.15.29-pm.png" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;The clue is in the &lt;code&gt;Permission denied&lt;/code&gt; line. It is trying to create a temporary file on my NAS, and failing because of a &lt;a href="https://devendevour.iankulin.com/could-it-be-a-permissions-problem/"&gt;permissions&lt;/a&gt; problem. If I try the same backup to the local storage, it works fine.&lt;/p&gt;</description></item><item><title>Error wiping old drive in Proxmox</title><link>https://devendevour.iankulin.com/error-wiping-old-drive-in-proxmox/</link><pubDate>Thu, 31 Aug 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/error-wiping-old-drive-in-proxmox/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/screen-shot-2023-07-22-at-12.19.42-pm-copy.png" alt="Error: disk/partition &amp;lsquo;/dev/sda3&amp;rsquo; has a holder (500)" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;When I popped in an NVME drive and freshly installed Proxmox to it, I assumed I&amp;rsquo;d just be able to wipe the SSD that had previously been the boot drive to set it up as a ZFS pool. However, when I tried to do the wipe, I was greeted with the error:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;disk/partition &amp;#39;/dev/sda3&amp;#39; has a holder (500)
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;I assume this means there&amp;rsquo;s a flag set on one of the Proxmox partitions to prevent accidental deletion or Proxmox thought that&amp;rsquo;s where it was running from. It&amp;rsquo;s likely that it&amp;rsquo;s related to this message I had during installation that I haven&amp;rsquo;t seen before:&lt;/p&gt;</description></item><item><title>Installing a Node app on a server</title><link>https://devendevour.iankulin.com/installing-a-node-app-on-a-server/</link><pubDate>Tue, 22 Aug 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/installing-a-node-app-on-a-server/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/clu_create_an_image_where_a_cute_little_girl_stands_in_a_whimsi_45944303-8475-48ed-9b8d-d291b525138d.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;Before I write a fancy Ansible playbook to automatically set up the Nginx/Node combo on my web servers, it might be worth going through how to deploy a Node app so it can run on a server without you being logged in.&lt;/p&gt;
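&lt;p&gt;To give away the ending slightly: the piece that lets the app run without a login session is a systemd unit. A sketch of one - every name and path here is a placeholder for illustration, and &lt;code&gt;mktemp&lt;/code&gt; stands in for &lt;code&gt;/etc/systemd/system/myapp.service&lt;/code&gt; so the snippet can run anywhere:&lt;/p&gt;

```shell
# Write a minimal systemd unit for a Node app.
unit=$(mktemp)
printf '%s\n' \
  '[Unit]' \
  'Description=myapp Node service' \
  'After=network.target' \
  '' \
  '[Service]' \
  'User=myapp' \
  'WorkingDirectory=/opt/myapp' \
  'ExecStart=/usr/bin/node server.js' \
  'Restart=on-failure' \
  '' \
  '[Install]' \
  'WantedBy=multi-user.target' \
  > "$unit"

cat "$unit"
```

&lt;p&gt;Installed for real, it would be enabled with &lt;code&gt;systemctl enable --now myapp&lt;/code&gt;.&lt;/p&gt;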
&lt;p&gt;Until now, I&amp;rsquo;ve been running my tests on my laptop, or on a server logged in as myself - sometimes detaching from tmux. But we need a slightly more professional setup than that. The process will look something like this:&lt;/p&gt;</description></item><item><title>Digital Ocean first impressions</title><link>https://devendevour.iankulin.com/digital-ocean-first-impressions/</link><pubDate>Sat, 19 Aug 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/digital-ocean-first-impressions/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/peacenode.eth_an_1970s_advert_for_blockchain_deep_sea_ocean_com_78fb2b2f-24c1-4d4f-a703-3f22bada628f_webp.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve been thinking about the time it takes me to provision a guest VM in Proxmox. I seem to remember on &lt;a href="https://www.binarylane.com.au/" target="_blank" rel="noopener"&gt;BinaryLane&lt;/a&gt; it was seconds rather than minutes. This seemed to be a good excuse to use the free credit I&amp;rsquo;ve heard about for &lt;a href="https://www.linode.com/lp/free-credit-100/?promo=sitelin100-02162023&amp;amp;promo_value=100&amp;amp;promo_length=60&amp;amp;utm_source=google&amp;amp;utm_medium=cpc&amp;amp;utm_campaign=11178784684_109179223363&amp;amp;utm_term=g_kwd-2629795801_e_linode&amp;amp;utm_content=466889596558&amp;amp;locationid=1000676&amp;amp;device=c_c&amp;amp;gclid=CjwKCAjw-7OlBhB8EiwAnoOEk9lQtzb_l17rAJmoU1KzhTUcWc6TF6C8KBTZU3j6tJ3d1qLWqqiRgxoC6qUQAvD_BwE" target="_blank" rel="noopener"&gt;Linode&lt;/a&gt; or Digital Ocean hundreds of times in podcast adverts, so I claimed the &lt;a href="http://do.co/lnl" target="_blank" rel="noopener"&gt;$200 credit for being a Late Night Linux listener&lt;/a&gt; at Digital Ocean. They extracted $5 out of me in the process, so I guess they are in front on that transaction. 
$200 would run a little VM for a couple of years at their rates, but of course it&amp;rsquo;s limited to two months - at the end of which I will have an account sitting there with my credit card already recorded, so all the friction is gone if I need an internet-facing machine for some purpose. Which is clearly their dastardly plan.&lt;/p&gt;</description></item><item><title>Ansible with Secrets</title><link>https://devendevour.iankulin.com/ansible-with-secrets/</link><pubDate>Sun, 13 Aug 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/ansible-with-secrets/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/danbearpig_construction_process_photos_of_an_enormous_hyper-sec_4bbf6350-647d-4e32-971b-cd2041cb52a9_webp.jpg" alt="Two men standing in front of a giant vault door" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;We wrote a nice &lt;a href="https://devendevour.iankulin.com/first-ansible-playbook/"&gt;little Ansible playbook&lt;/a&gt; the other day to install nginx on our web servers and ensure it was running. We were able to store the usernames in the &lt;code&gt;hosts&lt;/code&gt; inventory file using the &lt;code&gt;ansible_ssh_user&lt;/code&gt; variable. Then, we ran the playbook with the command:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;ansible-playbook web_installs.yaml --ask-become-pass&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;This asked us for the password to use with the usernames in the &lt;code&gt;hosts&lt;/code&gt; file. Luckily, that day it was the same username/password combo for sudo on every server. What happens if that&amp;rsquo;s not the case? Here&amp;rsquo;s our new hosts file for today. There&amp;rsquo;s a cool new sysadmin in town - Jane.&lt;/p&gt;</description></item><item><title>Finding the host IP from inside a Docker container</title><link>https://devendevour.iankulin.com/finding-the-host-ip-from-inside-a-docker-container/</link><pubDate>Mon, 07 Aug 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/finding-the-host-ip-from-inside-a-docker-container/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/ak_writer_the_lost_whale_story_e5979736-74f1-4404-9dd1-8c6c1047c244.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;Having successfully set up and tested my node.js api handling app behind nginx on a development VM in the homelab, I decided to move it to my VPS so I could start using it for real. I had a bit of trouble finding the nginx.conf files on the VPS, until I remembered I was running nginx in a docker container on this machine!&lt;/p&gt;
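&lt;p&gt;For background: from inside a container on the default bridge network, the Docker host is the container&amp;rsquo;s default gateway. Here is that extraction against a canned &lt;code&gt;ip route&lt;/code&gt; output - the addresses are the usual Docker defaults, but treat them as an example:&lt;/p&gt;

```shell
# Typical "ip route" output inside a container on the default bridge
# (canned here so the snippet runs anywhere; real addresses will differ):
sample='default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.2'

# The host is the gateway of the default route - field 3:
host_ip=$(printf '%s\n' "$sample" | awk '/^default/ {print $3; exit}')
echo "$host_ip"
```

&lt;p&gt;Newer Docker versions also let you pass &lt;code&gt;--add-host host.docker.internal:host-gateway&lt;/code&gt; at run time and use that name instead of an IP.&lt;/p&gt;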
&lt;p&gt;I got everything set up, I could hit the domain in a web browser and get served the static page, and I could hit &amp;lt;domain_name&amp;gt;:3000/api/gnp_temp.txt and get the file delivered by the node script, but if I tried &amp;lt;domain_name&amp;gt;/api/gnp_temp.txt - &amp;ldquo;Bad Gateway&amp;rdquo;.&lt;/p&gt;</description></item><item><title>nginx in Front of a node.js app</title><link>https://devendevour.iankulin.com/nginx-in-front-of-a-node-js-app/</link><pubDate>Fri, 04 Aug 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/nginx-in-front-of-a-node-js-app/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/jonaslittorin_strictly_digital_content_web_server_technology_we_fad86a29-71f0-439c-9900-2134fea30897.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;NGINX is a great webserver and reverse proxy - as in it can hand off requests to other web-servers. That&amp;rsquo;s the situation I want to have set up on my VPS. I want NGINX to handle incoming requests - some of them will just be sorted out by returning static HTML, others (like the weather api I&amp;rsquo;ve been playing with) need to be handed off to other services to respond to.&lt;/p&gt;</description></item><item><title>First Ansible Playbook</title><link>https://devendevour.iankulin.com/first-ansible-playbook/</link><pubDate>Wed, 26 Jul 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/first-ansible-playbook/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/biomage_biomechanical_cyborg_computer_hacker_keyboard_protrudin_3d895c1b-0776-4f6e-b1a6-733b5622ea5d.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;In the &lt;a href="https://devendevour.iankulin.com/getting-started-with-ansible/"&gt;previous post&lt;/a&gt; , we looked at getting up and running with Ansible, including using the ad-hoc mode to send commands to our servers. We had a inventory file called hosts that had groups of server IP addresses and a simple &lt;code&gt;ansible.cfg&lt;/code&gt; file that pointed to our inventory file.&lt;/p&gt;
&lt;h3 id="playbooks"&gt;Playbooks&lt;/h3&gt; &lt;p&gt;Ansible playbooks are used to collect together a description of the state we want in a server. When the playbook is executed, Ansible figures out what things need need changed, and changes them. If you&amp;rsquo;re used to the procedural nature of a bash script, where things proceed from one step to the next, and there might be decision branches, this requires an adjustment in your thinking. This is similar to the adjustment I had getting my head around &lt;a href="https://betterprogramming.pub/swiftui-understanding-declarative-programming-aaf05b2383bd" target="_blank" rel="noopener"&gt;SwiftUI&lt;/a&gt; , and moving from JS to &lt;a href="https://levelup.gitconnected.com/why-react-is-declarative-a300d1e930b7?gi=3d11485226b4" target="_blank" rel="noopener"&gt;React&lt;/a&gt; .&lt;/p&gt;</description></item><item><title>Proxmox 8.0 Install</title><link>https://devendevour.iankulin.com/proxmox-8-0-install/</link><pubDate>Sun, 23 Jul 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/proxmox-8-0-install/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/alaviles_experience_the_gold_standard_in_local_desktop_virtuali_f1a1d3a4-d7b1-489f-be57-41388033eea1.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m normally an x.1 release type of sysadmin, but the increasing temptation of installing Proxmox 8.0 while I&amp;rsquo;ve got some time off - plus the fact that I&amp;rsquo;ve got a cluster, so I can just move the VM&amp;rsquo;s around - all adds up to thinking I&amp;rsquo;ll do that today.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/cluster-2.png" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;Here&amp;rsquo;s how my system works. It consists of three HP-800 mini G2&amp;rsquo;s. &lt;code&gt;pve-prod1&lt;/code&gt; is a bit fancier - i7 6700T and 32GB, the other two are i5 6500T and 16GB. The production VM&amp;rsquo;s use the local SSD but backups go to the NAS. All the machines are currently running Proxmox 7.4. They are not clustered in the proper sense - I don&amp;rsquo;t need high availability, and I don&amp;rsquo;t want to run them all the time. &lt;code&gt;pve-prod1&lt;/code&gt; runs 24/7 and I just power up &lt;code&gt;pve-dev1&lt;/code&gt; when I&amp;rsquo;m working on something.&lt;/p&gt;</description></item><item><title>Getting Started with Ansible</title><link>https://devendevour.iankulin.com/getting-started-with-ansible/</link><pubDate>Wed, 19 Jul 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/getting-started-with-ansible/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/cyberpunk_24_k_hyper_realistic_a_thousand_details_hyper_detaile_841f4769-e869-497f-a804-c9fade21e150.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;Ansible is a system for executing commands on remote systems. It allows a declarative approach - so if you run a playbook (the system configuration files are called playbooks) that says a system has a Docker container running Jellyfin, Ansible will check if that&amp;rsquo;s true, and if not, make it so. Ansible is best used when you have a large number of systems to maintain, but even with a small number, it serves to document systems as well as to automate their creation.&lt;/p&gt;</description></item><item><title>How to recover a docker run command</title><link>https://devendevour.iankulin.com/how-to-recover-a-docker-run-command/</link><pubDate>Sun, 16 Jul 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/how-to-recover-a-docker-run-command/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/andywatt83_a_developer_environment_in_a_container_using_docker_051f6abb-8c38-4b2d-85cf-7c3f8744118b.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;Imagine if, let&amp;rsquo;s say hypothetically, you&amp;rsquo;d set up an application months ago with a &lt;code&gt;docker run&lt;/code&gt; command. Then you&amp;rsquo;d heard there had been an update to the app because of a security issue. So you need to stop/remove the container, pull a new image and restart it. Trouble is, you don&amp;rsquo;t remember the exact &lt;code&gt;run&lt;/code&gt; command you used to start it.&lt;/p&gt;
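&lt;p&gt;Before the punchline, it helps to know that everything the &lt;code&gt;run&lt;/code&gt; command set is recorded in the container&amp;rsquo;s metadata, which &lt;code&gt;docker inspect&lt;/code&gt; prints as JSON. A sketch using a heavily trimmed, canned inspect document (the real output is far longer), with &lt;code&gt;jq&lt;/code&gt; pulling out a couple of the fields you would need to rebuild the command:&lt;/p&gt;

```shell
# A heavily trimmed stand-in for "docker inspect some-container" output:
inspect='[{"Config":{"Image":"louislam/uptime-kuma:1",
                     "Env":["TZ=Australia/Perth"]},
           "HostConfig":{"PortBindings":{"3001/tcp":[{"HostPort":"3001"}]}}}]'

# The image the container was started from:
printf '%s' "$inspect" | jq -r '.[0].Config.Image'

# The container-side port that was published:
printf '%s' "$inspect" | jq -r '.[0].HostConfig.PortBindings | keys[0]'
```

&lt;p&gt;There are also purpose-built tools for exactly this - &lt;code&gt;runlike&lt;/code&gt; being a well-known one - that reconstruct the whole command for you.&lt;/p&gt;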
&lt;p&gt;This didn&amp;rsquo;t happen to me, since all my VM setups are in git as markdown (I&amp;rsquo;m pre-Ansible), but I did google how to do it, thinking there would be an easy way, before I bothered to look through my config files.&lt;/p&gt;</description></item><item><title>Updating SSL Certificates</title><link>https://devendevour.iankulin.com/updating-ssl-certificates/</link><pubDate>Wed, 12 Jul 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/updating-ssl-certificates/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/0_logo_minimal_modern_vector_it_tools_security_anonymous_vuln_31d19059-50fd-4809-bff1-a13ef295807e.png" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;When I first installed my SSL certificates, &lt;a href="https://devendevour.iankulin.com/installing-ssl-certificates-with-nginx-on-docker/"&gt;I mentioned&lt;/a&gt; it&amp;rsquo;s a process I need to automate before they come up for expiry. Here we are ten days out and I haven&amp;rsquo;t done that yet, but I have been keeping an eye on it through the excellent display and notifications set up in &lt;a href="https://devendevour.iankulin.com/uptime-kuma-nfty/"&gt;Uptime Kuma&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/screen-shot-2023-07-10-at-5.36.01-pm.png" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;Updating the certificates is easy. When I went into the site at Porkbun (where I purchased the domain, and who do the primary DNS for the site), the next certificates were sitting there to be downloaded. My existing certificates were due to expire on 30th July, and these had been generated on 3rd July.&lt;/p&gt;</description></item><item><title>How to deploy a Node.js app</title><link>https://devendevour.iankulin.com/how-to-deploy-a-node-js-app/</link><pubDate>Wed, 05 Jul 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/how-to-deploy-a-node-js-app/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/naresh_create_a_github_account_and_a_new_repository._install_gi_c8bce4b2-201f-422b-815c-bb6286fb000a.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;This is one of those things that is simple once you know it. I had my &lt;a href="https://devendevour.iankulin.com/using-node-js-to-return-a-static-file/"&gt;tiny Node service working&lt;/a&gt; on my MacBook, but how do I run it on the server?&lt;/p&gt;
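&lt;p&gt;One hedged sketch of an answer, assuming a native Node install: a systemd unit to keep the service alive across crashes and reboots. The paths, user, and names here are hypothetical:&lt;/p&gt;

```ini
[Unit]
Description=Tiny Node.js static file service (sketch)
After=network.target

[Service]
ExecStart=/usr/bin/node /home/ian/app/server.js
WorkingDirectory=/home/ian/app
Restart=on-failure
User=ian

[Install]
WantedBy=multi-user.target
```

&lt;p&gt;Saved as, say, &lt;code&gt;/etc/systemd/system/tiny-node.service&lt;/code&gt; (a name made up for the sketch), it would be enabled with &lt;code&gt;systemctl enable --now tiny-node&lt;/code&gt;.&lt;/p&gt;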
&lt;h3 id="native-or-container"&gt;Native or Container&lt;/h3&gt; &lt;p&gt;Obviously I need Node.js installed on the server, should I have it in a Docker container, or native on the machine. There&amp;rsquo;s no clear answer here - in a container set up with Docker Compose might be more in line with my ideology of treating machines as disposable, but a native install is simpler, and I probably want to make life simpler at this stage when I&amp;rsquo;m learning everything.&lt;/p&gt;</description></item><item><title>Containers</title><link>https://devendevour.iankulin.com/containers/</link><pubDate>Sun, 07 May 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/containers/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/troop_team_of_4_programmers_productively_working_standing_with__cb8656e7-ffd0-41df-b5bb-778ff18fd910.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;There are a few things that really strike me as significant improvements to life since I was commercially developing 20 years ago:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Accessing information - the first time I &lt;em&gt;bought&lt;/em&gt; the development stack to write commercial software against the Windows SDK it came in a huge carton with, I guess, fifteen or so 2&amp;quot; thick books. That was how you looked things up in those days. Fast forward to an internet connected world of websites, stack exchange, Discord and ChatGPT. So much better.&lt;/li&gt;
&lt;li&gt;Open source - an actually useful thing that the entire connected world runs on, not just a weird hippy idea. It&amp;rsquo;s almost routine to open source your code now, and everyone benefits from that.&lt;/li&gt;
&lt;li&gt;Containers - &amp;ldquo;getting things working&amp;rdquo; used to be a thing. Most times now, when I want to spin something up to play with it, it just works because all the dependencies are bundled with it, and it doesn&amp;rsquo;t mutate the environment in any way I don&amp;rsquo;t know about. There&amp;rsquo;s no friction to run a giant app, and no hangover for the OS when I nuke it.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I love this great explanation from Coderized about containers - I wish I&amp;rsquo;d seen it five months ago.&lt;/p&gt;</description></item><item><title>Git/GutHub - macOS - marking file as executable</title><link>https://devendevour.iankulin.com/git-guthub-macos-marking-file-as-executable/</link><pubDate>Sun, 30 Apr 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/git-guthub-macos-marking-file-as-executable/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/uwillc_a_computer_screen_displaying_the_github_page_3622791d-5c28-458b-acac-8f2ca2066179.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m working on the world&amp;rsquo;s shortest shell script - it&amp;rsquo;s called by &lt;code&gt;cron&lt;/code&gt; to pull down a JSON weather report to a text file using &lt;code&gt;curl&lt;/code&gt; so I can expose it on an Nginx endpoint. The purpose is to allow me to hammer that weather API from multiple machines I control without violating the TOS of my free API key.&lt;/p&gt;
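&lt;p&gt;The fix this post&amp;rsquo;s title alludes to, sketched in a throwaway repo so nothing real is touched - the script name and the &lt;code&gt;WEATHER_URL&lt;/code&gt; placeholder are hypothetical:&lt;/p&gt;

```shell
# Hedged sketch: making sure git records the script as executable (mode
# 100755). Runs in a throwaway repo; script name and WEATHER_URL are made up.
REPO=$(mktemp -d)
cd "$REPO"
git init -q .
printf '#!/bin/sh\ncurl -s "$WEATHER_URL" -o weather.json\n' > get-weather.sh
chmod +x get-weather.sh                       # fixes the mode on disk
git add get-weather.sh
git update-index --chmod=+x get-weather.sh    # ...and forces it in the index
git ls-files -s get-weather.sh                # 100755 = executable as far as git cares
```

&lt;p&gt;Once the index shows 100755, the executable bit survives the push/pull round trip to the VPS.&lt;/p&gt;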
&lt;p&gt;Because I&amp;rsquo;m learning all the things, instead of just creating this on the VPS where it runs, it&amp;rsquo;s cloned from my GitHub repo for that machine. I&amp;rsquo;m creating and editing the file in VS Code on macOS, pushing to GitHub, then pulling the changes on the Ubuntu VPS. The intention is that this will eventually be automated with a GitHub Action.&lt;/p&gt;</description></item><item><title>Installing SSL Certificates with Nginx on Docker</title><link>https://devendevour.iankulin.com/installing-ssl-certificates-with-nginx-on-docker/</link><pubDate>Sat, 29 Apr 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/installing-ssl-certificates-with-nginx-on-docker/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/starliu_building_trust_with_ai_challenges_and_solutions_a519169f-8b94-4b34-88d9-e2e635bc5996.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;When you&amp;rsquo;ve successfully got Nginx running in a Docker container, AND got your &lt;a href="https://devendevour.iankulin.com/adding-a-domain-name-to-a-vps/"&gt;domain correctly pointing&lt;/a&gt; at your nascent website, you&amp;rsquo;re then going to want to set it up for encrypted, and therefore trusted, browsing with SSL.&lt;/p&gt;
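&lt;p&gt;A hedged sketch of the relevant part of the Nginx server block - the certificate file names are assumptions based on the bundle Porkbun supplies, and the paths are wherever you mount them into the container:&lt;/p&gt;

```nginx
server {
    listen 443 ssl;
    server_name devendevour.iankulin.com;

    # Paths inside the container; mounted in via a Docker volume
    ssl_certificate     /etc/nginx/ssl/domain.cert.pem;
    ssl_certificate_key /etc/nginx/ssl/private.key.pem;

    root /usr/share/nginx/html;
}
```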
&lt;h3 id="certificates"&gt;Certificates&lt;/h3&gt; &lt;p&gt;A couple of posts ago, I &lt;a href="https://devendevour.iankulin.com/adding-a-domain-name-to-a-vps/"&gt;mentioned&lt;/a&gt; that it was simpler to let Porkbun be the authoritative nameserver for a domain. Part of the reason for that is that if we do that, Porkbun had a button you can press which connects to LetsEncrypt and generates the certificates for you. This usually takes an hour or so, then you&amp;rsquo;ll be able to download the bundle from that same page.&lt;/p&gt;</description></item><item><title>Adding a Domain Name to a VPS</title><link>https://devendevour.iankulin.com/adding-a-domain-name-to-a-vps/</link><pubDate>Fri, 28 Apr 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/adding-a-domain-name-to-a-vps/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/sjramblings_io_aws_route_53_resolver_is_a_dns_resolution_servic_227bbb4f-1ff3-455d-84fa-5e8ea4310df8_png_92.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;I&amp;rsquo;ve had a small &lt;a href="https://www.binarylane.com.au/" target="_blank" rel="noopener"&gt;BinaryLane VPS&lt;/a&gt; for a while that I use for homelab type stuff, but now need to serve a tiny amount of JSON from it. A longer term plan is to use it as a &lt;a href="https://www.wireguard.com/" target="_blank" rel="noopener"&gt;Wireguard&lt;/a&gt; tunnel back to my cluster at home to expose the services that need to be internet facing. I&amp;rsquo;ve also had a domain name I bought from &lt;a href="https://porkbun.com/products/domains" target="_blank" rel="noopener"&gt;Porkbun&lt;/a&gt; sitting round for a bit, so it&amp;rsquo;s probably a good time to join them up.&lt;/p&gt;</description></item><item><title>Using NAS for Proxmox backups</title><link>https://devendevour.iankulin.com/using-nas-for-proxmox-backups/</link><pubDate>Mon, 10 Apr 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/using-nas-for-proxmox-backups/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/pisskatt_wrapped_eth_cryptocurrency_coins_wrapped_8k_2fe1bfde-8bed-4851-ac42-6dc00e4ef98f.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;&lt;a href="https://devendevour.iankulin.com/moving-a-vm-between-two-proxmox-hosts/"&gt;A few weeks ago&lt;/a&gt; , I was very excited to be able to take a snapshot of a virtual machine, copy it across the network from that Proxmox node, copy it back across the network to a different Proxmox node, start it there, and have it up and running, without it noticing it was actually on different hardware.&lt;/p&gt;
&lt;p&gt;Backing up a VM is pretty simple - you just click on the node, choose &lt;em&gt;Backup&lt;/em&gt;, and click the &lt;em&gt;Backup Now&lt;/em&gt; button. The ease and completeness of backing up a VM is one of the main reasons I&amp;rsquo;m using Proxmox for my systems.&lt;/p&gt;</description></item><item><title>Proxmox VM Memory Upgrade</title><link>https://devendevour.iankulin.com/proxmox-vm-memory-upgrade/</link><pubDate>Sun, 19 Mar 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/proxmox-vm-memory-upgrade/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/screen-shot-2023-03-16-at-6.36.10-pm.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;I ordered some RAM this week for my production server - it&amp;rsquo;s quickly becoming clear that memory, rather than processing power, is the limiting factor when running lots of services and VMs that don&amp;rsquo;t get much use. I&amp;rsquo;m not really a hardware guy, so figuring out exactly what RAM I need is a slightly fraught process - I won&amp;rsquo;t be fully confident I&amp;rsquo;ve ordered the right thing until I install it, boot up, and see my &lt;a href="https://support.hp.com/us-en/product/hp-elitedesk-800-35w-g2-desktop-mini-pc/7633266/document/c04816235" target="_blank" rel="noopener"&gt;G2 800&lt;/a&gt; come to life maxed out at 32GB.&lt;/p&gt;</description></item><item><title>Accessing a Synology NAS from Linux</title><link>https://devendevour.iankulin.com/accessing-a-synology-nas-from-linux/</link><pubDate>Mon, 20 Feb 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/accessing-a-synology-nas-from-linux/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/img_4154x.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;I picked up a Synology DS216j NAS from eBay to use for storage for the rapidly growing home lab. The eventual plan is that as well as my VM backups, it will host the media library, and eventually (when this has all proved itself reasonably bullet-proof) my current Dropbox contents. That won&amp;rsquo;t all fit on the 2x2TB drives that the DS216j came with, and I have a pair of 8TBs on hand, but I wanted to set it up and check it all worked.&lt;/p&gt;</description></item><item><title>Configuring Proxmox for Free Use</title><link>https://devendevour.iankulin.com/configuring-proxmox-for-free-use/</link><pubDate>Thu, 16 Feb 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/configuring-proxmox-for-free-use/</guid><description>&lt;p&gt;I installed Proxmox on my second server last night, and tonight when I ran &lt;code&gt;apt update&lt;/code&gt; I ran into the error you get when you haven&amp;rsquo;t bought a license.&lt;/p&gt;
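&lt;p&gt;The fix, for the record, is to swap the enterprise repository for the no-subscription one. A hedged sketch - the repo lines are written to a throwaway directory here so it&amp;rsquo;s safe to run anywhere, but on a real node they belong under &lt;code&gt;/etc/apt&lt;/code&gt;, and the &lt;code&gt;bullseye&lt;/code&gt; suite matches a PVE 7 install:&lt;/p&gt;

```shell
# Hedged sketch of the usual no-subscription fix, written to a temp dir
# standing in for /etc/apt so this demo is safe to run anywhere.
APT_DIR=$(mktemp -d)
# 1. comment out the enterprise repo that 401s without a subscription
printf '# deb https://enterprise.proxmox.com/debian/pve bullseye pve-enterprise\n' \
  > "$APT_DIR/pve-enterprise.list"
# 2. add the free no-subscription repo
printf 'deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription\n' \
  > "$APT_DIR/pve-no-subscription.list"
cat "$APT_DIR"/*.list
```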
&lt;pre tabindex="0"&gt;&lt;code&gt;Err:5 https://enterprise.proxmox.com/debian/pve bullseye InRelease 
 401 Unauthorized [IP: 103.67.14.50 443]
Reading package lists... Done 
E: Failed to fetch https://enterprise.proxmox.com/debian/pve/dists/bullseye/InRelease 401 Unauthorized [IP: 103.67.14.50 443]
E: The repository &amp;#39;https://enterprise.proxmox.com/debian/pve bullseye InRelease&amp;#39; is not signed.
N: Updating from such a repository can&amp;#39;t be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Even though I guess it was only a month ago (let that sink in, people who think the Raspberry Pi they just bought is going to be the last homelab hardware they buy 😊) since I set up my first Proxmox server, I&amp;rsquo;d already forgotten there&amp;rsquo;s a step to enable it to get updates without a subscription.&lt;/p&gt;</description></item><item><title>Moving a VM between two Proxmox hosts</title><link>https://devendevour.iankulin.com/moving-a-vm-between-two-proxmox-hosts/</link><pubDate>Thu, 16 Feb 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/moving-a-vm-between-two-proxmox-hosts/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/s-l640.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;So, the very small datacentre has undergone a major hardware upgrade today. The HP 800 G1 is joined by an HP 800 G2. Four core i7 vs the old two core i5. Double the RAM to 16GB, four times the disk. The old machine will become a dev/play machine - still virtualised, and the new machine will run the production apps, mostly in Docker containers.&lt;/p&gt;
&lt;p&gt;Since everything is containerised, I did consider running Ubuntu Server on the bare metal of the new machine, but running it on Proxmox gives me flexibility now and in the future, and since we&amp;rsquo;ve stepped up the underlying hardware so substantially, performance will be well in front anyway.&lt;/p&gt;</description></item><item><title>Uptime Kuma &amp;amp; NFTY</title><link>https://devendevour.iankulin.com/uptime-kuma-nfty/</link><pubDate>Wed, 15 Feb 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/uptime-kuma-nfty/</guid><description>&lt;p&gt;&lt;a href="https://github.com/louislam/uptime-kuma" target="_blank" rel="noopener"&gt;Uptime Kuma&lt;/a&gt; is a monitoring tool suitable for self-hosting, and as well as being a good tool for monitoring the status of your network and applications, it&amp;rsquo;s a nice smallish app to get started on Docker containers.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/screen-shot-2023-02-05-at-6.41.24-am.png" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;Since it&amp;rsquo;s in a container, you need to create a volume and pass it in to persist your settings. Then it&amp;rsquo;s just a matter of adding each item you want to monitor. There&amp;rsquo;s a heap of fancy options for this; the only three I&amp;rsquo;ve used are ping (just pings an address), http(s) (requests a page and checks the header for a 200), and http(s) keyword (looks at the returned page for a keyword in the HTML).&lt;/p&gt;</description></item><item><title>ssh key login on VPS</title><link>https://devendevour.iankulin.com/ssh-key-login-on-vps/</link><pubDate>Sun, 12 Feb 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/ssh-key-login-on-vps/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/pucker_side_view_of_a_female_knight_walking_up_to_a_castle_door_645ac316-6393-4e33-8199-36bf31d88b53.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;Due to &lt;a href="https://devendevour.iankulin.com/chinese-hackers-want-to-steal-my-hello-world-container/"&gt;potential brute force attacks&lt;/a&gt;, it&amp;rsquo;s a good idea to turn off password access via ssh and instead rely on ssh keys. In this post, I&amp;rsquo;ll run through that process.&lt;/p&gt;
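&lt;p&gt;The key generation step, sketched end to end - the demo writes into a temporary directory, where normally you&amp;rsquo;d accept the default of &lt;code&gt;~/.ssh/id_ed25519&lt;/code&gt;:&lt;/p&gt;

```shell
# Hedged sketch: generate an ed25519 keypair. Written to a temp dir for the
# demo - on a real machine you would accept the default ~/.ssh/id_ed25519.
KEYDIR=$(mktemp -d)
ssh-keygen -q -t ed25519 -N "" -C "demo key" -f "$KEYDIR/id_ed25519"
ls "$KEYDIR"   # id_ed25519 (private, stays put) and id_ed25519.pub (goes to the server)
```

&lt;p&gt;The public half is what gets copied to the server, typically with &lt;code&gt;ssh-copy-id&lt;/code&gt;.&lt;/p&gt;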
&lt;h4 id="generating-your-key"&gt;Generating your key&lt;/h4&gt; &lt;p&gt;On a mac (or actually most *ix systems), your ssh keys live in the &lt;code&gt;.ssh&lt;/code&gt; directory inside the users home directory. Since it starts with a period, it&amp;rsquo;s a &amp;lsquo;hidden&amp;rsquo; directory. To see it in Finder press&lt;/p&gt;</description></item><item><title>Save Proxmox password in Chrome</title><link>https://devendevour.iankulin.com/save-proxmox-password-in-chrome/</link><pubDate>Sat, 11 Feb 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/save-proxmox-password-in-chrome/</guid><description>&lt;p&gt;When I installed Proxmox, I&amp;rsquo;d used a secure, and therefore absurdly long and complicated root password. I do use a password manager, but don&amp;rsquo;t have it integrated into Chrome, so it was buggging me having to find it and paste it in each time - why wasn&amp;rsquo;t Chrome offering to save it for me?&lt;/p&gt;
&lt;p&gt;Well, you&amp;rsquo;d guess it was something to do with this. I feel like Chrome is trying to tell me something here:&lt;/p&gt;</description></item><item><title>Saved by the qemu_guest_agent</title><link>https://devendevour.iankulin.com/saved-by-the-qemu_guest_agent/</link><pubDate>Fri, 10 Feb 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/saved-by-the-qemu_guest_agent/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/pucker_photo_of_female_cyborg_holding_a_small_child_in_her_arms_ac9cb085-3dd4-444b-8a0c-6dafc5b48418.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;Literally an hour after I wrote the post &lt;a href="https://devendevour.iankulin.com/proxmox-qemu-guest-agent/"&gt;about installing the qemu guest agent&lt;/a&gt; in a VM and explaining how it can be used to inject root level commands into a VM, I had a use for it, thanks to a mistake.&lt;/p&gt;
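&lt;p&gt;For anyone repeating the exercise, a hedged sketch of the safe way to add a sudoers drop-in - it uses a temporary directory standing in for &lt;code&gt;/etc/sudoers.d&lt;/code&gt;, and validates with &lt;code&gt;visudo -cf&lt;/code&gt; where available:&lt;/p&gt;

```shell
# Hedged sketch: a sudoers drop-in, written to a temp dir standing in
# for /etc/sudoers.d so this demo cannot touch the real system.
SUDOERS_D=$(mktemp -d)
printf '%s\n' 'ian ALL=(ALL) NOPASSWD: ALL' > "$SUDOERS_D/ian"
chmod 0440 "$SUDOERS_D/ian"
# Always validate a sudoers file before it can lock you out
if command -v visudo >/dev/null; then
  visudo -cf "$SUDOERS_D/ian"
fi
cat "$SUDOERS_D/ian"
```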
&lt;p&gt;I&amp;rsquo;d decided to add myself to the sudoers file. Since the last line in that file is a directive to include all the files in the /etc/sudoers.d directory, the accepted way to do that for local changes is to create a file in that directory with the necessary commands.&lt;/p&gt;</description></item><item><title>Proxmox - Qemu-guest-agent</title><link>https://devendevour.iankulin.com/proxmox-qemu-guest-agent/</link><pubDate>Thu, 09 Feb 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/proxmox-qemu-guest-agent/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/pucker_large_stone_wall_with_a_crack_of_sunlight_shining_throug_b2b090d2-7855-4170-9c5c-a899b205668d.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;One of the strengths of having virtual machines (VMs) running inside a hypervisor like Proxmox is how they are isolated from each other and their host - if there is a problem with a particular VM, nothing else should be affected by it.&lt;/p&gt;
&lt;p&gt;But this can also be a pain if the hypervisor needs access to a VM to control or monitor it in some way that&amp;rsquo;s only possible from inside the VM. Proxmox can use the &lt;a href="https://qemu-project.gitlab.io/qemu/interop/qemu-ga.html" target="_blank" rel="noopener"&gt;Qemu Guest Agent&lt;/a&gt; for this purpose. To oversimplify, this is a daemon that runs in the VM, opens a unix socket/virtual serial port to the hypervisor, and listens for commands on it. With Proxmox, the main use of this is to aid in orderly shutdowns and backups, but it also allows us to run commands in the VM from Proxmox - an obvious security compromise. You definitely would not want to install this daemon on a hosted VPS.&lt;/p&gt;</description></item><item><title>SSH &amp;amp; the scary warning</title><link>https://devendevour.iankulin.com/ssh-the-scary-warning/</link><pubDate>Wed, 08 Feb 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/ssh-the-scary-warning/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/screen-shot-2023-01-28-at-8.41.11-pm.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
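&lt;p&gt;Accepted host fingerprints end up in &lt;code&gt;~/.ssh/known_hosts&lt;/code&gt;, which &lt;code&gt;ssh-keygen&lt;/code&gt; can search and prune. A hedged sketch against a throwaway file rather than the real one:&lt;/p&gt;

```shell
# Hedged sketch: inspecting and pruning known_hosts entries with ssh-keygen,
# run against a throwaway file instead of the real ~/.ssh/known_hosts.
KH=$(mktemp)
ssh-keygen -q -t ed25519 -N "" -f "$KH.hostkey"       # fake host key for the demo
printf '192.168.100.20 %s\n' "$(cut -d' ' -f1,2 "$KH.hostkey.pub")" > "$KH"
ssh-keygen -F 192.168.100.20 -f "$KH"                 # find the saved fingerprint
ssh-keygen -R 192.168.100.20 -f "$KH"                 # drop it, e.g. after a server rebuild
```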
&lt;p&gt;The first time you connect to a new server with ssh, it asks you something like:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;➜ ~ &amp;gt; ssh ian@192.168.100.20 
The authenticity of host &amp;#39;192.168.100.20 (192.168.100.20)&amp;#39; can&amp;#39;t be established.
ED25519 key fingerprint is SHA256:ZcNTcOjO/0fOLC5iNChf8Q8MHN7z2d+VV0qz7XqH1g4.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added &amp;#39;192.168.100.20&amp;#39; (ED25519) to the list of known hosts.
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Once you&amp;rsquo;ve said yes, it adds the server &amp;lsquo;fingerprint&amp;rsquo; to the known hosts file, then next time you ssh there, it feels safe - we know this server.&lt;/p&gt;</description></item><item><title>Proxmox - Installing a Virtual Machine</title><link>https://devendevour.iankulin.com/proxmox-installing-a-virtual-machine/</link><pubDate>Tue, 07 Feb 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/proxmox-installing-a-virtual-machine/</guid><description>&lt;p&gt;Installing your first virtual machine (VM) in the Proxmox hypervisor is pretty straightforward. This post runs through those steps using Proxmox 7.3.&lt;/p&gt;
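&lt;p&gt;For the record, the GUI wizard has a CLI equivalent in &lt;code&gt;qm create&lt;/code&gt;. Every value below (the VM id, storage name, and ISO path) is an assumption for the sketch - check &lt;code&gt;man qm&lt;/code&gt; on your node:&lt;/p&gt;

```shell
# Hedged sketch: the CLI equivalent of the Proxmox "Create VM" wizard.
# VM id, storage name, and ISO path are all illustrative assumptions.
CMD='qm create 101 --name ubuntu-test --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:32 --cdrom local:iso/ubuntu-22.04-live-server-amd64.iso'
echo "$CMD"   # printed rather than executed - qm only exists on a Proxmox node
```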
&lt;p&gt;You need an operating system for your virtual machine. I&amp;rsquo;m going to use &lt;a href="https://ubuntu.com/download/server" target="_blank" rel="noopener"&gt;Ubuntu Server&lt;/a&gt; in this example, but it could just as easily be &lt;a href="https://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-2016-essentials" target="_blank" rel="noopener"&gt;Windows Server&lt;/a&gt;, or regular Windows, or one of the desktop Linux distributions. Whichever you decide, you&amp;rsquo;ll need to find and download the ISO for it. The ISO is a (usually quite large) file needed to install the operating system.&lt;/p&gt;</description></item><item><title>sudo Incident Reports - where do they go?</title><link>https://devendevour.iankulin.com/sudo-incident-reports-where-do-they-go/</link><pubDate>Sat, 04 Feb 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/sudo-incident-reports-where-do-they-go/</guid><description>&lt;p&gt;Even though it&amp;rsquo;s &lt;em&gt;my&lt;/em&gt; server, I still have a pang of guilt when this happens.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/screen-shot-2023-01-28-at-10.40.43-am-copy.png" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;I always imagine &lt;a href="https://en.wikipedia.org/wiki/Richard_Stallman" target="_blank" rel="noopener"&gt;Richard Stallman&lt;/a&gt; (or someone with a similar 2000&amp;rsquo;s database administrator beard) looking at me disappointedly and shaking his head slowly.&lt;/p&gt;
&lt;p&gt;It does raise the question though - since it&amp;rsquo;s my server, shouldn&amp;rsquo;t I be getting a text message from CERN or something?&lt;/p&gt;
&lt;h4 id="where-is-this-report"&gt;Where is this report?&lt;/h4&gt; &lt;p&gt;(&lt;a href="https://xkcd.com/838/" target="_blank" rel="noopener"&gt;Relevant xkcd&lt;/a&gt; )&lt;/p&gt;
&lt;p&gt;Like everything, the answer is &amp;lsquo;it&amp;rsquo;s logged&amp;rsquo;. We can use the &lt;code&gt;journalctl&lt;/code&gt; command to look at the logs. On this server, which has been running for less than 20 hours, there are already several thousand lines to look through if you just enter &lt;code&gt;journalctl&lt;/code&gt;, so I&amp;rsquo;m going to just send all the high priority logs to a file:&lt;/p&gt;</description></item></channel></rss>