<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Ssd on dev.endevour</title><link>https://devendevour.iankulin.com/tags/ssd/</link><description>Recent content in Ssd on dev.endevour</description><generator>Hugo</generator><language>en-AU</language><lastBuildDate>Sat, 09 Sep 2023 00:00:00 +0000</lastBuildDate><atom:link href="https://devendevour.iankulin.com/tags/ssd/index.xml" rel="self" type="application/rss+xml"/><item><title>Basic VPS disk speed</title><link>https://devendevour.iankulin.com/basic-vps-disk-speed/</link><pubDate>Sat, 09 Sep 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/basic-vps-disk-speed/</guid><description>&lt;p&gt;I couldn&amp;rsquo;t help but measure some VPS disk speeds while I was busting out the &lt;code&gt;fio&lt;/code&gt;.&lt;/p&gt;
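&lt;p&gt;For reference, a quick random-read test looks roughly like this (a sketch - the file path, size, and runtime are assumptions; point &lt;code&gt;--filename&lt;/code&gt; at scratch space you can afford to fill):&lt;/p&gt;

```shell
# 4k random-read benchmark with fio (sketch; tune size/runtime to taste)
fio --name=randread \
    --filename=/tmp/fio-testfile \
    --size=1G \
    --rw=randread \
    --bs=4k \
    --iodepth=32 \
    --direct=1 \
    --runtime=30 \
    --time_based \
    --group_reporting
```

&lt;p&gt;The &lt;code&gt;--direct=1&lt;/code&gt; flag bypasses the page cache, so you&amp;rsquo;re measuring the drive rather than RAM.&lt;/p&gt;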
&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/vps.png" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;Binary Lane only claims &amp;ldquo;pure SSD drives&amp;rdquo; but seems pretty great. The difference between Digital Ocean SSD and NVMe is disappointingly small. Of course you&amp;rsquo;re sharing the underlying drive with other tenants, so these numbers probably depend on what your neighbours are doing at the time.&lt;/p&gt;</description></item><item><title>Testing Storage Speed</title><link>https://devendevour.iankulin.com/testing-storage-speed/</link><pubDate>Sun, 03 Sep 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/testing-storage-speed/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/shawnjooste_hero_image_welcome_playful_colorful_tech_company_co_5e8971cb-4cb0-4aa8-938a-610467b485c6.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;Now that I&amp;rsquo;ve added NVMe drives to my nodes, plus an external NVMe RAID, I&amp;rsquo;ve got quite the collection of storage options. For one of my nodes, it looks like this:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/screen-shot-2023-07-23-at-1.20.34-pm.png" alt="Screenshot of Proxmox GUI showing 5 storage options" class="img-responsive"&gt; &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The 256GB NVMe drive the OS is installed on&lt;/li&gt;
&lt;li&gt;The 512GB SSD, currently running ZFS&lt;/li&gt;
&lt;li&gt;The Synology NAS - 4 x 6TB drives in RAID 5, reached over a 1Gb Ethernet switch&lt;/li&gt;
&lt;li&gt;A pair of 256GB NVMe sticks in an external USB 3 enclosure, set up as a mirrored ZFS pool&lt;/li&gt;
&lt;/ul&gt;
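&lt;p&gt;Setting up that last one is a one-liner once the sticks show up as block devices (a sketch - the pool name and device IDs are assumptions; check &lt;code&gt;ls -l /dev/disk/by-id/&lt;/code&gt; for your enclosure&amp;rsquo;s actual IDs):&lt;/p&gt;

```shell
# Create a mirrored ZFS pool from two USB NVMe sticks
# (sketch; substitute your own by-id device paths)
zpool create usbmirror mirror \
    /dev/disk/by-id/usb-NVMe_STICK_A \
    /dev/disk/by-id/usb-NVMe_STICK_B

# Confirm both sides of the mirror are ONLINE
zpool status usbmirror
```

&lt;p&gt;Using &lt;code&gt;/dev/disk/by-id/&lt;/code&gt; paths rather than &lt;code&gt;/dev/sdX&lt;/code&gt; matters for USB enclosures, since the short device names can shuffle between reboots.&lt;/p&gt;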
&lt;p&gt;I usually put my dev VMs&amp;rsquo; storage on the NAS - that makes them super easy to move between nodes. The production VMs currently have their storage on the SSD (that machine hasn&amp;rsquo;t had the NVMe upgrade yet), but with all these options it&amp;rsquo;s worth thinking about what goes where.&lt;/p&gt;</description></item><item><title>SSD Wearout numbers</title><link>https://devendevour.iankulin.com/sdd-wearout-numbers/</link><pubDate>Tue, 25 Apr 2023 00:00:00 +0000</pubDate><guid>https://devendevour.iankulin.com/sdd-wearout-numbers/</guid><description>&lt;p&gt;&lt;img src="https://devendevour.iankulin.com/images/lionovich_computer_cries_because_of_dead_ssd_6149b1c0-005e-41d2-a912-eb864a307555.jpg" alt="" class="img-responsive"&gt; &lt;/p&gt;
&lt;p&gt;I didn&amp;rsquo;t understand why the default Proxmox install sets up storage the way it does - splitting the available disk into a regular LVM volume and an LVM-thin pool - so I&amp;rsquo;ve been reading this excellent &lt;a href="https://blog.programster.org/proxmox-storage-guide" target="_blank" rel="noopener"&gt;Proxmox Storage Guide&lt;/a&gt; by Programster (spoiler: LVM-thin makes VM snapshots easier).&lt;/p&gt;
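&lt;p&gt;Incidentally, the per-drive wear figures Proxmox shows come straight from each drive&amp;rsquo;s SMART data, so you can pull the same numbers from a shell (a sketch, assuming &lt;code&gt;smartmontools&lt;/code&gt; is installed; &lt;code&gt;/dev/sda&lt;/code&gt; is an assumption - substitute your SSD&amp;rsquo;s device node):&lt;/p&gt;

```shell
# Dump the drive's SMART attributes and pick out the wear-related ones.
# Vendors name the attribute differently, e.g. Media_Wearout_Indicator
# (Intel) or Wear_Leveling_Count (Samsung), hence the loose grep.
smartctl -a /dev/sda | grep -i -E 'wear|percent'
```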
&lt;p&gt;At one point in the post they mention that you can see the &amp;ldquo;Wearout&amp;rdquo; percentage for any SSDs in the Proxmox GUI, so of course, since I now own five second-hand HP EliteDesk 800 G1/G2s, all with SSDs, I dived in to have a look at each drive and found this.&lt;/p&gt;</description></item></channel></rss>