Obligatory disclaimer: I've alluded to this a few times, but I'm likely to express opinions on computer products and/or the software I work on -- so "I work for Dell Technologies, these are my personal opinions and are not those of nor approved by my employer." Legal wants that (or something close) for all public-facing statements... so there we go.
Obligatory Context / Alternatives: Before we get into why I didn't / don't choose to use them, know that there is no shortage of home NAS hardware and software. FreeNAS is apparently now TrueNAS Core; that's a pretty well-used FreeBSD-based OS which does storage via ZFS. If you aren't into building your own systems and just need something speedy around the house -- you may want to just look at something like this or this -- fast NVMe storage with enough slots to have a decent amount of space is likely better from a power draw and heat perspective than traditional hard disk drives. (The Gen3 x1 PCIe lane dedicated to each m.2 drive means you shouldn't invest much in "latest" drives -- as I understand it, the network is going to bottleneck the throughput anyway, so the drives can be relatively throttled compared to desktop or full server environments.)
Anyway -- with all that out of the way pointing you at the "usual" NAS solutions... why am I such an oddball to be building my own? (Besides that I'm here... we're all already known oddballs, after all...)
1) I'm probably paranoid and secretive (I know, I know... no one else here is at all that way...) -- I don't want my personal data on other people's machines. Hence, I'm not backing up anything to The Cloud (no matter which cloud). So Requirement The First: Home machine backups.
This can be larger than you think -- I mess around with VMs in my personal environment, I run multiple OS's on my main machine -- and I have other machines to account for (a Mac Mini which serves my iTunes library to the AppleTVs in the house, my son's school computer, my son's non-school computer, wife's laptop, etc.... all of which I try to have back up silently and seamlessly so that I can fully restore them if need be.... and that adds up).
At the moment -- we're talking 73TiB lying around -- granted, I could probably clean up much more aggressively, but 20TiB of it is just iTunes files, so it does add up.
2) Historically, I used Fedora Server for other things. (Okay, originally I ran OS/2 Warp as my router/dial-up modem server -- that became a FreeNAS (don't remember the version) box at some point because the work VPN worked better with it, then Linux eventually because I'm more comfortable with LVM than I ever was with ZFS and the like. More on that in a minute.)
At the least, beyond NAS type stuff -- I want to:
+ Run Java/Minecraft Server(s) for my son. He was big on wanting some multi-player before I'd let him on the Internet, so a shared world for the two of us was the next best thing at the time. (He's moved past this, of course... but that's part of why I set it up the way I did).
+ Run KVM virtual machines -- I mess around with OS's for a living and for fun, and being able to spin up a virtual machine to participate in testing, check out a different UI / distro / whatnot is handy (and again, more on this in a minute). I do this sometimes with Hyper-V on Windows, but KVM works through QEMU, which is much, much more flexible in its emulation. You can fake NUMA on non-NUMA machines, for one thing -- and given my area of focus for several years was kernel memory allocation, that's important to me. On the more recent front, you can emulate NVMe drives that are really just files on the hypervisor, including enabling features that are really hard to find in consumer-grade drives, like Persistent Memory Region (PMR) support! (There's a rough sketch of what that looks like right after this list.) This was more of a "want" in the past -- but for the current build it became a "must", because of the next point.
+ Specifically wanted to set up a dogfood node for the storage product I work on. Since I specifically work on making this work on virtual platforms -- and since I'd had things pop up when people broke things doing stuff for "normal" hardware that was wrong for AWS (which uses Nitro, which is realistically just Amazon's in-house modified KVM) -- I wanted to have something that was AWS-ish but with more variety: a mix of drive types and, unlike temporary VMs spun up for specific purposes, actual data hosted on it. I wasn't nutty enough to port it directly to the hardware, but I could tweak things to get it to recognize the slightly different virtual environment.
+ Be power / heat cognizant. The build before this was a full tower case (Fractal Define 7, I believe) that I set up so I could put 12 drives in off of a SAS expander. I thought with the front fans I had enough air flow -- but I ended up with multiple data drives failing with unrecoverable errors after a couple of years, which I'm pretty sure was due to heat. I don't have the space nor the network cabling for a true server room - and California summers can be toasty. We do run the house A/C enough that I loathe PG&E prices with every fiber of my being, but I think the lack of airflow combined with some days when the ambient crept up was just too much for the drives. Hence this build tried to avoid that.
+ Hotplug. A must, must, must in my setups -- because drives fail or I expand capacity eventually, and I never want to take the whole system down (why would you?)
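As promised above, here's roughly what that QEMU flexibility looks like in practice -- a minimal sketch of faking a two-node NUMA topology and exposing a plain file as an emulated NVMe drive. The paths and sizes here are made up purely for illustration; in real life libvirt generates the equivalent for you (or you can pass these options through in the domain XML):

# Illustrative only: 4 vCPUs / 8GiB guest pretending to be a 2-node NUMA box,
# plus an "NVMe drive" that is really just a file sitting on the hypervisor.
qemu-system-x86_64 -machine q35,accel=kvm -cpu host -smp 4 -m 8G \
  -object memory-backend-ram,id=mem0,size=4G \
  -object memory-backend-ram,id=mem1,size=4G \
  -numa node,nodeid=0,memdev=mem0,cpus=0-1 \
  -numa node,nodeid=1,memdev=mem1,cpus=2-3 \
  -drive file=/var/lib/libvirt/images/fake-nvme.img,if=none,id=nvme0,format=raw \
  -device nvme,drive=nvme0,serial=fake0001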
3) Probably more importantly than I'm letting on -- I've been building my own systems for decades now. It is fun, I enjoy being able to do things exactly as I want, etc. So most of the pre-built NAS options wouldn't be on my mind to start with... that they're not typically going to let me choose my OS (without some work anyway) or be good for VMs/Dockers/whatnot just reinforces things for me.
I figure it would also be useful to mention how I got to where I am today with things. I mentioned before that the prior build to this one was 12 drives in a Fractal 7 case -- that replaced a Lenovo ThinkServer TS440 8-bay home server. It has been literal years (according to Amazon, I bought that one in 2015 -- so forgive my lack of memory here), but I believe I purchased that to fulfill most of the requirements above and because we'd been using Lenovo workstations at work at the time and they seemed decent for the cash. It was somewhat enterprise (redundant power supplies... which was pointless since I didn't have them plugged into different UPS's or anything), the bays were hotplug, etc... but even with the fans I was getting heat spikes from time to time -- and the proprietary drive bays, motherboard, etc. annoyed me to no end when it came time to do upgrades. Hence I was back to "build your own" when I did the previous case -- and this is also why I tend to steer away from pre-built NAS solutions. I enjoy the freedom of changing out parts when and if I need to, for what I can find. I do wish there was more "mid-grade" style hardware between full-on Server Room Enterprise stuff and cheap-o commercial grade... but I just try to shop as wisely as I can.
The reason I ramble on about this is that due to the repeated heat issues, and to prepare to migrate to a new solution, my first step was picking up a Sabrent 10-bay 3.5" drive external enclosure and throwing some Seagate Exos 18TiB drives in it. Probably purely personal experience here -- but over the years, I'd tried Western Digital NAS and Enterprise drives as well as the Seagates -- and the Exos drives have been the most reliable for me once I get good ones. I say that because I also have a track record of getting 1 or 2 bad ones out of 8 or 10 when I order to set things up. Bad as in "won't even spin up" bad, typically. On the positive side -- Seagate has been good setting up RMAs ("Return Merchandise Authorization" -- basically return/exchange, if you haven't mucked about with computer parts) via their web site, covering shipping of the faulty drives and sending me replacements promptly. So that's been nice.
The enclosure removes the drives from the rest of the system, gives a nice big heat sink / fan combo that's still pretty quiet and lets me plug the drives into whatever I see fit. I used that to copy the data over to a fresh BTRFS setup before building the new box and retiring the old, simplifying things immensely.
Which means I should probably talk about BTRFS now before going into the build.
When I started all this back with the Lenovo -- I knew I wanted to build the data store such that if I had drive failures I could recover from them. Those of you who know about this stuff already know what I'm about to say -- but for those who don't, this is typically done by placing copies of the data on multiple drives, at the disk block or cluster level (the small "pieces" of your data that are glued together into a file, which is then presented as a single unit by your OS... kind of. I'm not verbose enough to digress into a description of file inodes, vnodes, sparse trees and whatnot!). This can be done in different ways -- the truly hardcore / traditional way to do it was hardware RAID (Redundant Array of Inexpensive Disks -- note that "Inexpensive" is relative here!). The disks would be plugged into a controller card which really was its own small computer -- tasked solely with managing those disks, with its own firmware interface so you could tell it how you wanted things done, etc. The RAID controller would take the incoming I/O and send it to the appropriate underlying drives, and correspondingly would present reads as if there were a single volume backed by multiple disks. Wikipedia (https://en.wikipedia.org/wiki/RAID) isn't terrible if you want to read more. The advantage to hardware is that as long as you're moving both the controller card and the drives, it can move between systems (of course the OS has to support the card... quiet, you hecklers!), it has its own memory and processor to do the operations so it tends to be faster and not interfere with the rest of the system, etc. With the multiple copies -- if one drive dies (or even starts to die), the controller can flag it, raise an alarm and stop using it in favor of the other copies. How many copies you keep (which implies how many drives you can lose before you've effectively lost your data) is a tradeoff, since using drives for copies means you lose total storage capacity.
Over the years, with processors becoming more powerful (so the impact of such transactions is less felt) and with people not wanting to pay for the hardware, software RAID has become more prevalent. There are various ways to do software RAID -- the ones that are pertinent here are LVM-based RAID, BTRFS and ZFS.
When I originally built the Lenovo system I had a FreeBSD based solution in mind -- FreeNAS, most likely. Since the storage product I work on is FreeBSD based, this made sense to me at the time as it would give me a FreeBSD hands-on system to test patches, do kernel development, etc. Unfortunately, while I had looked into things like driver support for the TS440's disk controller card, etc.... I was not aware that FreeBSD did not play well with the power management firmware of the system -- every fan in the box was running at 100% with no way to throttle them. Since I most certainly wanted the fans there (just with dynamic throttling), I quickly decided to do a Linux installation. My Linux preferences are more on the RedHat side of things -- so Fedora Linux was my choice and remains so.
ZFS on Linux was not a thing at that time -- and I expected to use disks of various sizes, which complicates ZFS setup for disk pools -- so I was left with either buying hardware to change the controller card (not going to happen) or LVM. BTRFS at that time was barely stable for single disks, much less multiple disks. So that setup involved adding all the drives (physical volumes / PVs) to a volume group, then setting up logical volumes using RAID options (which makes them do the drive mirroring and whatnot under the covers). This works fairly well, but can be a bit clunky to manage, especially moving drives across systems or when you're recovering from drive failure.
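For the curious, the LVM flavor of that setup boils down to something like this -- a minimal sketch, with example device names (don't point it at disks you care about without checking):

# Raw disks become physical volumes, get pooled into a volume group,
# then a mirrored logical volume is carved out of the pool.
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
vgcreate datavg /dev/sdb /dev/sdc /dev/sdd /dev/sde
# --type raid1 (or raid10/raid5/raid6) makes LVM handle the mirroring under the covers
lvcreate --type raid1 -m 1 -L 8T -n shares datavg
mkfs.xfs /dev/datavg/shares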
At some point in the last decade (again, sorry I don't remember specifics here), BTRFS became stable on multiple devices (to the point where Fedora actually uses it as the default file system). This allows me to have a filesystem set up automatically across the drives as RAID10 and mount it by mounting any of the drives or via UUID. Nice and simple, since BTRFS allows you to have drives of different sizes participating (it just gets interesting for the "spare" space -- it may not use the full drive if, for example, you have one big drive and a bunch of small ones, since it can't mirror properly in that case), has commands to rebalance / migrate data, etc. You can also do interesting things with Copy-On-Write snapshots and whatnot, but to be honest that's not something I do often so I'll skip talking about it.
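The BTRFS equivalent is pleasantly short -- again a sketch with illustrative device names:

# One RAID10 filesystem spanning four drives (data and metadata both mirrored + striped)
mkfs.btrfs -L shares -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# Mount by UUID (mounting any member device also works)
mount UUID=$(blkid -s UUID -o value /dev/sdb) /shares
# Later: add another drive and rebalance data onto it -- no downtime required
btrfs device add /dev/sdf /shares
btrfs balance start -dusage=75 /shares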
So -- with all that in mind, here's the chosen hardware with some reasoning as to why:
Case: Silverstone CS381 Micro-ATX/Mini-DTX/Mini-ITX. Wanted something smaller than the full tower but which still had good air flow. This had good reviews, 8 hot-swap drive bays with the SAS connectors already (just have to provide a low profile Host Bus Adapter (HBA from here on in)) and wasn't ludicrously priced.
Motherboard: MSI PRO B760M-A -- just looking for a decent basic mATX board here. I'd done Ryzen on the last build and hit a couple oddities, so was just going mid-range Intel this time. No overclocking or wildness... just enough to run stuff and stay reasonably cool.
Processor: (First attempt) Core i5 12400F -- didn't need to be overpowered or a full Xeon (wasn't going to look for a Xeon micro-ATX board, part of the point is to stay in easy to find / afford HW for the most part), but needed enough oomph and threads to run server loads and the VMs. Then I discovered that I couldn't fit even a simple graphics card into this setup with the HBA taking the PCIe x16 slot... and what the "F" meant. Oops.
(Second attempt) Core i5-12400 (no F). For those who also didn't know the Intel naming scheme at the time... that means this supports on-chip graphics so you don't have to use a discrete graphics card. Which is more than sufficient for basic setup / getting the system running -- administration is via web or ssh anyway.
Memory: 2x 64GiB kits (2x32GiB each) of DDR4-3200 -- so 4 sticks of 32GiB, 128GiB total. I believe that was the maximum supported stable configuration; certainly I was going for as much memory as possible, since I planned on devoting a good portion of it to the VM(s).
Low-profile CPU cooler (i.e. "What fits the case")
Power Supply: Corsair SF750, SFX -- first time I used an "SFX" supply... honestly, I don't remember much about it. It works, what more do you want?
HBA: LSI Broadcom SAS 9300-8i: That's a lot of tech-ese for what amounts to "A low profile (so it fits in the case) adapter from PCIe to the SAS connections for the hot bays that can support 8 drives." Decent reviews, I did not and do not need to play around with flashing the firmware to try to get hardware RAID support (which some folks do with LSI and the like... the true enterprise versions of this stuff adds on-card cache, expanded firmware options, etc... that's all overkill for my needs). Obviously I got the cables as well.
The aforementioned USB enclosure with 10x 18TiB drives and my usual LAN data on them.
8x 20TiB drives to mess with in my VM.
2x Samsung 870 QVO 8TiB SSDs, again for the VM. (Part of the reason I chose this case: there are 2.5" SATA mounts above each 4-drive 3.5" bay, so it is easy to have a 10-drive setup of this type -- 8 spinning rust HDDs and 2 SSDs for faster caching or whatnot (or yes, you could always use 3.5 to 2.5 adapters or something.... we know... sit down in the back there!))
1 Samsung 980 Pro m.2 NVMe (1 TiB) for the VM.
Originally, I bought a few USB-to-2.5GbE ethernet adapters (again for the VM) -- I don't remember the model right now. But they sucked in practice, so I ended up getting an Intel i225-V PCIe card instead.
Assembly of the system was pretty straightforward -- there may have been gotchas, but after more than a year I don't remember many. As is typical for any system build - first I put the RAM, CPU and m.2 drives in the motherboard, got the motherboard in the case, plugged in the fans, put in the power supply and added that cabling, put in the HBA card and did its cabling, then did an initial bootup / test.
That was the point that I realized that I had no on-board video -- and that there was no suitable slot for a graphics card with the HBA in place as mentioned. So I got to quickly order the right CPU, wait a day or so for it to get here and take everything back apart enough (if I recall correctly, it wasn't terrible -- I could get to the cooler / CPU socket without taking more than the case off and the HBA back out) to swap out the CPU. Once I had that done, I could get into the BIOS and validate the system.
Since I again knew I'd be running VMs and wanted to pass through devices, one thing I made sure to do in the BIOS was to enable VT-x and VT-d (or whatever they're named nowadays; I think that was the nomenclature at the time). Basically "Allow the CPU to run a hypervisor" (run virtual machines at all) and "Allow I/O devices to be virtualized".
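Once the OS is up, it's worth a quick sanity check that both toggles actually took effect -- something along these lines:

# VT-x shows up as the "vmx" CPU flag (AMD's equivalent is "svm")
grep -E -c '(vmx|svm)' /proc/cpuinfo
# VT-d / IOMMU active? Look for DMAR / IOMMU lines in the kernel log
dmesg | grep -i -e dmar -e iommu
# And confirm the KVM modules are loaded
lsmod | grep kvm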
With the system up, I went ahead and installed Fedora Linux Server on it -- again, this being over a year ago, I honestly don't remember all the specifics, sorry. My gut instinct is that I booted from USB install media and installed the OS (so it created the EFI boot partition, pointed at the UUID of the new root partition, etc.). At the moment, these are XFS filesystems using LVM -- current Fedora is more likely to just make a boot partition and a big non-boot partition, use BTRFS, and set up the different mount points as subvolumes within that. Either will work. I believe I then just used an m.2 to USB-C adapter I have to attach the drive of the former NAS and used dd to copy the root and home partitions of the previous version. I'm nothing if not lazy sometimes -- and since I have the NAS configured how I like it, I really don't relish recreating all that. Then I edited /etc/fstab to make sure the UUIDs and mount points matched up, attached the USB enclosure for the /shares BTRFS main data store, and that pretty much got the data hosting part of the NAS done.
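If you take the same lazy route, the outline of that migration looks roughly like this -- the device names and UUID placeholders below are illustrative, not the real ones from my box:

# Clone the old root and home partitions onto the new drive's partitions
dd if=/dev/sdX2 of=/dev/nvme0n1p2 bs=4M status=progress conv=fsync
dd if=/dev/sdX3 of=/dev/nvme0n1p3 bs=4M status=progress conv=fsync
# Grab the UUIDs the new system should mount...
blkid /dev/nvme0n1p2 /dev/nvme0n1p3
# ...and make /etc/fstab agree with them, e.g.:
# UUID=<root-uuid>    /        xfs    defaults           0 0
# UUID=<home-uuid>    /home    xfs    defaults           0 0
# UUID=<btrfs-uuid>   /shares  btrfs  defaults,noatime   0 0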
Now, if you've read this far -- you may feel gypped... "That tells me nothing!". And that's a fair point. So I've tried to think of what I'd be doing if I started completely from scratch.
That sequence would likely be something like this:
1) Install the Linux of your choice from USB media, preferably a Server variant of their distro if there is one (more likely to do remote administration, less likely to shove everything in Gnome and systemd). Ninety-nine times out of one hundred these days you won't need to worry about partition layout if you're installing on a new clean drive -- but just check to make sure it isn't messing with your intended data drives (unless you want that.. mirroring both data and OS across all the drives or something... if you go the TrueNAS route, they actually build the OS image on USB typically, and you boot from dual USB drives so they're machine independent, redundant and all your disks are data. Your call). I would caution you that if you plan to do more than data serving (run VMs, containers, do coding, whatnot) you should probably give a little more space to the root filesystem than most do by default - that's where system-wide programs are installed (/usr, /lib, /usr/lib) and resizing root after the fact can be fun. So plan ahead.
2) If available (like on Fedora), install Cockpit or something similar (web-based administration). Don't do this if you're going to make your NAS visible on the Internet (I know for some folks that's part of the point -- hosting your own files so you can access them wherever... I'm not that type and prefer the security of the NAS not being visible behind the IPv4 NAT routing and all). Once that's up, use it to configure your shares -- it is almost certainly simpler. Otherwise, you'll probably want to install a Samba package if you're going to be sharing with Windows machines (because let's face it, most are). Purely Linux or Unix sharing is over NFS, which should be there practically by default in most distros. You can configure your /etc/samba/smb.conf file for your environment, what shares you want to expose, etc. (See why I recommended Cockpit here?)
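On Fedora, that whole step boils down to something like the following (the share name and user are just examples):

# Web admin console
dnf install -y cockpit cockpit-storaged
systemctl enable --now cockpit.socket        # then browse to https://<nas>:9090
# Samba for the Windows machines
dnf install -y samba
systemctl enable --now smb nmb
smbpasswd -a dmorris                         # give the share user a Samba password
# A minimal share stanza for /etc/samba/smb.conf:
# [shares]
#     path = /shares
#     read only = no
#     valid users = dmorris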
3) Set up your data share as you wish. As mentioned, for me it was BTRFS. If you're doing TrueNAS - it will ask you ZFS questions and set up the disk pools for you. There are many ways -- I don't think it would be productive to list them all here. If you do go BTRFS and have many drives, I would recommend using crontab to set up a periodic rebalance and scrub -- I believe it helps avoid problems as well as giving early warning if there's a sector read problem (as things are scanned for these operations).
For example, on my box:

root@freenas:/home/dmorris# crontab -l
0 1 * * * flock -x /tmp/dmbtrfs.lck btrfs balance start -dusage=50 -dlimit=2 -musage=50 -mlimit=4 /shares
10 10 * * sun flock -x /tmp/dmbtrfs.lck btrfs scrub start -B -d -c 2 -n 4 /shares
(flock is a "file system based lock" -- it just keeps these jobs from starting another copy if one is already running because it took longer than the interval before the next run).
I'm currently on 10 drives as mentioned and around 165TiB of raw space -- so while my usage is much more backup / Write-Once, Read-Many and not terribly stressful... I can vouch for BTRFS scaling okay -- and as mentioned, I prefer it to ZFS because it handles heterogeneous drive sets better... for example on my current setup with 9x 18TiB and 1x 20TiB:
If I replace one of the 18's with another 20 (or larger... it is a matter of what is available when I either start running out of space or have a drive start to die), it will automatically start using the extra space from the "big drives" once there's enough of them to get mirroring. And I don't have to do anything other than swap drives.
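The actual swap is nearly a one-liner, too -- a sketch assuming the outgoing 18 is /dev/sdc and the new 20 landed as /dev/sdg:

# Replace a member drive in place, while the filesystem stays mounted and in use
btrfs replace start /dev/sdc /dev/sdg /shares
btrfs replace status /shares
# Then let BTRFS use the new drive's full capacity
# (the devid here comes from 'btrfs filesystem show /shares')
btrfs filesystem resize 3:max /shares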
4) You'll want some sort of monitoring setup. I would recommend something like smartmontools (to monitor the drives), lm-sensors (to monitor temps and any other sensors on your setup that it can recognize) and something like logwatch (which I've been using for ages... so is probably thought of as "old" and "no one uses!" but it continues to work for me). The last one generates a summary concatenation of the various logs and emails the summary where you wish. Having *that* available daily to me even when traveling so I know if something's going wrong is nice.
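The Fedora-flavored version of that stack is quick to stand up (adjust device names and the mail destination to taste):

dnf install -y smartmontools lm_sensors logwatch
# smartd runs periodic SMART checks and can email you when a drive starts going sideways
systemctl enable --now smartd                # tune /etc/smartmontools/smartd.conf as needed
# Find and read the motherboard / CPU sensors
sensors-detect --auto
sensors
# One-off health check of a single drive
smartctl -H -A /dev/sda
# logwatch mails a daily digest of the logs; set MailTo in /etc/logwatch/conf/logwatch.conf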
Once I had the new system functionally equivalent to the old system, it was time to configure my OneFS (PowerScale? Whatever we call it this week) dogfood VM. How I do it is more reflective of the fact that I've been messing with this sort of thing for a while now -- handcrafting VMs and emulating hardware I don't have to write kernel drivers. If you're just starting out, Cockpit has a VM plugin (section?) which can help you set up a basic VM, virt-manager is a decent starting GUI if you've set up VNC or other remote graphical sessions, etc. I tend to just make an XML file and use virsh to run the VM -- that lets me craft the sections that talk to QEMU into exactly what I need.
I'm not going to drill down into a full dissertation on modern virtualization (that would be a whole set of articles in itself!), but on Linux these days -- the OS that hosts the VMs is Linux, and it is called the hypervisor. The Linux kernel has built-in support for acting as a hypervisor via a model termed (unsurprisingly) Kernel-based Virtual Machine (KVM). QEMU is an emulation system which knows how to talk to KVM and set up virtual machines. It is, like a lot of tools -- very powerful, but more than a little complex to set up. So most folks use a layer above it called "libvirt" (Virtualization Library... I know, obvious) which keeps track of machine details in XML, presents things nicely, has a virtual machine management shell (virsh), etc... and translates things into QEMU for you. And it allows you to craft the XML so you can have most of your VM defined "simply" but pass through QEMU commands for more complex operations.
For this VM, I wanted it to have as full control as possible of the SAS controller (so it would get all 8 drives in the internal bays), one of the NVMe's (OneFS is a journaling file system -- it is best run with a full-on non-volatile memory module, but those are really hard to get in my price range / class of hardware... so I get by with using a software journal on a dedicated NVMe. Doesn't help against all cases, but that's the tradeoff), the i225-V PCIe card (it also requires an internal and external network because it is designed to be a cluster file system... I just have a single node at the moment) and the SATA controller (for the 2 8TiB SSDs, which I wanted to experiment with using for caching).
To do this, you need to do two things -- detach them from use by the Hypervisor, and configure the XML to take control of them. For Fedora (or other Red Hat-ish distros), the "lspci" tool is your friend here... it gives you the PCI bus/address mapping for the detected PCI devices. You can then use "virsh nodedev-detach" to remove them from the Hypervisor, making them available for the Guest, and the Guest XML can claim them via a PCI definition.
For example, the I225-V card shows as 2 PCI devices when I run lspci:
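(The original capture of this got eaten by the formatting, so here's the shape of it -- the 06:00.0 / 07:00.0 addresses are the ones referenced below; the description strings are abridged/illustrative.)

06:00.0 Ethernet controller: Intel Corporation Ethernet Controller I225-V
07:00.0 Ethernet controller: Intel Corporation Ethernet Controller I225-V

Then both get detached from the hypervisor:

virsh nodedev-detach pci_0000_06_00_0
virsh nodedev-detach pci_0000_07_00_0

And the guest XML claims them with a pair of hostdev entries (treat the guest-side addresses as placeholders -- they just need to match what you want the guest to see):

<!-- guest-side addresses 0:0:3:0 and 0:0:13:0 (0x0d = 13 decimal) -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0d' function='0x0'/>
</hostdev>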
Which tells the guest to get the 06:00.0 and 07:00.0 devices from the host and map them to 0:0:3:0 and 0:0:13:0 respectively. And if we check on the Guest VM (once it is up, obviously -- and FreeBSD uses pciconf instead):
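(Again, the original output didn't survive -- this is the approximate shape of pciconf -l on the guest, with placeholder chip/revision IDs; the addresses are the ones pinned in the XML above.)

igc0@pci0:0:3:0:    class=0x020000 card=0x00008086 chip=0x15f38086 rev=0x03 hdr=0x00
igc1@pci0:0:13:0:   class=0x020000 card=0x00008086 chip=0x15f38086 rev=0x03 hdr=0x00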
The igc driver is the FreeBSD driver for this card, and instances 0 and 1 are where we expect. From the Guest OS's perspective, it simply has this hardware -- because the physical ranges are mapped into the Guest OS memory so it can access/control them. I find it neat... but I'm a kernel nerd, after all.
I won't (probably shouldn't) get into what all I had to do to get my slightly-customized dogfood OS up and running, sorry. The important thing here is that if you plan your hardware, you can use VMs to host things completely differently, experiment, etc. pretty easily. Especially these days as more and more m.2 NVMe is being used -- if those drives aren't hidden in some fashion (behind an enclosure or something), they'll show up as individual PCI devices, allowing you to allocate them individually to given VMs and customize your storage environment as you see fit.
Just a Sentinel starship - hoping that each jump will be the jump home...
130 Comments
SDF-7
on November 21, 2024 at 11:04 am
Dangnabbit — missed the Wikipedia RAID link getting mangled. Ah well, that’s on me. Thanks again to Tonio for helping whip this into shape — might as well say to anyone contemplating it that first time submittal at least (later ones where I have to figure out the block editor myself may be a different story!) was made fairly painless by the support of the staff. If you’re contemplating throwing content at the site, go for it!
And now y’all can rip my rambling to shreds…..
SDF-7
on November 21, 2024 at 11:19 am
I also forgot until it was much too late to call it out in the article, but one downside to the USB enclosure (at least on the motherboard I have, though I strongly suspect it is going to be true on most/all of them) — periodically, especially under heavier I/O load, the USB3 controller just loses its mind (my money is on running out of buffers or similar poor I/O structure management) and starts timing out. It isn’t a showstopper, because at least in Linux it ends up just resetting the controller and things are fine again for a while (like once a week or so typically). Other than kernel log spam and slow I/Os when it happens, fairly harmless. Your mileage may vary, of course.
Whoa. Um … I’m VERY interested in having a home setup like this, but my attention span is dizzy now.
SDF-7
on November 21, 2024 at 11:10 am
Told y’all I ramble… in 20/20 hindsight I probably should have used something other than Notepad++ to write this or figured out how to change the font in the Block Editor… it did end up more than a little “Wall O Text.” I gave Tonio the option to break it up into about 5 parts — but I suspect he didn’t want to inflict that on the commentariat.
R C Dean
on November 21, 2024 at 1:34 pm
Yeah, that font is not eyeball-friendly.
I have zero problems drafting in Word and dropping the whole thing into Block Editor. On an iMac, even! I am mystified by the struggles people have.
whiz
on November 21, 2024 at 1:49 pm
Ditto
Dr Mossy Lawn
on November 21, 2024 at 11:26 am
I used to build my own linux servers.. redhat, centos.. then went to using Apple Xserve, until Apple made it clear that business use of their systems was dead.
Instead of going back to bare metal, I bought into the Synology line. https://www.synology.com/en-us/products/DS923+
It runs linux, allows VM’s, Docker, email, web, backups (including Apple Time Machine)
They are capable, and well supported.
SDF-7
on November 21, 2024 at 11:31 am
I did include links to a couple of products in Paragraph 2…. I find the 8 bay m.2 one intriguing personally — though it probably is more than folks want to spend. I think you’d get a nice balance of speed and future proofing out of it. Their OS looks interesting as well. But yeah, Synology is certainly an option. I just get tired of being locked into hardware — hence still rolling my own.
Good news was since I was on the iPad reader mode put it in a proportional font!
SDF-7
on November 21, 2024 at 11:34 am
Well now I’m hoping for a kickback from Tim Cook or something! 😉
Ownbestenemy
on November 21, 2024 at 11:14 am
Ill be in my bunk
SDF-7
on November 21, 2024 at 11:14 am
Careful — look what happened to NotAdahn.
PieInTheSky
on November 21, 2024 at 11:24 am
Ok good one
Ownbestenemy
on November 21, 2024 at 11:27 am
I just got done applying patches and had to go through every machine to swap out BIOS batteries. I made sure to edge my way through this article
SDF-7
on November 21, 2024 at 11:39 am
Hopefully those BIOS batteries were the plug-in adapter type in server racks so they were easy to just pull out and replace instead of fiddly cr2032s or whatnot.
Ownbestenemy
on November 21, 2024 at 11:42 am
You do remember I work for the government…only saving grace it is a redundant system so I can take down a quarter of it and not impact a damn thing. At least the rack mounted were easy enough. Logically disconnect them, init 5 em, swap batteries, bring back and ensure date was good and move on. The ones in place at AT positions…are/were the pain.
The Late P Brooks
on November 21, 2024 at 11:21 am
This is not my area of expertise. I am impressed.
Tundra
on November 21, 2024 at 11:33 am
Ditto. I’m pretty comfortable with machinery, but this might as well be sorcery.
Thanks, Mr. 7!
SDF-7
on November 21, 2024 at 11:38 am
I suppose to quote ST:TWoK as I often do… “We learn by doing”. It really isn’t all that much or all that arcane other than the QEMU passthrough / emulation stuff for the VM. But you are quite welcome and thank y’all for reading.
Fourscore
on November 21, 2024 at 1:41 pm
Like Sgt Schultz, “I know nothing”
I can hardly send an Email and I’m not too good at that either.
Sean
on November 21, 2024 at 11:30 am
I surf the web on a 7 year old Fire HD 10 tablet and I don’t recall the last time I turned on my Windows 7 machine.
>.>
Ownbestenemy
on November 21, 2024 at 11:33 am
I think ‘surf the web’ needs to be deprecated into something more modern. “Waded through a pile of shit” conveys what most of the front-facing modern web is nowadays.
Husband is still on Windows 7 and he loves it but programs are starting to withdraw support, most notably Adobe because Adobe is asshoe. He’s white-knuckling it at this point. Cold, dead hands, etc.
We have a desktop on Win 7, on which husband also built a Hackintosh. That desktop hasn’t been used in years.
I’m on Win 10 and I’m terrified of 11, although there are now workarounds that maybe I can get back what I’m used to. Anyway, I had to get Microsoft 365 for its Outlook functionality (for my transcription job) and it changed my Office standalone installation somewhat. It did ask me if I wanted to upgrade. I said no. It changed some things anyway. I had to reprogram all my hotkey macros, for one. Secondly, the interface looks different. I DIDN’T NEED THAT. LEAVE MY FUCKING STUFF ALONE!!! Why do you “update” things when you’re just tweaking UI? Does your program have no other improvements to make that the only thing you can change is the UI?!
Anyway. Once upon a time, I updated my WordPress. Broke EVERYTHING. Plugins I relied on didn’t work anymore. So, because I have lots of very old utilities I use, I’m terrified of them breaking. I don’t need support. I just need them not to break with an update.
B.P.
on November 21, 2024 at 12:00 pm
But, but… Think of all the jobs at software companies that involve pushing out upgrades that reconfigure a perfectly workable layout into something less intuitive!
kinnath
on November 21, 2024 at 11:32 am
I am not going to read this today. I am bogged down at work.
But I am glad it is here, and I intend to come back to it.
So, thanks for the article.
SDF-7
on November 21, 2024 at 11:36 am
You’re welcome and thank you for the kind interest. I really wasn’t sure if anyone would be interested in my ramblings beyond a barbaric yawp “Meh.”
EvilSheldon
on November 21, 2024 at 12:26 pm
Whitman? I’m impressed!
SDF-7
on November 21, 2024 at 12:41 pm
Don’t be. 😉 I’m of the age that probably everyone in my peer group watched Dead Poet’s Society at least once. You can’t come out of that without that bit of Whitman stuck in your head.
(I have read more over the years… but honestly, that’s likely why that particular bit is easy to access — embarrassing as it is…)
Part of the reason I run prebuilt solutions is the validation.
When you build one of these, how much testing do you do before putting it into service? For example I’d like to fill it up to at least 1/3rd full and simulate some drive and/or controller failures and rebuild and recover.
That can take days I’d rather spend doing something more fun. Like watching paint dry…
SDF-7
on November 21, 2024 at 11:45 am
I tend to run the new one in parallel with the old during data migration and setup — which can take a couple of weeks depending on how I do it (in this case since I migrated the data to the USB enclosure setup first [and I guess from now on assuming I can stick with that], it was much shorter… but it did let me stress the drives in the USB enclosure while just copying over the data from the “normal”/old drives).
Being only used for personal data and limited/personal stuff — if it suddenly crapped out on me:
1) I probably have compatible parts lying around or at least the prior model I can bring back in a pinch while I work out next steps
2) I don’t need 5 9’s of uptime or anything. If I lose a day of being able to backup, it won’t kill me.
Yeah, I’m not fully enterprise level QA, sorry. And I do get what you’re saying — that’s part of why I went with the Lenovo server at one point. But non-rack servers are getting more rare all the time (and as mentioned, I don’t have the cabling, power or spare room for a rack server room… and they’re too loud to have under my desk like my current one is) and being locked into proprietary pieces (power supply, the drive trays, various other things) just annoyed me too much… I wanted the freedom to throw parts together at will.
But this is Glibs! There is no One True Way! 😉 Just sharing my thoughts.
Hell, just moving from AMD AM4 to AM5 and DDR5 is pissing me off.
I wasted at least 8 hours trying to figure out how to run my rated 6000 memory modules at nothing more than 5200.
That’s the max stock speed. Pay no mind the modules are XOC rated and the AMD written bios supports XOC and the spec is AMD created.
I’d just like to use my PC and not reboot to bios recovery for the 10th time. Hence my preference for validated solutions. I’ll suck it up on the PC side since I’ve been doing this since roughly 1980 with a TRS-80 Model 1.
SDF-7
on November 21, 2024 at 12:08 pm
Hmm… wish I had some advice for you on that. When I did the same, I just made sure to use only modules explicitly listed on the memory compatibility because even on AM4 I had DOCP issues the first time I tried without worrying about it. Annoying that I can’t find any supported config more than 64GB, but there it is. “Good luck — (we’re all counting on you!)”, I guess.
I got it to run finally. DOCP was flaky but XOC II will work. Partly my fault for not realizing how long fucking “memory training” takes on the new platform.
Also PBO generates crazy power use and temps for under 5% improvement all core and damn near nothing under regular use. Unlike AM4 where there was better improvement for the costs. I’ve been much happier with it off and as a result memory issues went away as well.
I’m pretty strict on stability so lots of memory testing for many hours. I’ll do this for my PCs, but I draw the line at my NAS…
SDF-7
on November 21, 2024 at 12:24 pm
Oh come on — doesn’t everyone love waiting 5 minutes before getting BIOS recognition crossing your fingers that it is training and not stuck in a no-post loop? 😉
(In other words… I feel ya, Sensei… really glad once it got trained and think long and hard before BIOS updates because it will do that again…)
I’ve not had good experiences trying overclocking on AM4 or AM5 really. Probably don’t throw enough cooling at it… but very much am in the “It seems fast enough — I’d rather stick with it being stable” camp as well. I don’t remember trying *anything* exotic on the NAS build for that matter (other than enabling VT-i / VT-d) for just that reason. No reason to try to push the envelope there.
R C Dean
on November 21, 2024 at 1:47 pm
Eyelids . . . getting . . . heavy . . . .
It’s probably just karmic payback for my OCD ramblings on obscure legal shit, though.
The Late P Brooks
on November 21, 2024 at 11:42 am
I suppose to quote ST:TWoK as I often do… “We learn by doing”. It really isn’t all that much or all that arcane other than the QEMU passthrough / emulation stuff for the VM. But you are quite welcome and thank y’all for reading.
And maybe Ron Desantis as AG after he appoints Gaetz?
Certified Public Asshat
on November 21, 2024 at 11:57 am
Gaetz pulls himself
Keeping his hands to himself.
Ownbestenemy
on November 21, 2024 at 11:58 am
Hmm. Canary in the coal mine candidate? To probe the amount of ‘resistance’ that will be incurred?
The Other Kevin
on November 21, 2024 at 12:01 pm
I really prefer the AG from Missouri, or maybe from Texas. Someone who has recently argued in front of the SC against the deep state. Hopefully we get one of those.
Post a couple who are pretty unlikely to get approved – let him pull it (he’s covering his own ass anyways) – and then get some leverage on more important billets.
Yeah, we got us some good ones. Our outgoing governor was awesome and he was the Lt governor for Eric Greitens, who got pushed out. The one time a Lt governor pick was crucial, and Greitens came through. FWIW, I liked Greitens too.
Missouri as a state was not on board with Prohibition, but IIRC, fedgov threatened military action if they didn’t sign. That was part of what made KC’s flouting of it possible. Other places in MO did too (StL to a much lesser extent), but because of that, KC survived the Depression better than most.
Sean
on November 21, 2024 at 12:01 pm
I’m disappointed.
Not Adahn
on November 21, 2024 at 12:31 pm
Eh, did we really need a Butt-Head lookalike in government?
JaimeRoberto (carnitas/spicy salsa)
on November 21, 2024 at 1:26 pm
I thought the problem was that he had a lot of others pulling it for him.
Ha! I was just typing out that my XX wanted a Subaru Outback (although she says she’s not a lesbian because I asked), but she didn’t know why. She got a Hyundai Sonata instead. We like those around Chez Mojeaux.
kinnath
on November 21, 2024 at 12:31 pm
My wife and I were taking a Permit to Carry class when the instructor made a snide comment about Birkenstock-wearing, Subaru-driving liberals. My wife raised her hand and said she drove a Subaru and owned many pairs of Birkenstocks. The instructor was caught off guard and had to backpedal. It was a fun moment.
EvilSheldon
on November 21, 2024 at 12:37 pm
It’s amazing to me how few instructors (in a sensitive subject matter, no less) can learn to keep their fucking cakeholes shut about politics.
Had to break down and buy Birks for my plantar fasciitis. It was my last resort because Birks are too ugly to be that expensive. That was at least 10 years ago and I haven't worn anything BUT Birks since. I still think they're butt ugly, though.
Tundra
on November 21, 2024 at 12:46 pm
My (straight) daughter is on her second Outback. She knew nothing of the connotation. Further, she didn’t believe me that the lesbian thing was actually embraced by the marketing people at Subaru.
Great cars, whichever team you play for.
EvilSheldon
on November 21, 2024 at 12:51 pm
I’ve had a few Subarus. Great cars built around horrible powerplants.
The Other Kevin
on November 21, 2024 at 12:58 pm
Mrs. TOK has been wearing Birks since we got engaged. She’s had foot problems for years and expensive shoes are the only thing that really helps.
kinnath
on November 21, 2024 at 12:59 pm
horrible powerplants.
Really? I always thought the 4 cylinder boxer worked pretty well for the size car they supported.
SDF-7
on November 21, 2024 at 1:01 pm
the 4 cylinder boxer
Sounds like a real dog of an engine.
Tundra
on November 21, 2024 at 1:02 pm
There were a few years where they were notorious for head gaskets. But for the most part I think the motors are solid.
Only took Subaru a decade or so to find a head gasket that holds.
That said it looks like that seems fixed.
Dr Mossy Lawn
on November 21, 2024 at 1:03 pm
I had a 1999 Subaru outback (6 cyl).. it was a rattletrap after 100K miles. Also a minimum amount of light towing required me to replace the torque converter (Class 1 hitch). I didn’t buy a second one.
Not Adahn
on November 21, 2024 at 1:16 pm
I’ve only got 110k on mine (WRX/STI), but the engine is fine. Tires are a fucking money pit though.
Thanks for the article; looks interesting, but a full perusal will wait until after work – looks like it requires some focus and these bastards at work seem to expect focus on what I’m being paid for – of course, no one, including myself, seems to really know that that is…
fast nvme storage with enough slots to have a decent amount of space
I’ve built many a hardware RAID but not terribly recently. For any decent size array, cost was prohibitive and, at least in earlier iterations, MTTF was sufficiently shitty, much like horoscopes, that ongoing maintenance cost was higher than I’d like as well. For workstations, I now build with solid state only for system drives and use traditional platters for data storage. At home, I might be more willing to experiment with solid state storage since nothing would be terribly critical, but I also don’t really need speed so much (and I’m not sure speed on the drive would be the bottleneck anyway?). Does the quality and durability (and cost) of solid state now make them competitive with platter drives? I’ll note that my system nvme’s seem to be pretty robust over the last couple of years; don’t have anything 15 years old still running like for platters (selection bias), but they seem better than what they were.
SDF-7
on November 21, 2024 at 12:19 pm
Yeah — in my experience m.2 drives in the commercial range are at least 4-5 years lifetime, a NAS type setup where you have more of a Write Once, Read Many should be good for keeping them alive longer as well. If you can find/afford enterprise m.2 drives, I suspect it’d be even better — but I wouldn’t bother.
One thing about m.2’s, they’re under pretty active development and the cost curve relative to capacity is going down over time, so by the time drives start to have problems with their flash – you’re probably well off getting something a lot bigger for the price.
And yeah — from what I’ve read, the network speed will dominate your bandwidth anyway, so the m.2 speed shouldn’t be the limiter.
First step is admitting you have a problem. Thus begins the largest mental health crisis in American history.
EvilSheldon
on November 21, 2024 at 12:49 pm
Poor Bluesky. Less than a week from the ‘platform of hope’, to ‘vile, racist, and evil.’
Still, this is an important lesson for them – no matter how far you bend the knee, no matter how completely you abase yourself, the Progressives will never, ever be satisfied.
Of note: The out-of-place eccentric is accepted by and feels at home with the low-born, declasse crowd.
Nephilium
on November 21, 2024 at 2:09 pm
It’s a much easier social situation to get along in, that’s for sure.
Allen
on November 21, 2024 at 1:24 pm
Super interesting! My home servers are running on Raspberry Pis with Kali Linux as the base– I’m a security person and am constantly poking and prodding things. This setup sounds fascinating and if I had more capable server hardware, I’d like to lean into significant virtualization within my environment.
Read through this quickly, will have to go back through and see what I can learn from it. Thanks for putting this together SDF-7!
I had an electronics tech come in and give me a hand with one of my robots what was being a jerk. He was really helpful, so I submitted him to get an award ($50). It was denied, because he had already received too many awards.
I can see how it would be possible to abuse an award system but at the same time… hopefully he gets some kind of notice so he can put “maxed out the possible awards I could get” on his year-end review.
I should have expected that — but for some reason was expecting this instead.
Do you have a contact for his immediate supervisor/manager, Not Adahn? A discreet email noting that you’d award him if you could and appreciated him might ensure it gets carried into his review — and at the least, I would expect an “internal customer” good review would, as it were.
Of course — you doubtless already know that and I’m preaching to the choir again — but hey, you brought it up!
Not Adahn
on November 21, 2024 at 2:16 pm
It’s bolted into place to keep it out of mischief.
I’d say buy that man a drink but it would cut against his quest. I wish him the best of luck, but I doubt this suit will accomplish what he hopes.
The plaintiff is asking for declaratory and injunctive relief to overturn the current federal ban on distilling alcoholic beverages at home for personal and family consumption. Mr. Ream wants to produce rye and bourbon and has no plans to sell or offer it to the public. He believes that this prohibition exceeds the powers of Congress under both Article I and the 10th Amendment.
Nephilium
on November 21, 2024 at 1:57 pm
There was a ruling about this a while back (at least at the Federal Court level). Several states have removed (or never had) bans on home distillation.
Unless things have changed the only “Western” country that allows home distillation is New Zealand.
SDF-7
on November 21, 2024 at 2:00 pm
The complaint in John Ream correctly notes, “If Congress can prohibit home distilling, it can prohibit home bread baking, sewing, vegetable gardening, and practically anything else.” How did Congress get the authority to regulate virtually anything under the auspices of commerce?
He’s not wrong — but Chief Justice Penaltax seemed close to “The government can make you eat broccoli” so who knows what’s beyond the purview of the FedGov these days?
Plus — whiskey rebellions don’t have a good track record of success.
R C Dean
on November 21, 2024 at 2:08 pm
“How did Congress get the authority to regulate virtually anything under the auspices of commerce?”
By threatening to arrest, imprison, and if necessary, shoot anyone who objected?
Dangnabbit — missed the Wikipedia RAID link getting mangled. Ah well, that’s on me. Thanks again to Tonio for helping whip this into shape — might as well say to anyone contemplating it that first time submittal at least (later ones where I have to figure out the block editor myself may be a different story!) was made fairly painless by the support of the staff. If you’re contemplating throwing content at the site, go for it!
And now y’all can rip my rambling to shreds…..
I also forgot until it was much to late to call it out in the article, but one down side to the USB enclosure (at least on the motherboard I have, though I strongly suspect it is going to be true on most/all of them) — periodically, especially under heavier I/O load the USB3 controller just loses its mind (my money is on running out of buffers or similar poor I/O structure management) and starts timing out. It isn’t a showstopper, because at least in Linux it ends up just resetting the controller and things are fine again for a while (like once a week or so typically). Other than kernel log spam and slow I/Os when it happens, fairly harmless. Your mileage may vary, of course.
Whoa. Um … I’m VERY interested in having a home setup like this, but my attention span is dizzy now.
Told y’all I ramble… in 20/20 hindsight I probably should have used something other than Notepad++ to write this or figured out how to change the font in the Block Editor… it did end up more than a little “Wall O Text.” I gave Tonio the option to break it up into about 5 parts — but I suspect he didn’t want to inflict that on the commentariat.
Yeah, that font is not eyeball-friendly.
I have zero problems drafting in Word and dropping the whole thing into Block Editor. On an iMac, even! I am mystified by the struggles people have.
Ditto
I used to build my own linux servers.. redhat, centos.. then went to using Apple Xserve, until Apple made it clear that business use of their systems was dead.
Instead of going back to bare metal, I bought into the Synology line. https://www.synology.com/en-us/products/DS923+
It runs linux, allows VM’s, Docker, email, web, backups (including Apple Time Machine)
They are capable, and well supported.
I did include links to a couple of products in Paragraph 2…. I find the 8 bay m.2 one intriguing personally — though it probably is more than folks want to spend. I think you’d get a nice balance of speed and future proofing out of it. Their OS looks interesting as well. But yeah, Synology is certainly an option. I just get tired of being locked into hardware — hence still rolling my own.
My preferred appliance as well.
I don’t need lots of performance or memory so I don’t have to pay for high power hardware versions.
I’ve been happy with my Synology NAS as well, added benefit of Plex having a server mode built for it so I can just install it on the NAS.
I, too, threw money at the problem and just got a Diskstation.
I’ve been quite happy with it.
I’ve got thousands of Linux boxes at work to worry about, I’m disinclined to want to sysadmin my home stuff.
Nowhere in the article did I claim sanity…..
I think I’ve gone blind.
Look, NA — what you and your left hand have been up to isn’t my fault… 😉
Too much home serving?
*applause*
I hate the font and I am not sure I can read the whole thing. And while I get the libertarian appeal, I have to say NEEEEEERDDDDD
I’m a unix/linux kernel engineer. You aren’t telling me anything I don’t know, Pie.
I will truly try to figure out a better font format if I come up with another article.
hows the money in that sort of work?
https://www.glassdoor.com/Salaries/kernel-engineer-software-engineer-salary-SRCH_KO0,33.htm
^^^ Look at this low-key flex
Good news was since I was on the iPad reader mode put it in a proportional font!
Well now I’m hoping for a kickback from Tim Cook or something! 😉
Ill be in my bunk
Careful — look what happened to NotAdahn.
Ok good one
I just got done applying patches and had to go through every machine to swap out BIOS batteries. I made sure to edge my way through this article
Hopefully those BIOS batteries were the plug in adapted type in server racks so they were easy to just pull out and replace instead of fiddly cr2032s or whatnot.
You do remember I work for the government…only saving grace it is a redundant system so I can take down a quarter of it and not impact a damn thing. At least the rack mounted were easy enough. Logically disconnect them, init 5 em, swap batteries, bring back and ensure date was good and move on. The ones in place at AT positions…are/were the pain.
This is not my area of expertise. I am impressed.
Ditto. I’m pretty comfortable with machinery, but this might as well be sorcery.
Thanks, Mr. 7!
I suppose to quote ST:TWoK as I often do… “We learn by doing”. It really isn’t all that much or all that arcane other than the QEMU passthrough / emulation stuff for the VM. But you are quite welcome and thank y’all for reading.
Like Sgt Schultz, “I know nothing”
I can hardly send an Email and I’m not too good at that either.
I surf the web on a 7 year old Fire HD 10 tablet and I don’t recall the last time I turned on my Windows 7 machine.
>.>
I think ‘surf the web’ needs to be depreciated into something more modern. “Waded through a pile of shit” conveys what most of front facing modern web is nowadays.
Husband is still on Windows 7 and he loves it but programs are starting to withdraw support, most notably Adobe because Adobe is asshoe. He’s white-knuckling it at this point. Cold, dead hands, etc.
We have a desktop on Win 7, on which husband also built a Hackintosh. That desktop hasn’t been used in years.
I’m on Win 10 and I’m terrified of 11, although there are now workarounds that maybe I can get back what I’m used to. Anyway, I had to get Microsoft 365 for its Outlook functionality (for my transcription job) and it changed my Office standalone installation somewhat. It did ask me if I wanted to upgrade. I said no. It changed some things anyway. I had to reprogram all my hotkey macros, for one. Secondly, the interface looks different. I DIDN’T NEED THAT. LEAVE MY FUCKING STUFF ALONE!!! Why do you “update” things when you’re just tweaking UI? Does your program have no other improvements to make that the only thing you can change is the UI?!
Anyway. Once upin a time, I updated my WordPress. Broke EVERYTHING. Plugins I relied on didn’t work anymore. So, because I have lots of very old utilities I use, I’m terrified of them breaking. I don’t need support. I just need them not to break with an update.
But, but… Think of all the jobs at software companies that involve pushing out upgrades that reconfigure a perfectly workable layout into something less intuitive!
I am not going to read this today. I am bogged down at work.
But I am glad it is here, and I intend to come back to it.
So, thanks for the article.
You’re welcome and thank you for the kind interest. I really wasn’t sure if anyone would be interested in my ramblings beyond a barbaric
yawp“Meh.”Whitman? I’m impressed!
Don’t be. 😉 I’m of the age that probably everyone in my peer group watched Dead Poet’s Society at least once. You can’t come out of that without that bit of Whitman stuck in your head.
(I have read more over the years… but honestly, that’s likely why that particular bit is easy to access — embarrassing as it is…)
That movie is required autumn viewing.
Part of the reason I run prebuilt solutions is the validation.
When you build one of these how much testing to you do before putting it into service? For example I’d like to fill it up to at least 1/3rd full and simulate some drive and/or controller failures and rebuild and recover.
That can take days I’d rather spend doing something more fun. Like watching paint dry…
I tend to run the new one in parallel with the old during data migration and setup — which can take a couple of weeks depending on how I do it (in this case since I migrated the data to the USB enclosure setup first [and I guess from now on assuming I can stick with that], it was much shorter… but it did let me stress the drives in the USB enclosure while just copying over the data from the “normal”/old drives).
Being only used for personal data and limited/personal stuff — if it suddenly crapped out on me:
1) I probably have compatible parts lying around or at least the prior model I can bring back in a pinch while I work out next steps
2) I don’t need 5 9’s of uptime or anything. If I lose a day of being able to backup, it won’t kill me.
Yeah, I’m not fully enterprise level QA, sorry. And I do get what you’re saying — that’s part of why I went with the Lenovo server at one point. But non-rack servers are getting more rare all the time (and as mentioned, I don’t have the cabling, power or spare room for a rack server room… and they’re too loud to have under my desk like my current one is) and being locked into proprietary pieces (power supply, the drive trays, various other things) just annoyed me too much… I wanted the freedom to throw parts together at will.
But this is Glibs! There is no One True Way! 😉 Just sharing my thoughts.
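(For anyone curious, here’s a rough Python sketch of the copy-and-verify pass I mean when running old and new side by side. The paths and the two-pass rsync approach are purely illustrative, not the exact commands from this build.)

    # Sketch of "run the new box alongside the old one": one bulk copy pass,
    # then a checksum-only pass to confirm the new enclosure really holds
    # what the old drives hold. Both paths below are placeholders.
    import subprocess

    OLD = "/srv/old/"            # existing array (hypothetical mount point)
    NEW = "/mnt/new_enclosure/"  # new USB-enclosure filesystem (hypothetical)

    # Pass 1: bulk copy, preserving permissions/ownership/hardlinks.
    subprocess.run(["rsync", "-aH", "--progress", OLD, NEW], check=True)

    # Pass 2: checksum comparison only; anything itemized here differs and
    # needs a second look before the old drives get retired.
    diff = subprocess.run(
        ["rsync", "-aHc", "--dry-run", "--itemize-changes", OLD, NEW],
        capture_output=True, text=True, check=True)
    print(diff.stdout or "old and new trees match (by checksum)")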
Hell, just moving from AMD AM4 to AM5 and DDR5 is pissing me off.
I wasted at least 8 hours trying to figure out how to run my rated 6000 memory modules at nothing more than 5200.
That’s the max stock speed. Pay no mind that the modules are XOC rated, the AMD-written BIOS supports XOC, and the spec is AMD created.
I’d just like to use my PC and not reboot to bios recovery for the 10th time. Hence my preference for validated solutions. I’ll suck it up on the PC side since I’ve been doing this since roughly 1980 with a TRS-80 Model 1.
Hmm… wish I had some advice for you on that. When I did the same, I just made sure to use only modules explicitly listed on the memory compatibility list, because even on AM4 I had DOCP issues the first time I tried without worrying about it. Annoying that I can’t find any supported config more than 64GB, but there it is. “Good luck — (we’re all counting on you!)”, I guess.
I got it to run finally. DOCP was flaky but XOC II will work. Partly my fault for not realizing how long fucking “memory training” takes on the new platform.
Also PBO generates crazy power use and temps for under 5% improvement all core and damn near nothing under regular use. Unlike AM4 where there was better improvement for the costs. I’ve been much happier with it off and as a result memory issues went away as well.
I’m pretty strict on stability so lots of memory testing for many hours. I’ll do this for my PCs, but I draw the line at my NAS…
Oh come on — doesn’t everyone love waiting 5 minutes before getting BIOS recognition crossing your fingers that it is training and not stuck in a no-post loop? 😉
(In other words… I feel ya, Sensei… really glad once it got trained and think long and hard before BIOS updates because it will do that again…)
I’ve not had good experiences trying overclocking on AM4 or AM5 really. Probably don’t throw enough cooling at it… but very much am in the “It seems fast enough — I’d rather stick with it being stable” camp as well. I don’t remember trying *anything* exotic on the NAS build for that matter (other than enabling VT-x / VT-d) for just that reason. No reason to try to push the envelope there.
Eyelids . . . getting . . . heavy . . . .
It’s probably just karmic payback for my OCD ramblings on obscure legal shit, though.
I suppose to quote ST:TWoK as I often do… “We learn by doing”. It really isn’t all that much or all that arcane other than the QEMU passthrough / emulation stuff for the VM. But you are quite welcome and thank y’all for reading.
Yeah, that’s not how my brain works.
O/T: Gaetz pulls himself.
https://x.com/mattgaetz/status/1859649045553402285
Someone told OMB what he did?
This makes no sense. Unless getting him into the senate via Rubio’s seat was part of the plan.
Too 4D chess for my beat up brain.
Thus smoothing RFK Jr’s confirmation.
I was just thinking that too.
And maybe Ron Desantis as AG after he appoints Gaetz?
Gaetz pulls himself
Keeping his hands to himself.
Hmm. Canary-in-the-coal-mine candidate? To probe how much ‘resistance’ will be incurred?
I really prefer the AG from Missouri, or maybe from Texas. Someone who has recently argued in front of the SC against the deep state. Hopefully we get one of those.
Post a couple who are pretty unlikely to get approved – let him pull it (he’s covering his own ass anyways) – and then get some leverage on more important billets.
Yeah, we got us some good ones. Our outgoing governor was awesome and he was the Lt governor for Eric Greitens, who got pushed out. The one time a Lt governor pick was crucial, and Greitens came through. FWIW, I liked Greitens too.
https://x.com/AGAndrewBailey/status/1859656364882264446
From Missouri you say?
Missouri as a state was not on board with Prohibition, but IIRC, fedgov threatened military action if they didn’t sign. That was part of what made KC’s flouting of it possible. Other places in MO did too (StL to a much lesser extent), but because of that, KC survived the Depression better than most.
I’m disappointed.
Eh, did we really need a Butt-Head lookalike in government?
I thought the problem was that he had a lot of others pulling it for him.
Back into my comfort zone:
https://bringatrailer.com/listing/1984-audi-4000s-quattro-3/
Would.
Do you think there are any cars from the past 10 years people will eventually collect?
I seriously doubt it. At least normal cars. Too full of electronics.
Possibly some of the Porsches?
Lexus GX 460. But that’s for a niche offroading/overlander crowd.
I expect the Land Rover Defender 110, that group has a long history of supporting the platform. They even let us range rover types tag along.
I could see the higher end muscle cars (Challenger, Mustang) become collectible, especially the last model years before they go electric.
Mk7.5 GTIs
/biased
🙂
British Icons | Assess and Caress with Donald Osborne and Jay Leno | Jay Leno’s Garage
No androgynous people wearing bright weird clothes. And 3 cars I’d own if I had the funds.
That Jag is just so damn sexy.
I had an ’85 5000 imported from Canada.
Did it accelerate unintentionally?
I’m still angry about that bullshit.
Nope. It was roomy, turbo, 5 speed, and fun to drive.
That differential lock control panel is giving me a chubby…
No kidding. The whole instrument cluster is analog pr0n.
So Jaguar goes full retard and destroys their brand. Let’s check in on Volvo:
https://x.com/HuinGuillaume/status/1859472963323510995
Bravo, Volvo.
pretty fucking amazing ad.
And back to their roots. Pro-safety, pro-family.
A very smart move.
I suppose Subaru can counter with getting back to their roots with power lesbians.
made me want to go make babies.
Ha! I was just typing out that my XX wanted a Subaru Outback (although she says she’s not a lesbian because I asked), but she didn’t know why. She got a Hyundai Sonata instead. We like those around Chez Mojeaux.
My wife and I were taking a Permit to Carry class when the instructor made a snide comment about Birkenstock-wearing, Subaru-driving liberals. My wife raised her hand and said she drove a Subaru and owned many pairs of Birkenstocks. The instructor was caught off guard and had to backpedal. It was a fun moment.
It’s amazing to me how few instructors (in a sensitive subject matter, no less) can learn to keep their fucking cakeholes shut about politics.
Had to break down and buy Birks for my plantar fasciitis. It was my last resort because Birks are too ugly to be that expensive. That was at least 10 years ago and I haven’t worn anything BUT Birks since. I still think they’re butt ugly, though.
My (straight) daughter is on her second Outback. She knew nothing of the connotation. Further, she didn’t believe me that the lesbian thing was actually embraced by the marketing people at Subaru.
Great cars, whichever team you play for.
I’ve had a few Subarus. Great cars built around horrible powerplants.
Mrs. TOK has been wearing Birks since we got engaged. She’s had foot problems for years and expensive shoes are the only thing that really helps.
horrible powerplants.
Really? I always thought the 4 cylinder boxer worked pretty well for the size car they supported.
Sounds like a real dog of an engine.
There were a few years where they were notorious for head gaskets. But for the most part I think the motors are solid.
Only took Subaru a decade or so to find a head gasket that holds.
That said it looks like that seems fixed.
I had a 1999 Subaru outback (6 cyl).. it was a rattletrap after 100K miles. Also a minimum amount of light towing required me to replace the torque converter (Class 1 hitch). I didn’t buy a second one.
I’ve only got 110k on mine (WRX/STI), but the engine is fine. Tires are a fucking money pit though.
Sounds like a real dog of an engine.
But it packs quite a punch.
Well done and what a contrast.
My attention span isn’t that long for an ad.
Uh, I missed the part where I see a car.
Probably why Marketing was my worst class in business school – it just seemed like elaborate bullshit to me.
It’s the car that didn’t run over the wife.
Thanks for the article; looks interesting, but a full perusal will wait until after work – looks like it requires some focus and these bastards at work seem to expect focus on what I’m being paid for – of course, no one, including myself, seems to really know that that is…
fast nvme storage with enough slots to have a decent amount of space
I’ve built many a hardware RAID, but not terribly recently. For any decent size array, cost was prohibitive and, at least in earlier iterations, MTTF was sufficiently shitty, much like horoscopes, that ongoing maintenance cost was higher than I’d like as well. For workstations, I now build with solid state only for system drives and use traditional platters for data storage. At home, I might be more willing to experiment with solid state storage since nothing would be terribly critical, but I also don’t really need speed so much (and I’m not sure speed on the drive would be the bottleneck anyway?). Does the quality and durability (and cost) of solid state now make them competitive with platter drives? I’ll note that my system nvme’s seem to be pretty robust over the last couple of years; don’t have anything 15 years old still running like for platters (selection bias), but seem better than what they were.
Yeah — in my experience m.2 drives in the commercial range are at least 4-5 years lifetime, a NAS type setup where you have more of a Write Once, Read Many should be good for keeping them alive longer as well. If you can find/afford enterprise m.2 drives, I suspect it’d be even better — but I wouldn’t bother.
One thing about m.2’s, they’re under pretty active development and the cost curve relative to capacity is going down over time, so by the time drives start to have problems with their flash – you’re probably well off getting something a lot bigger for the price.
And yeah — from what I’ve read, the network speed will dominate your bandwidth anyway, so the m.2 speed shouldn’t be the limiter.
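(If it helps make that concrete, the back-of-envelope arithmetic looks roughly like this. These are ballpark sequential figures I’m assuming, not benchmarks of any particular hardware.)

    # Rough "which side is the bottleneck?" arithmetic. Ballpark numbers only.
    link_MBps  = {"1 GbE": 125, "2.5 GbE": 312, "10 GbE": 1250}
    drive_MBps = {"spinning HDD": 200, "SATA SSD": 550,
                  "NVMe (PCIe Gen3 x1)": 900, "NVMe (PCIe Gen3 x4)": 3500}

    for link, net in link_MBps.items():
        for drive, disk in drive_MBps.items():
            limiter = "network" if net < disk else "drive"
            print(f"{link:>8} + {drive:<20} -> limited by the {limiter}")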
I run m2 on my rig with a platter for backup and 2nd tier storage. I’ve had good luck on m2, but I’m not crazy write intensive.
One thing I recently read, but haven’t been able to confirm, is that an m2 left unpowered for a few years can lose data. Can anybody confirm?
https://www.pcworld.com/article/427435/death-and-the-unplugged-ssd-how-much-you-really-need-to-worry-about-ssd-reliability.html says yes — from what I’ve found searching around, commercial grade drives should be spec’d to retain data unpowered for at least a year regardless, and drives that haven’t hit their endurance limits (worn drives have had to fall back on their “spare” cells, which from what I can tell can confuse the controller when it finally gets power again) are more like 10 years.
So not going to ride out the apocalypse, no…. periodic flushes to spinning rust for critical data/backups is probably a good idea. Yay.
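(A hedged sketch of what I mean by that periodic flush, in case it’s useful. The paths are hypothetical and this isn’t what I actually run; point it at whatever counts as critical and schedule it from cron or a systemd timer.)

    # Mirror the critical directories from the SSD pool onto a spinning disk
    # so the data isn't depending on unpowered flash retention.
    import subprocess

    CRITICAL = ["/srv/backups/", "/srv/photos/"]   # hypothetical source dirs
    RUST = "/mnt/spinning_rust/"                   # hypothetical HDD mount point

    for src in CRITICAL:
        dest = RUST + src.strip("/").replace("/", "_") + "/"
        # -aH preserves metadata/hardlinks; --delete keeps the mirror exact.
        subprocess.run(["rsync", "-aH", "--delete", src, dest], check=True)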
Rob Reiner chooses inpatient treatment for his TDS:
https://pjmedia.com/matt-margolis/2024/11/21/rob-reiner-reports-hes-checking-into-the-cuckoos-nest-n4934504
First step is admitting you have a problem. Thus begins the largest mental health crisis in American history.
Poor Bluesky. Less than a week from the ‘platform of hope’, to ‘vile, racist, and evil.’
Still, this is an important lesson for them – no matter how far you bend the knee, no matter how completely you abase yourself, the Progressives will never, ever be satisfied.
Bud Light REALLY learned its lesson.
“You a ranch girl?”
LOL!
ok, now we’re talking
Of note: The out-of-place eccentric is accepted by and feels at home with the low-born, declasse crowd.
It’s a much easier social situation to get along in, that’s for sure.
Super interesting! My home servers are running on Raspberry Pis with Kali Linux as the base– I’m a security person and am constantly poking and prodding things. This setup sounds fascinating and if I had more capable server hardware, I’d like to lean into significant virtualization within my environment.
Read through this quickly, will have to go back through and see what I can learn from it. Thanks for putting this together SDF-7!
For the writers here:
https://x.com/DylanoA4/status/1859273800027603281
Everyone approaches it different.
I am so sorry she finds it painful.
Not any more. She’s dead.
That didn’t save the intern in the morning links…
“She’s dead.”
Let me know if there is any change in her condition.
Since when has that stopped anyone?
I had an electronics tech come in and give me a hand with one of my robots what was being a jerk. He was really helpful, so I submitted him to get an award ($50). It was denied, because he had already received too many awards.
I can see how it would be possible to abuse an award system but at the same time… hopefully he gets some kind of notice so he can put “maxed out the possible awards I could get” on his year-end review.
Is this the robot?
I should have expected that — but for some reason was expecting this instead.
Do you have a contact for his immediate supervisor/manager, Not Adahn? A discreet email noting that you’d award him if you could and appreciated him might ensure it gets carried into his review — and at the least, I would expect a good “internal customer” review would carry some weight, as it were.
Of course — you doubtless already know that and I’m preaching to the choir again — but hey, you brought it up!
It’s bolted into place to keep it out of mischief.
https://www.youtube.com/watch?v=8oZaJ0VEVtU
I had already expressed thanks to his boss. I figured cash would be more appreciated.
I’d say buy that man a drink but it would cut against his quest. I wish him the best of luck, but I doubt this suit will accomplish what he hopes.
There was a ruling about this a while back (at least at the Federal Court level). Several states have removed (or never had) bans on home distillation.
Unless things have changed the only “Western” country that allows home distillation is New Zealand.
He’s not wrong — but Chief Justice Penaltax seemed close to “The government can make you eat broccoli” so who knows what’s beyond the purview of the FedGov these days?
Plus — whiskey rebellions don’t have a good track record of success.
“How did Congress get the authority to regulate virtually anything under the auspices of commerce?”
By threatening to arrest, imprison, and if necessary, shoot anyone who objected?