• 1 Post
  • 96 Comments
Joined 1 month ago
Cake day: June 9th, 2024

  • Two things, I think, that are making your view and mine different.

    First, the value of time. I like self-hosting things, but it’s not a 40 hour a week job. Docker lets me invest minimal time in maintenance and upkeep and restricts the blowback of a bad update to the stack it’s in. Yes, I’m using a little bit more hardware to accomplish this, but hardware is vastly cheaper than my time.

    Second, uh, this is a hobby, yeah? I don’t think anyone posting here needs to optimize their Nextcloud or whatever install to scale to 100,000 concurrent users or hit 99.999999% uptime SLAs or anything. I mean, yes, you’d certainly do things differently in those environments, but that’s really not what this is.

    Using containers simplifies maintenance and deployment, and a few percent of CPU usage or a little bit of RAM is unlikely to matter, unless you’re big into running everything on a Raspberry Pi Zero or something.


  • In fairness, the motherboard not restricting power usage isn’t a bad thing: it’s not like it’s shoving 4000W through the CPU, it’s just letting the CPU pull as much as it wants, which, with a non-defective piece of silicon, is probably fine.

    A modern CPU shouldn’t pull enough power that it kills itself, unless there’s a major failure in design or manufacturing.

    Sure, the CPU gets hotter with more power and sure, the last 5% of performance is a third of the total power usage and probably not worth chasing, but them’s the design decisions x86 vendors are making right now and the motherboard (assuming it can deliver enough clean power) shrugging and saying ‘whatever’ is, outside factors aside, fine.

    Also, that 253W TDP limit on an i7 or i9 is a bit low. Yes, Intel’s spec says that, but Intel lies like crazy about power usage, and pretty much always has. These are chips that will happily gain performance up to about 400W of total draw, so capping at roughly half that is a bit of a kneecapping, though it MIGHT keep them from failing as fast; who knows.


  • I’d argue the opposite: it’s made it so I care very little about the dependencies of anything I’m running, and it’s LESS of a delicate balancing act.

    I don’t care what version of postgres or php or nginx or mysql or rust or node or python or whatever a given app needs, because it’s in the container or stack and doesn’t impact anything else running on the system.

    All that matters at that point is ‘does the stack work’ and you then don’t need to spend any time thinking about dependencies or interactions.

    I also treat EACH stack as its own thing: if it needs a database, I stand one up. If it needs some nosql, it gets its own.

    Makes maintenance of and upgrades to everything super simple, since each of the ~30 stacks with ~120 containers I’m running doesn’t in any way impact, screw with, or create dependency issues for anything else I’m running.

    Though, in fairness, if you’re only running two or three things, then I could see how the management of the docker layer MIGHT be more time than management of the applications.
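
    For the curious, here’s a minimal sketch of what one of those self-contained stacks looks like: the app gets its own private Postgres, and the image names, credentials, and ports are placeholders rather than anything I actually run.

      # docker-compose.yml - hypothetical stack: one app plus its own private database
      services:
        app:
          image: example/someapp:latest        # placeholder app image
          restart: unless-stopped
          depends_on:
            - db
          environment:
            DB_HOST: db                        # only resolvable on this stack's network
            DB_USER: someapp
            DB_PASSWORD: changeme              # placeholder credential
          ports:
            - "8080:80"
        db:
          image: postgres:16                   # this stack's own postgres; other stacks get their own
          restart: unless-stopped
          environment:
            POSTGRES_USER: someapp
            POSTGRES_PASSWORD: changeme
          volumes:
            - db-data:/var/lib/postgresql/data
      volumes:
        db-data: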


  • The CNs on the Cloudflare certs are yourdomain.com and *.yourdomain.com, so enumeration via that route is not really going to lead anywhere.

    Cloudflare is resistant to, but not totally immune from, DNS enumeration. But, frankly, the attack profile you’re concerned about isn’t likely to have the resources to do proper enumeration: automated bots are going to guess a static list of hosts, not spend time trying to scrape data out of DNS.

    TLDR: don’t use common service names or common activity names (don’t use jellyfin. or movies. or media. or tv. and so forth), and make sure your host responds with something aggressively fuck-off for non-matched requests. For nginx, the default_server site configuration I’m using sends what’s likely real humans to a rickroll, and every bot else gets a 444, which is nginx for go fuck yourself.
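
    Roughly, that catch-all looks something like this; treat it as a sketch rather than my exact config, since the cert paths and the crude browser check are just illustrative:

      # default_server catch-all: anything that doesn't match a real hostname lands here
      server {
          listen 443 ssl default_server;
          server_name _;

          # any cert works here; clients hitting this block get dumped anyway (placeholder paths)
          ssl_certificate     /etc/nginx/certs/fallback.crt;
          ssl_certificate_key /etc/nginx/certs/fallback.key;

          # crude "probably a human in a browser" check - illustrative only
          if ($http_user_agent ~* "Mozilla") {
              return 302 https://www.youtube.com/watch?v=dQw4w9WgXcQ;
          }

          # 444 makes nginx close the connection without sending any response at all
          return 444;
      }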



  • Wake-on-LAN is probably what you want, if your specific hardware supports it, which it probably does. This is a case of figuring out your exact hardware and a little RTFM-ing about how to enable and use WOL.
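
    The general shape of it on Linux, assuming your NIC shows up as eth0 (yours probably won’t) and with a made-up MAC address:

      # on the server: check whether the NIC supports magic-packet wake, then enable it
      sudo ethtool eth0 | grep Wake-on        # "Supports Wake-on: ...g..." means it can do it
      sudo ethtool -s eth0 wol g              # may also need a WOL toggle in the BIOS/UEFI

      # from another box on the same LAN: send the magic packet to the sleeping machine
      wakeonlan 00:11:22:33:44:55             # or: etherwake 00:11:22:33:44:55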

    As for the drives: in theory it would add more load/unload cycles and thus reduce their lifespan. But in the real world that almost certainly doesn’t matter unless you’re turning the system on and off every 5 minutes: modern drives expect to go in and out of power-saving modes, and most controllers (especially USB enclosures!) do this pretty aggressively, so a couple of load cycles more or less are unlikely to make your drive fail any quicker than it would have anyway.
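
    If you want to see how many cycles your drives are actually racking up, SMART keeps count; assuming smartmontools is installed and the drive is /dev/sda:

      # load/unload and start/stop counters live in the SMART attributes
      sudo smartctl -A /dev/sda | grep -Ei 'load_cycle|start_stop|power_cycle'
      # behind a USB enclosure, you may need to pass SMART through: sudo smartctl -d sat -A /dev/sda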


  • It’s a pair of 16GB 6000MT/s sticks that I just run at the stock 4800MT/s, primarily because the BIOS fails to POST every third or so boot, shits itself, and resets to defaults. I’ve quit fucking with it because, frankly, it’s fast enough, and going into the BIOS requires a second reboot and memory retrain, which will fail 50% of the time and lead to the BIOS resetting itself, which leads to needing to reconfigure it, which…

    When the system is up, it’s perfectly stable, and stays fine through sleep states and whatever else until I have to reboot it for whatever reason (updates, mostly).

    But honestly, if the memory controller can’t handle dual-channel 4800MT/s RAM, then it’s really, really fucked, because that’s the bare minimum in terms of support.

    I’d also add that I have 3 mobile AMD-based devices with DDR5, none of which exhibit ANY of this nonsense. Makes me think their desktop platform may well be legitimately defective, given how many people have this issue and how it doesn’t seem to be universal even across their own product stacks.

    (And, yes, two of the mobile devices have removable ram, so it’s not some soldered vs dimm thing)



  • The last generation has been a total mess for both Intel and AMD.

    AMD had motherboards frying CPUs, crazy stupid POST issues due to DDR5 memory training (my personal build fails to POST like 25% of the time due to this exact same stupid shit), and just generally a less than totally reliable experience compared to previous gens.

    Intel has much the same set of problems on their 13th/14th gen stuff: dead chips, memory training issues, instability.

    Wonder if it’s just a fluke that both x86 vendors are having a shitty generation at the same time, or if something else is at play.


  • Back in the super early era of computers, they were stupidly expensive. One solution was to hook a lot of people up to a single computer via computer terminals, which were much cheaper.

    Basically it would allow you to deploy a ton of monitors and keyboards to access a single computer relatively cheaply, and UNIX was (mostly) the OS used for this.

    You’ll notice that your console session is called ‘tty’ plus a number. Well, TTY stands for ‘teletypewriter’, which was the very first incarnation of this and was what was in use when the console got its name, and it’s just… never been changed, even though teletypes and the serial consoles that followed them are quite dead as far as tech goes.

    Enabling a serial terminal in Linux is a one-line change, and you can then use any terminal emulator you’d want to connect over it, but eh, it’s a pretty dead technology and nobody uses that at this point.
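
    For the curious, on a systemd distro that one-line change is just turning on a getty for the serial port; ttyS0 and 115200 baud here are assumptions, so match whatever your hardware actually exposes:

      # spawn a login prompt on the first serial port
      sudo systemctl enable --now serial-getty@ttyS0.service

      # then, from the machine on the other end of the cable, any terminal emulator works:
      picocom -b 115200 /dev/ttyUSB0          # or: screen /dev/ttyUSB0 115200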

    Since I seem to be dumping useless retro facts all over the place: you could do this with DOS too, and Digital Research released Concurrent DOS to allow multi-tasking, multi-user access to a DOS system. If you wanted to fiddle with that in the modern era, you’d want the Novell Multiuser DOS rebrand, since it supports VT100 emulation and can therefore be used with basically any serial terminal app, unlike the previous versions, which emulated specific HP and IBM serial terminals.


  • Here’s some useless retro crap: the 8" disks almost certainly cost more.

    There was absolutely a reliability scale as you moved down in physical disk size: the 8 inch disks were more reliable than the 5.25, which were more reliable than the 3.5, due in large part to the market requiring them to get cheaper as the drive tech matured, and thus you ended up with cheaper, less reliable media as you went along through the evolution.

    40 years later, 8" disks nearly all read, 5.25" disks mostly all read, and 3.5" disks are an absolute crapshoot: I find that less than half of the ones I come across are still actually accessible, compared to about 90% of the 5.25" and 8" ones.

    That experience of course has some personal bias, but it’s a story that holds pretty true for anyone who tries to archive old floppies, so it’s probably reasonable enough to treat as actual data.

    So, if you wanted the most reliable option, and since the military probably doesn’t care what it costs, you’d pick an 8" disk.


  • I mean, it wasn’t a shockingly large amount of software or anything, but they always had a good selection.

    The store opened here in like 1993 or 1994, and they always had a full selection of OSes and software for them: DOS, Windows, OS/2, Linux, BeOS, and so on.

    Still open and still a cool place, but it’s mostly just computer hardware bits and a section full of games and maker stuff now, and not really any software any more.



  • I’m going to argue that, yeah, probably, but it depends.

    Are you at risk of just losing your personal data, or is this hosting services other people upload shit to?

    If you’ve got other people’s photos or documents or passwords or whatever, then no. You need more than one backup, you need to automate testing of your backups, and you need to make damn sure that you can absolutely recover from BOTH sets of backups.

    If it’s just your shit, then you do what you’re comfortable with: if you lost your home server and its backups, would you be okay with that outcome?

    If that’s a ‘no’, then you need more than the one backup, and testing, and automation blah blah blah.

    I have the live server data, archives on a different drive in the system, and archives uploaded to the cloud.

    About once a week or so I burn the local backup files to a BD-R and chuck that in a media-rated fire safe. (An aside: a paper-rated fire safe is not sufficient for plastic discs, so make sure you buy one that will actually keep your backups from melting; otherwise, meh, you didn’t really do anything.)

    The cloud versions are on a provider that claims 99.99999% durability, which is good enough, and I keep 60 days of backups in the cloud so that I have enough versions to rotate back through.

    I also built a 2nd little baby server that’ll grab the backups, do an automated restore, and stand up my entire stack once a month, just to verify that the backup archives are actual backups: that they can be downloaded, unarchived, and used to automatically bring up all the stacks, populate the databases, and have everything just appear up and running.
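
    Stripped way down, that monthly restore drill is basically this shape; the rclone remote, paths, and health-check URL are placeholders, and the real thing has more error handling plus a notification step:

      #!/bin/sh
      # monthly restore drill: pull the newest backup, unpack it, stand everything up, poke it
      set -eu

      BACKUP_SRC="remote:backups/latest.tar.gz"     # placeholder rclone remote/path
      RESTORE_DIR="/srv/restore-test"

      rm -rf "$RESTORE_DIR" && mkdir -p "$RESTORE_DIR"
      rclone copyto "$BACKUP_SRC" "$RESTORE_DIR/latest.tar.gz"    # proves it can be downloaded
      tar -xzf "$RESTORE_DIR/latest.tar.gz" -C "$RESTORE_DIR"     # proves it can be unarchived

      # proves the stacks actually come up from the restored data
      for stack in "$RESTORE_DIR"/stacks/*/; do
          docker compose -f "$stack/docker-compose.yml" up -d
      done

      sleep 60
      curl -fsS http://localhost:8080/health || { echo "restore test FAILED"; exit 1; }   # placeholder check
      echo "restore test passed"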