

Decentralized (no abuse of power and no single point of failure)
There is a direct server, though. Is it federated? The readme doesn’t mention federation at all
Little bit of everything!
Avid Swiftie (come join us at !taylorswift@poptalk.scrubbles.tech )
Gaming (Mass Effect, Witcher, and too much Satisfactory)
Sci-fi
I live for 90s TV sitcoms




Summarizing:
All of which sounds fine. However, who defines all of these things? Who defines graphic imagery? Is it straight-up porn? Is it a meme with a naughty word in it? Is it anime? We have no idea.
Also, as a server owner (one who has since moved everyone to Matrix), I pretty much had to set my private server to age-restricted, because otherwise the server gets penalized if one of my users uploads something that is “graphic imagery”, which, again, is defined by them. Previously the only safe way was to say “Hey, you should be over 18 to join this” so that I’m not liable. I worry about a lot of servers taking the easy way out and doing the same.


Neither do I, but judging from other platforms it’s only a matter of time until you’re prompted for it. You join a server where someone has posted something NSFW before, or the server admin sets it to adults-only “to be safe”, or they simply tighten the definition of adults-only. At some point I fully expect it’ll force me to.


nerd herd
I understood that reference!
I’ve heard positive things about Dito; if I were doing it over again I think I’d start there


The best I’ve done is to minimize the hassle by hosting it myself and providing a how-to guide. My group is small, so I’ve been sitting down with each of them individually to show them how to migrate


Revolt (now Stoat) is an option; personally I don’t like it because there is no federation and it’s text-only.
I’m using Matrix with Element, which does have encrypted voice chat if you set it up. I want federation because I don’t want my users to be locked into only my server. However, the bar to entry is higher.


If we don’t warn them about it they’ll obviously be unhappy, they should knowww!
(In reality they get confused, they leave, and then they keep using the corpo controlled version)


Can confirm, I host Matrix (Synapse homeserver) and Element. Voice is a pain to get set up, but I hear there are other Matrix services that make this easier. It’s a process, though. You can get text chat up in a day; voice will come a bit after that, with a lot of tinkering.


Please remember, as people are looking for places to go, to be kind. The reason Linux took so long to “take off” is that we Linux folks (myself included) were proud of the high bar to entry. Encourage people to be curious, help them learn the ropes, and don’t bog them down with technical details or the battles that have been fought over X technology versus Y technology. Offer an open, easy place for them to come talk.
Once they get a foot in the door for a few months, we can show them how awesome Linux is!


If you’re only at 2 nodes, then I think hostPath volumes with node selectors are what you should go with. That gets you up and running in the short term, but know that converting later to something like Longhorn will be a process (creating the volumes, copying all the data over, ensuring correct user access, etc.).


So you have the classic issue of data storage on Kubernetes. By design, Kubernetes is node-agnostic: you simply have a pile of compute resources available. By using your external hard drive you’ve introduced something that must be connected to one specific node, declaring that your pod must run there and only there, because it’s the only place where your external drive is attached.
So you have some decisions to make.
First, if you just want to get it started, you can use a hostPath volume. In your volumes block you have:
volumes:
  - name: immich-volume
    hostPath:
      path: /mnt/k3s/immich-media # or whatever your path is
The gotcha is that you can only ever run that pod on the node with that drive attached, so you need a selector on the pod spec.
You’ll need to label your node with something like kubectl label node $yourNodeName anylabelname=true, e.g. kubectl label node $yourNodeName localDisk=true
Then you can apply a selector in your pod spec like:
spec:
  nodeSelector:
    localDisk: "true"
This gets you going, but remember you’re limited to one node whenever you want data storage.
For multi-node, true clusters, you need to think about your storage needs. You will have some storage that should be local, like databases and configs; typically you want those on a disk attached directly to the node. Then you may have other data, like large media files that are rarely accessed; for those you may want a NAS or a file server. Think about how your data will be laid out, then think about how you may want to grow with it.
For local data like databases/configs, once you are at 3 nodes, your best bet with k3s is Longhorn. Fair warning: it has a HUGE learning curve, and you will screw up multiple times, but it’s the best option for managing small (<10GB) volumes spread across your nodes. It manages provisioning and makes sure your pods can access the volumes underneath, without you managing nodes specifically. It’s the best way to abstract away not only compute, but also storage.
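To give a feel for what that looks like day-to-day: once Longhorn is installed it registers a storage class, and your pods just claim storage through an ordinary PVC. A rough sketch (the claim name and size here are made up, not from any real setup):

```yaml
# Hypothetical PVC backed by Longhorn; Longhorn's default
# storage class is named "longhorn". Name and size are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: immich-db-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi
```

The point is that nothing in there mentions a node: Longhorn picks where the replicas live, and the pod referencing this claim can be scheduled anywhere.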
For larger files like media and Linux ISOs, the best option is really NFS or object storage like MinIO. You’ll want a completely separate storage layer that hosts large files; following a guide like this, you can mount NFS shares directly into your pods. This also abstracts away storage: you don’t care what node your pod is running on, just that it connects to this store and has these files available.
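For the NFS route, a pod can mount the share directly with Kubernetes’ built-in nfs volume type. A minimal sketch, where the server address, export path, and container are all placeholders:

```yaml
# Hypothetical pod fragment mounting an NFS export straight into a container;
# the server IP, export path, and image are made up for illustration.
spec:
  volumes:
    - name: media
      nfs:
        server: 192.168.1.50
        path: /exports/media
  containers:
    - name: jellyfin
      image: jellyfin/jellyfin
      volumeMounts:
        - name: media
          mountPath: /media
```

Because the mount happens over the network, this fragment works identically no matter which node the pod lands on.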
I won’t lie, it’s a huge project. It took about 3 months of tinkering for me to get to a semi-stable state, simply because it’s such a huge jump in infrastructure, but it’s 100% worth it.


I was a hardcore Thomas the Tank Engine nerd.
And Theodore Tugboat.


Helm has worked well for me, what’s the problem you had?


Nextcloud implements WebDAV, which you can mount as a remote with rclone.
Also, many distros have an online-accounts option that does the same thing
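For the rclone route, it’s roughly two commands. A sketch, where the remote name, URL, username, and password are all placeholders (and note rclone wants the password obscured, hence the flag):

```shell
# Create a WebDAV remote pointing at Nextcloud (all values are placeholders).
# Nextcloud's WebDAV endpoint lives under remote.php/dav/files/<username>/.
rclone config create mycloud webdav \
    url=https://nextcloud.example.com/remote.php/dav/files/alice/ \
    vendor=nextcloud user=alice pass=app-password --obscure

# Mount it like a local folder (requires FUSE; add --daemon to background it).
mkdir -p ~/Nextcloud
rclone mount mycloud: ~/Nextcloud
```

Using an app password generated in Nextcloud’s security settings, rather than your real login, is the usual approach here.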


Meanwhile they’re clamping down on usage and pricing. Seems they’re speed-running enshittification.


I do self-host my own, and I even tried my hand at building something like this myself. It runs pretty well; I’m able to have it integrate with HomeAssistant and kubectl. It can be done with consumer GPUs, I have a 4000 and it runs fine. You don’t get as much context, but it’s about minimizing what the LLM needs to know while calling agents. You have one LLM context that’s running a todo list; you start a new one that is in charge of step 1, which spins off more contexts for each subtask, and so on. It’s not that each agent needs its own GPU, it’s that each agent needs its own context.
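A toy sketch of that idea (all names made up, and the model call stubbed out): each agent carries its own message history, so spinning off a subtask means starting a fresh, small context instead of sharing one giant one:

```python
# Toy illustration of per-agent contexts: each agent keeps its own
# message list, so subtask agents start small instead of inheriting
# the coordinator's full history. The "LLM call" is a stand-in.

def fake_llm(messages):
    # Stand-in for a real model call; just acknowledges the last message.
    return f"ack: {messages[-1]['content']}"

class Agent:
    def __init__(self, role):
        # Every agent gets its own independent context.
        self.messages = [{"role": "system", "content": role}]

    def ask(self, text):
        self.messages.append({"role": "user", "content": text})
        reply = fake_llm(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

coordinator = Agent("You keep the todo list.")
coordinator.ask("Plan: 1) check backups 2) report anomalies")

# Step 1 runs in a fresh context; it knows its task, not the whole plan.
step1 = Agent("You check backups.")
step1.ask("Check last night's backup job")

print(len(coordinator.messages))  # 3: system + user + assistant
print(len(step1.messages))        # 3, independent of the coordinator
```

The coordinator’s history never leaks into the subtask agent, which is exactly why VRAM budget per context stays small even as the overall job grows.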


The fact that it’s only “swarming”: anyone who has worked agile knows that swarming means a week or two. This is just to shut people up


I am and do; I have no qualms with AI if I host it myself. I let it have read access to some things. I have one that is hooked up to my HomeAssistant and can do things like enable lighting or turn on devices. It’s all gated: I control what items I expose and what I don’t. I personally don’t want it reading my emails, but since I host it, that’s really not a big deal at all. I have one that gets the status of my servers, reads the metrics, and reports to me in the morning if there were any anomalies.
I’m really sick of the “AI is just bad because AI is bad” take. It can be incredibly useful, IF you know its limitations and understand what is wrong with it. I don’t like corporate AI at scale for moral reasons, but running it at home has been incredibly helpful. I don’t trust it to do whatever it wants; that would be insane. I do, however, let it have read permissions on services to help me sort through piles of information I cannot manage by myself (and I know you keep harping on it, but MCP servers and APIs also have permission structures; even if it did attempt to write something, my other services would block it and it’d be reported). When I do allow write access, it’s when I’m working directly with it, and I hit a button each time it attempts to write. Think spinning containers up or down on my cluster while I’m testing, or collecting info from the internet.
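The gating I’m describing can be as simple as a wrapper that lets read tools run freely and makes every write tool wait for an explicit confirmation. A toy sketch (tool names and the confirm hook are made up, not any real MCP API):

```python
# Toy permission gate: read-only tools run directly; write tools
# require an explicit confirm callback (the "hit a button" step).

class ToolGate:
    def __init__(self, confirm):
        self.confirm = confirm   # callback: returns True to allow a write
        self.tools = {}          # name -> (func, writes)

    def register(self, name, func, writes=False):
        self.tools[name] = (func, writes)

    def call(self, name, *args):
        func, writes = self.tools[name]
        if writes and not self.confirm(name):
            return "blocked"     # write denied, nothing happens
        return func(*args)

# Example wiring: reads are free, writes ask first.
gate = ToolGate(confirm=lambda name: False)  # pretend I didn't press the button
gate.register("get_metrics", lambda: {"cpu": 0.2})
gate.register("restart_container", lambda c: f"restarted {c}", writes=True)

print(gate.call("get_metrics"))              # {'cpu': 0.2}
print(gate.call("restart_container", "db"))  # blocked
```

The real versions layer service-side permissions on top of this, so even a bug in the gate can’t quietly turn a read-only agent into a write-capable one.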
AI, LLMs, agentic AI: it’s a tool. It is not the hype every AI bro thinks it is, but it is another tool in the toolbelt. To completely ignore it is on par with ignoring Photoshop when it came out, or WYSIWYG editors when they arrived for designing UIs.
I learned the hard way years ago that the one thing the company doesn’t want is a loud employee. It doesn’t matter what the issue is or who it’s about; if you can get the company in trouble or put someone in an awkward position, they don’t want you.
That’s why HR is such a dangerous place to go: they’re not looking out for you, they’re looking out for the company. By going to HR at all, you prove that you are someone who can and will speak out, which is bad for the company. Whether you’re complaining about a co-worker or about something the company is doing, it doesn’t matter; it shades you in a negative light because you are willing to speak out