I’m in the process of wiring a home before moving in, and I’m getting excited about running 10Gb from my server to my computer. Then I see that 25Gb gear isn’t that much more expensive, so I might as well run at least one fiber line. But what kind of three-node Ceph monster will it take to make use of any of this bandwidth (plus run all my Proxmox VMs and LXCs in HA), and how much heat will I have to deal with? What’s your experience with high-speed homelab NAS builds and the electric-bill shock that comes later? The Epyc 7002 series looks perfect but seems to idle high.
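For a rough sense of scale, here’s the back-of-the-envelope I keep coming back to for what it takes to fill a 25Gb link (the SSD throughput figure is an illustrative assumption, not a benchmark):

```python
# Rough sizing: SATA SSDs per node needed to saturate a 25Gb link.
# Both figures below are ballpark assumptions, not measurements.
link_gbps = 25
link_bytes_per_sec = link_gbps * 1e9 / 8       # ~3.1 GB/s of wire bandwidth
sata_ssd_mb_s = 550                            # typical SATA SSD sequential read
ssds_needed = link_bytes_per_sec / (sata_ssd_mb_s * 1e6)
print(f"{link_bytes_per_sec / 1e9:.1f} GB/s -> ~{ssds_needed:.0f} SATA SSDs per node")
# 3.1 GB/s -> ~6 SATA SSDs per node (before Ceph replication overhead)
```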

  • Saik0@lemmy.saik0.com · 5 months ago (edited)

    5-node Proxmox cluster (each node on 40Gbps networking [yes, Ceph…], ~80TB of SSD storage, 180 cores, ~630GB of RAM total)
    1 slow storage node (~400TB)
    2x OPNsense servers in HA
    2x ICX7750s
    2x ICX7450s

    PoE to all the things… and 8Gbps internet.

    Usually runs ~15-17 amps, so about 2000 watts. It’s my baby datacenter.
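    As a quick sanity check on that amps-to-watts conversion (assuming ~120V US mains):

    ```python
    # Panel amps -> watts, assuming ~120V US mains.
    volts = 120
    for amps in (15, 17):
        print(f"{amps} A x {volts} V = {amps * volts} W")
    # 15 A x 120 V = 1800 W
    # 17 A x 120 V = 2040 W -> "about 2000 watts" checks out
    ```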

    Sometime this month I’ll be installing a 25,000kWh solar system on my roof, plus batteries.
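    Reading that as roughly 25,000 kWh of annual production, the rack alone makes the case for it (a sketch, assuming the ~2000W average draw above):

    ```python
    # Rack consumption vs. an assumed ~25,000 kWh/year of solar production.
    rack_watts = 2000
    rack_kwh_per_year = rack_watts * 24 * 365 / 1000   # 17,520 kWh/year
    solar_kwh_per_year = 25_000
    print(f"rack: {rack_kwh_per_year:,.0f} kWh/yr, solar: {solar_kwh_per_year:,} kWh/yr")
    # rack: 17,520 kWh/yr, solar: 25,000 kWh/yr -> the rack alone uses ~70% of it
    ```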

    As far as heat goes… it’s in the garage with an insulated door, a heat-pump water heater, and a Tripp Lite AC unit in the bottom of the rack. The waste air (from the A/C) exhausts outside through a direct vent in the wall. The garage is downright tolerable to me for extended periods of time. The servers don’t complain at all.

    Reading about all you guys being under 200W or whatever makes me wonder if it’s worth it. Then I realize that the cost to do even a quarter of what I do in the cloud is more than buying my solar.

    Power for the rack would cost about $100-120 a month, if it weren’t for solar.
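    Backing out the implied electricity rate from those numbers (a purely illustrative calculation, assuming the ~2000W figure above runs 24/7):

    ```python
    # Implied $/kWh from the quoted monthly bill (illustrative only).
    rack_kwh_per_month = 2000 * 24 * 365 / 12 / 1000   # ~1,460 kWh/month
    for bill in (100, 120):
        print(f"${bill}/mo -> ~${bill / rack_kwh_per_month:.3f}/kWh")
    # $100/mo -> ~$0.068/kWh
    # $120/mo -> ~$0.082/kWh
    ```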

    Edit: 75 LXC containers, 22 VMs.

    • MangoPenguin@lemmy.blahaj.zone · 5 months ago

      Damn, that’s a setup alright!

      If you’re making use of the hardware it’s well worth it over anything cloud based for sure.

    • 486@lemmy.world · 5 months ago

      Edit: 75 LXC containers, 22 VMs.

      That’s a lot of power draw for so few VMs and containers. Any particular applications running that justify such a setup?

      • Saik0@lemmy.saik0.com · 5 months ago

        That’s the total draw of the whole rack, not indicative of power per VM/LXC container. If I pop onto management on a particular box, it’s only running at an average of 164 watts; across all 5 processing nodes it’s actually 953 watts total (average over the past 7 days). So if you’re wanting to quantify it that way, it’s about 10W per container.
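        Restating that arithmetic:

        ```python
        # Per-workload power from the 7-day averages above.
        cluster_watts = 953      # all 5 compute nodes combined
        workloads = 75 + 22      # LXC containers + VMs
        print(f"~{cluster_watts / workloads:.1f} W per container/VM")
        # ~9.8 W per container/VM -> "about 10W per container"
        ```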

        TrueNAS is using 420 watts (30 spinning disks, 400+TiB raw storage… closer to 350 usable; assuming 7 watts per spinning drive, we’re at 210 watts in disks alone, and the spec sheet says 5 at idle and 10 at full speed). About 70 watts per firewall. So roughly 1515 watts for all the compute itself.
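        Adding up the compute-side numbers quoted above:

        ```python
        # Compute-side power breakdown (figures from the post above).
        nodes = 953              # 5 Proxmox nodes, 7-day average
        truenas = 420            # incl. ~30 disks x ~7 W = ~210 W in spindles alone
        firewalls = 2 * 70       # two OPNsense boxes
        print(f"compute total: ~{nodes + truenas + firewalls} W")
        # compute total: ~1513 W -> the ~1515 W figure; the rest is network/PoE/UPS loss
        ```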

        The other 1000-ish watts is spent on switches and PoE (8 cameras, 2 HDHR units, a time server and clock module, whatever happens to be plugged in around the house using PoE). Some power is also lost to the UPS, because conversions aren’t perfect. Oh, and the network KVM and pull-out monitor/keyboard.

        I think the difference here is that I’m taking my whole rack into account, not looking at the power cost of just a server in isolation but also all the supporting stuff like networking. Max power draw on an ICX7750 is 586 watts, typical is 274 according to the spec sheet, and I have 2 of them trunked. Similar story with my ICX7450s: 2 trunked, and max power load is 935W each, though in their case that’s specifically for PoE. Considering I’m using a little shy of 1kW on networking, I have a lot of power overhead here that I’m not using. But I do have the 6x40Gbps modules on the 7750.

        With this setup I’m using ~50% of the memory I have available. I’m 2-node redundant: if I were down 2 nodes I’d be at about 80% capacity, enough room to add about 60GB more of services before I’d have to worry about shedding load in a critical failure.
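        A sketch of that failover math, assuming the ~630GB of total RAM from the top comment spreads evenly across the 5 nodes:

        ```python
        # Failover headroom sketch (assumes RAM is spread evenly across nodes).
        total_ram_gb = 630
        used_gb = total_ram_gb * 0.50                            # ~315 GB in use
        nodes, failed = 5, 2
        surviving_gb = total_ram_gb * (nodes - failed) / nodes   # ~378 GB left
        print(f"after losing {failed} nodes: {used_gb / surviving_gb:.0%} utilized")
        print(f"headroom before shedding load: ~{surviving_gb - used_gb:.0f} GB")
        # after losing 2 nodes: 83% utilized -> roughly the "80% capacity" above
        # headroom: ~63 GB -> roughly the "60GB more of services"
        ```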

      • Saik0@lemmy.saik0.com · 5 months ago

        On the SATA SSD Ceph storage, that’s just the live stuff on the containers/VMs. I’m at 20% usage of the 70TiB at the moment; I don’t use it all that heavily. Because of the way Ceph works, it’s really ~23 TiB of usable space and ~4.5 TiB written, since it writes 3 copies in my cluster.
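        The replication math, for anyone unfamiliar with how 3-copy replication eats raw capacity:

        ```python
        # Ceph with 3 replicas: every logical byte is stored three times.
        raw_tib = 70                              # pool capacity before replication
        replicas = 3
        logical_tib = raw_tib / replicas          # ~23.3 TiB of real space
        written_tib = logical_tib * 0.20          # at 20% usage
        print(f"logical: ~{logical_tib:.1f} TiB, written: ~{written_tib:.1f} TiB")
        # logical: ~23.3 TiB, written: ~4.7 TiB -> the ~23 / ~4.5 TiB quoted
        ```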

        The slow storage node is running TrueNAS with 28 spinning disks at 16TB each, 2 hot spares, and 2 SSDs each for cache, log, and metadata (eating up a total of 36 bays). That’s 342.8TiB usable after raidz nonsense, and I’m at 56% usage. I have literally everything I’ve done that I cared to save since about 2005 or 2006: backups for the Ceph storage (PBS), backups for computers I’ve had over the years, and lots of Linux ISOs (105TiB) archived, including complete sets of gaming variants (37TiB). Oh, and my full Steam library as well, which currently sits at 14TiB. Flashpoint takes up a few TiB too…
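        The vdev layout isn’t stated above, but as one hypothetical arrangement that lands near that usable figure (2 x 14-wide raidz2 is a guess on my part):

        ```python
        # Hypothetical: 28 pool disks as 2 x 14-wide raidz2 vdevs (layout is a guess).
        tib_per_drive = 16e12 / 2**40            # "16 TB" marketing = ~14.55 TiB
        vdevs, width, parity = 2, 14, 2
        data_disks = vdevs * (width - parity)    # 24 data-bearing disks
        print(f"~{data_disks * tib_per_drive:.1f} TiB before ZFS metadata/slop overhead")
        # ~349.2 TiB -> in the ballpark of the 342.8 TiB usable quoted
        ```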