• 0 Posts
  • 396 Comments
Joined 1 year ago
Cake day: July 7th, 2023

  • I don’t think there’s anything inherently wrong with the idea of using a GUI, especially for a non-professional who mostly just wants to get into self-hosting. Not everyone has to learn all the ins and outs of every piece of software they run. My sister is one of the least technical people in the world, and she has her own Jellyfin server. It’s not a bad thing that this stuff has become more accessible, and we should encourage that accessibility.

    If, however, you intend to use these tools in a professional environment, then you definitely need to understand what’s happening under the hood and at least be comfortable working in the command line when necessary. I work with Docker professionally, and Dockge is my go-to interface, but I can happily maintain any of my systems with nothing but an SSH connection when required. What I love about Dockge is that it makes this parallel approach possible. The reason I moved my organization away from Portainer is precisely that a lot of more advanced command-line interactions would outright break the Portainer setup if attempted, whereas Dockge had no such problems.


  • The thing is, those poor design decisions have nothing to do with those features. I claim that every feature could be implemented without “holding the compose files hostage”.

    Yes, this is exactly my point. I think I’ve laid out very clearly how Portainer’s shortcomings are far more than just “It’s not for your use case.”

    Portainer is designed, from the ground up, to trap you in an ecosystem. The choices they made aren’t because doing things that way is necessary to be a usable Docker GUI. It’s solely because they do not want you to be able to easily move away from their platform once you’re on it.


  • Not the point. If you want to interact with the compose files directly through the command line, they’re all squirreled away in a deep nest of folders, and Portainer throws a hissy fit when you touch them. Dockge has no such issues; it’s quite happy for you to switch back and forth between command-line and GUI interaction as you see fit.

    It’s intensely frustrating whenever it comes up as an issue directly, and it speaks to a problem with Portainer’s underlying philosophy.

    Dockge was built as a tool to help you; it understands that its role is to be useful, and to get the fuck out of the way when it’s not being useful.

    Portainer was built as a product. It wants to take over your entire environment and make you completely dependent on it. It never wants you to interact with your stacks through any other means and it gets very upset if you do.

    I used Portainer for years, both in my homelab and in production environments. Trust me, I’ve tried to work around its shortcomings, but there’s no good solution to a program like Portainer other than not using it.


  • Please don’t use Portainer.

    • It kidnaps your compose files and stores them all in its own grubby little lair
    • It makes it basically impossible to interact with docker from the command line once it has its claws into your setup
    • It treats console output - like error messages - as an annoyance, showing a brief snippet on the screen for 0.3 seconds before throwing the whole message in the shredder.

    If you want a GUI, Dockge is fantastic. It plays nice with your existing setup, it does a much better job of actually helping out when you’ve screwed up your compose file, it converts run commands to compose files for you, and it gets the fuck out of the way when you decide to ignore it and use the command line anyway, because it respects your choices and understands that it’s here to help your workflow, not to direct your workflow.
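    For anyone unfamiliar with what that run-to-compose conversion looks like, here’s a rough sketch (hypothetical image and paths, not Dockge’s literal output): a command like `docker run -d -p 8096:8096 -v /srv/media:/media --restart unless-stopped jellyfin/jellyfin` corresponds to a compose file along these lines:

```yaml
# Hypothetical compose equivalent of the docker run command above
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"        # host:container port mapping (-p)
    volumes:
      - /srv/media:/media  # bind mount (-v)
    restart: unless-stopped  # restart policy (--restart)
```

    Once it’s in compose form, you can manage the stack with `docker compose up -d` from the command line just as easily as from the GUI.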

    Edit to add: A great partner for Dockge is Dozzle, which gives you a nice unified view for logs and performance data for your stacks.

    I also want to note that both Dockge and Dozzle are primarily designed for homelab environments and home users. If we’re talking professional, large scale usage, especially docker swarms and the like, you really need to get comfortable with the CLI. If you absolutely must have a GUI in an environment like that, Portainer is your only option, but it’s still not one I can recommend.





  • But I don’t think even that is the case, as they can essentially just “swap out” the video they’re streaming

    You’re forgetting that the “targeted” component of their ads (while mostly bullshit) is an essential part of their business model. To do what you’re suggesting they’d have to create and store thousands of different copies of each video, to account for all the different possible combinations of ads they’d want to serve to different customers.



  • Comparatively speaking, a lot less hype than their earlier models produced. Hardcore techies care about incremental improvements, but the average user does not. If you try to describe to the average user what is “new” about GPT-4, other than “It fucks up less”, you’ve basically got nothing.

    And it’s going to carry on like this. New models are going to get exponentially more expensive to train, while producing less and less consumer interest each time, because “Holy crap, look at this brand new technology” will always be more exciting than “In our comparative testing, version 7 is 9.6% more accurate than version 6.”

    And for all the hype, the actual revenue just isn’t there. OpenAI are bleeding around $5-10bn (yes, with a b) per year. They’re currently trying to raise around $11bn in new funding just to keep the lights on. It costs far more to operate these models (even at the steeply discounted compute costs Microsoft are giving them) than anyone is actually willing to pay to use them. Corporate clients don’t find them reliable or adaptable enough to actually replace human employees, and regular consumers think they’re cool, but in a “nice to have” kind of way. The product isn’t essential enough to command big money, but it can only be run profitably by charging big money.





  • It’s the second one. They are all in on this AI bullshit because they’ve got nothing else. There are no other exponential growth markets left. Capitalism has gotten so autocannibalistic that simply being a global monopoly in multiple different fields isn’t good enough. For investors it’s not about how big your company is, how reliable your yearly returns are, or how stable your customer base is; the only thing that matters is how fast your business is growing. But small businesses have no space to grow because of the monopolies filling every available space, and the monopolies are already monopolies. There are no worlds left to conquer. They’ve already turned every single piece of our digital lives into a subscription, blockchain was a total bust, the metaverse was a demented fever dream, VR turned out to be a niche toy at best; unless someone comes up with some brand new thing that no one has ever heard of before, AI is the last big boondoggle they have left to hit the public with.



  • Voroxpete@sh.itjust.works to Technology@lemmy.world · *Permanently Deleted* (edited 15 days ago)

    Personally I think it’d be interesting to see this per capita, so here’s my back-of-the-napkin math for data centers per 1 million population (c. 2022):

    • NL - 16.78
    • US - 16.15
    • AU - 11.72
    • CA - 8.63
    • GB - 7.68
    • DE - 6.22
    • FR - 4.63
    • JP - 1.75
    • RU - 1.74
    • CN - 0.32

    Worth noting of course that this only lists the quantity of discrete data centers and says nothing about the capacity of those data centers. I think it’d be really interesting to break down total compute power and total storage by country and by population.

    I’d also be interested to know what qualifies as a “data center”. For example, are ASIC-based crypto mining operations counted, even though their machinery cannot be repurposed to any other function? That would certainly account for a chunk of the US total (almost all of it in Texas).
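    The per-million figures above come out of a simple division; here’s a sketch of the arithmetic (the US data-center count and population below are assumed illustrative numbers, not figures from the original source):

```python
def per_million(dc_count: int, population: int) -> float:
    """Data centers per 1 million inhabitants."""
    return dc_count / (population / 1_000_000)

# Illustrative (assumed) inputs: ~5,383 US data centers,
# ~333.3M US population, c. 2022.
us_rate = per_million(5383, 333_300_000)
print(round(us_rate, 2))  # ≈ 16.15, in line with the US figure above
```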





  • While truly defining pretty much any aspect of human intelligence is functionally impossible with our current understanding of the mind, we can create some very usable “good enough” working definitions for these purposes.

    At a basic level, “reasoning” would be the act of drawing logical conclusions from available data. And that’s not what these models do. They mimic reasoning by mimicking human communication. Humans communicate (and developed a lot of specialized language with which to communicate) the process by which we reason, and so LLMs can basically replicate the appearance of reasoning by replicating the language around it.

    The way you can tell that they’re not actually reasoning is simple: their conclusions often bear no actual connection to the facts. There’s an example I linked elsewhere where the new model is asked to list states with W in their name. It does a bunch of preamble where it spells out very clearly what the requirements and process are: assemble a list of all states, then check each name for the presence of the letter W.

    And then it includes North Dakota, South Dakota, North Carolina and South Carolina in the list.
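    The procedure the model itself spelled out is mechanical enough to write in a few lines (only a subset of states shown for brevity; matching case-insensitively is one reasonable reading of the puzzle):

```python
# The procedure the model described: take the list of states,
# keep only those whose name contains a "w" (case-insensitive).
# Subset of the 50 states shown for brevity.
states = [
    "Washington", "West Virginia", "Wisconsin", "Wyoming",
    "North Dakota", "South Dakota", "North Carolina", "South Carolina",
    "Delaware", "Iowa",
]
with_w = [s for s in states if "w" in s.lower()]
print(with_w)
# The Dakotas and Carolinas are (correctly) excluded:
# ['Washington', 'West Virginia', 'Wisconsin', 'Wyoming', 'Delaware', 'Iowa']
```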

    Any human being capable of reasoning would absolutely understand that that was wrong, if they were taking the time to carefully and systematically work through the problem in that way. The AI does not, because all this apparent “thinking” is a smokescreen. They’re machines built to give the appearance of intelligence, nothing more.

    When real AGI, or even something approaching it, actually becomes a thing, I will be extremely excited. But this is just snake oil being sold as medicine. You’re not required to buy into their bullshit just to prove you’re not a technophobe.