  • Isn’t Yelp a pretty easily replaceable thing?

    They built a reputation by being one of the first in the space, but they’ve squandered it, and I’m pretty sure someone else could launch a competing “reviews” product.

    I’d like one that actually showed the history of a place. If a restaurant’s head chef leaves and the reviews go downhill, the reviews written since the new chef took over are far more relevant than the 1,000+ five-star reviews of the old chef’s food, and that isn’t discoverable anywhere on Yelp or anything like it.
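
    What I’d want is dead simple, data-wise. A purely hypothetical sketch (every name below is made up for illustration):

    ```python
    # Hypothetical sketch: surface only the reviews written under the
    # current kitchen, instead of averaging them with the old chef's.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Review:
        stars: int
        posted: date

    def current_regime_reviews(reviews: list[Review], chef_changed: date) -> list[Review]:
        """Keep only reviews that postdate the most recent head-chef change."""
        return [r for r in reviews if r.posted >= chef_changed]

    reviews = [Review(5, date(2019, 6, 1)), Review(2, date(2024, 2, 10))]
    print(current_regime_reviews(reviews, chef_changed=date(2023, 9, 1)))
    ```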

    I’m not sure how you’d protect against enshittification long-term. But I think one of the things that has largely poisoned the spirit of the Internet is that everything is always about a “sustainable business model” and “scaling” before anyone even dreams of just writing something up and seeing if it catches on.


  • I get Dreamweaver vibes from AI-generated code.

    Same. AI seems like yet another attempt at RAD (rapid application development), just like MS Access, Visual Basic, Dreamweaver, and to some extent Salesforce or ServiceNow. So many technologies have championed this…RoR, Django, Spring Boot…the list is basically endless.

    To an extent it’s more general-purpose than those, because it can be used with multiple languages or toolkits, but I find it not at all surprising that the first use of gen AI in my company was to push out “POCs” (proofs of concept, the vast majority of which never amounted to anything).

    The same gravity applies to this tool as to everything else in software: prototyping is easy, integration is hard (unless the organization is well structured, which almost none of them are). Software executives tend to confuse a POC with production code and want to ship it immediately, only to find out that it’s a Potemkin village underneath, as they were sometimes (or even often) told the entire time.

    So much of the software industry is “JUST GET THIS DONE FASTER DAMMIT!” from middle managers who, despite decades of screaming this, still have no widespread means of determining either what they want done or what it would take to get it done faster.

    What we have been dealing with the entire time is people who hate being dependent on coders and other “nerds”, but need them in order to build the products that accomplish their business objectives.

    Middle managers still think creating software is algorithmic nerd shit that could be automated: solving the same problems over and over again. In my experience, despite even Computer Science programs giving it that image, modern coding is more akin to being a millwright. Most of the repetitive, algorithmic nerd shit was settled long ago and can be imported via modules. Imported modules are analogous to parts, and your job is to build or maintain the actual machine that produces the desired outcomes: connecting parts so the various components interoperate as needed, repairing failing components, or spotting the shoddy welding between them that is making the current machine fail.
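
    To make the millwright analogy concrete, here’s a minimal, entirely made-up sketch: none of it solves an algorithmic problem; it just bolts imported parts together and owns the seams between them.

    ```python
    # Hypothetical glue code: the imported modules are the "parts";
    # the job is wiring them together and handling the seams.
    import json
    import sqlite3
    from urllib.request import urlopen

    def sync_orders(api_url: str, db_path: str) -> int:
        """Fetch orders from a (made-up) JSON API and persist them locally."""
        with urlopen(api_url) as resp:       # part: HTTP client
            orders = json.load(resp)         # part: JSON parser
        conn = sqlite3.connect(db_path)      # part: embedded database
        with conn:
            conn.execute(
                "CREATE TABLE IF NOT EXISTS orders (id TEXT PRIMARY KEY, total REAL)"
            )
            conn.executemany(
                "INSERT OR REPLACE INTO orders (id, total) VALUES (?, ?)",
                [(o["id"], o["total"]) for o in orders],
            )
        conn.close()
        return len(orders)
    ```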




  • Lol, it couldn’t even determine the right number of letters in the word strawberry using its training before. I’m not criticizing the training data. I’m criticizing a tool and its output.
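
    For contrast, the counting task it fumbled is a deterministic one-liner in ordinary code:

    ```python
    word = "strawberry"
    print(len(word))        # 10 letters total
    print(word.count("r"))  # 3 occurrences of "r"
    ```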

    It’s amusing to me that at first it’s “don’t blame the tool when it’s misused” and now it’s “the tool is smarter than any individual dev”. So which is it? Is it impossible to misuse this tool because it stands atop the shoulders of giants? Or is it something that has to be used with care and discretion, whose bad outputs can be blamed on the individual coders who use it poorly?






  • They’d rather zone out and mindlessly click, copy/paste, etc. I’d rather analyze and break down the problem so I can solve it once and then move on to something more interesting to solve.

    From what I’ve seen of AI code in my time using it, it is often an advanced form of copying and pasting. It frequently takes problems that could be solved more efficiently, with fewer lines of code or by generalizing the problem, and instead does the (IMO evil) work of making the drudgery-heavy solution the easy one.
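
    A contrived sketch of what I mean: the copy-paste style on top is the kind of repetition assistants make cheap to produce; the generalized version below it is the one worth writing.

    ```python
    # Copy-paste style: near-identical functions, multiplied.
    def validate_name(record):
        if not record.get("name"):
            raise ValueError("missing name")

    def validate_email(record):
        if not record.get("email"):
            raise ValueError("missing email")

    # Generalized style: the repetition factored out once.
    def validate_required(record, fields=("name", "email", "phone")):
        for field in fields:
            if not record.get(field):
                raise ValueError(f"missing {field}")
    ```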


  • It’s not about it being counterproductive. It’s about correctness. If a tool produced a million lines of compilable gibberish unrelated to what you’re trying to do, then from a pure lines-of-code perspective it would be a very productive tool. But software development is more complicated than writing the most lines.

    Now, I’m not saying that AI tools produce pure compilable gibberish, but they don’t reliably produce correct code either. So they fall somewhere in the middle, and, much like “driver assistance” technologies that half-automate things but require constant supervision, it’s quite possible that the middle is the worst place for a tool to be.

    Everywhere around AI tools there are asterisks about their not always producing correct results. The developer using the tool is ultimately responsible for the output of their own commits, but the tool itself shares in the blame because of its unreliable nature.


  • Some tools deserve blame. In this case, you’re supposed to use it to automate certain things away, but that automation isn’t really reliable. If it has to be babysat to the extent I’d argue it does, then it deserves some blame for being a crappy tool.

    If, for instance, getter/setter generators or refactoring tools in IDEs routinely screwed up in the same ways, people would say the tools were broken and that people shouldn’t use them. I don’t get how this is different just because of “AI”.
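
    For comparison, deterministic code generation that never guesses has existed for ages; Python’s dataclasses are a tiny example of boilerplate generation that does the exact same thing every time:

    ```python
    # @dataclass generates __init__, __repr__, and __eq__ identically
    # on every run; there is nothing to babysit.
    from dataclasses import dataclass

    @dataclass
    class Point:
        x: float
        y: float

    p = Point(1.0, 2.0)
    print(p)                      # Point(x=1.0, y=2.0)
    print(p == Point(1.0, 2.0))   # True
    ```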


  • LLMs that can spot tumors better than humans can

    Are they, though? LLMs specifically? Spotting tumors is image-analysis work, which seems like a very strange use case for an LLM.

    But yeah, we’re mostly in accordance. I wanted to riff a little because, as a long-time tech worker, I actually do have some bones to pick with the tech itself. The inexactitude of its output and the “let the prompter beware” approach to its obvious inadequacies piss me off, and it seems like the perfect product for the current “test in production”, “MVP (minimum viable product)”, “pre-order the incomplete version” state software is in generally. The marketing and finance assholes are nearly fully running the show at this point, and it’s evident.

    I think the usefulness of this particular technology (LLMs) is very overblown, and I found its earliest uses more harmful than helpful (e.g. autocorrect/autocomplete is wrong for me more often than it is right). It has decent applicability in some areas (machine translation, for instance, is pretty good), but the marketing department got hold of it, and so now everything is AI this and AI that.

    I think it’s basically just another over-hyped technology that will eventually shake out to be used only where it is useful enough to justify its cost. If the company has to show profits at any point, it will either go the surveillance-capitalism ad route or have to charge more per query than the gibberish it generates is really worth. I don’t see most people paying for ChatGPT long-term, so they’ll probably have to enshittify further beyond their current (already kind of shitty) state.


  • Ultimately, the advent of computers allowed the structure of the modern corporation to take on a lot more complexity. So we have fewer roles where people do full-time work managing inboxes or whatever (though not zero, because that is essentially what my wife still does for work), but more roles now have an “inbox management” or other secretarial component to them.

    In practically every job, you’re now also a part-time secretary. Assistants became a luxury reserved mainly for fat cats, and the rest of us plebs are buried in emails.


  • The issue people have with AI isn’t the tech.

    I have multiple issues with the tech:

    1. It’s built on giant-scale theft: mass violation of copyright law and of the licenses of lots of open source software.

    2. It’s ClippyGPT, and much of the output is either hallucination or trite nonsense that sounds like it was cooked up in the most bureaucratic, weak-willed corporate boardroom.

    3. Its massive energy footprint for inefficiently solving math equations (for instance; see the sketch after this list) is completely and thoroughly ridiculous.

    4. I don’t want to type bullshit into a chat bot in order to look something up…this is a step below even the absurd modern substitute for documentation of “go watch this 2-hour YouTube video on my development framework”.

    5. “Miniature model” and “fine-tuned model” results could have been achieved far more easily by just having functional site/domain search engines.
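
    On point 3, a minimal illustration: the kind of equation an LLM burns GPU time on (and sometimes still gets wrong) is solved exactly, and nearly instantly, by a deterministic solver.

    ```python
    # Solving an equation deterministically: no GPU, no guessing.
    from sympy import Eq, solve, symbols

    x = symbols("x")
    print(solve(Eq(2 * x + 3, 7), x))  # [2]
    ```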

    Further on that last point, I feel like the open source part of the industry chased Google until it got to Lucene, then decided an open source AltaVista was perfectly fine and dandy and stopped pursuing the goal of making its own search engines genuinely good. So people had to keep using Google until now, and as Google has enshittified into a crappy, worse AI model for search, all we have left are chat bots that are maybe slightly better than AltaVista but frequently spout inaccurate information that they guess should exist.
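
    And on the search point: a Lucene-style domain index really is small work these days. A minimal sketch using the pure-Python Whoosh library (the documents here are made up):

    ```python
    # Minimal domain search with Whoosh, a pure-Python Lucene-style library.
    import os
    from whoosh.fields import Schema, TEXT
    from whoosh.index import create_in
    from whoosh.qparser import QueryParser

    schema = Schema(title=TEXT(stored=True), body=TEXT)
    os.makedirs("indexdir", exist_ok=True)
    ix = create_in("indexdir", schema)

    writer = ix.writer()
    writer.add_document(title="Install guide", body="how to install the framework")
    writer.add_document(title="API reference", body="functions, classes, and modules")
    writer.commit()

    with ix.searcher() as searcher:
        query = QueryParser("body", ix.schema).parse("install")
        for hit in searcher.search(query):
            print(hit["title"])  # -> Install guide
    ```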