• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: July 1st, 2023


  • Your Gemini is way funnier in my opinion. I think he might have set a trap for himself by asking it to produce what the LLM would consider a typical or average reply, whereas by asking it to just make a short, funny comment, you're getting results that feel more natural.

    For Gemini, only the first and last one read weird to me. But I think I would just assume that I’m missing some context to get the jokes, or something.

    Whereas the actual replies from the OP reek of standard LLM drivel. The way it tries so hard to sound casual and cool but comes across as super awkward is just classic GPT.



  • You should note that this started out as a Gmail feature that a bunch of email providers now offer, but you might wanna check that your emails actually get delivered to plus addresses before you rush out to change your contact info everywhere. Some providers have patchy support, and some senders' systems will refuse to send to a plus address even if your side supports it. Using a catch-all will always work because, you know, that's just how email works. The sketch below shows the difference.
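
    A rough sketch of the difference, assuming a hypothetical routing helper (not any provider's real implementation):

```python
# Minimal sketch, assuming a hypothetical provider-side routing step; not
# any real mail server's code. It shows why plus addressing depends on the
# receiving provider, while a catch-all does not.
def resolve_mailbox(address: str) -> str:
    """Map 'alice+shopping@example.com' back to 'alice@example.com'."""
    local, _, domain = address.partition("@")
    base_local = local.split("+", 1)[0]  # keep everything before the first '+'
    return f"{base_local}@{domain}"

# A catch-all, by contrast, accepts *any* local part for the domain, so
# delivery never depends on a '+' being handled (or even accepted) anywhere.
print(resolve_mailbox("alice+newsletter@example.com"))  # alice@example.com
```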


  • It is definitely the exact opposite of this, even though I understand why you would think so.

    The thing with systems like these is they are mission critical, which is usually defined as failure = loss of life or significant monetary loss (like, tens of millions of dollars).

    Mission critical software is not unit tested at all. It is proven. What you do is take the code line by line, prove what each line does and how it does it, and document each possible outcome (the sketch below gives a feel for what that means).

    Mission critical software is ridiculously expensive to develop for this exact reason. And upgrading to deploy on different systems means you'll be running things in a new environment, which introduces a ton of unknown factors. What happens, on a line-by-line basis, when you run this code on a faster processor? Does this chip process the commands in a slightly different order because it uses a slightly different algorithm? You don't know until you take the new hardware, the new software, and the code, and go through the lengthy process of proving it all again, until you can document that this will not result in any unusual train behavior.
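
    To give a feel for what that looks like, here's a purely illustrative sketch with made-up function and signal names; actual safety-critical code is written and proven with far heavier tooling:

```python
# Purely illustrative sketch of the "prove and document every line" mindset,
# not real rail software. Function and signal names are made up.
def may_open_doors(speed_kmh: float, at_platform: bool) -> bool:
    # Line 1: float comparison against 0.0.
    #   Possible outcomes: True (train stationary) or False (any non-zero
    #   speed, including NaN). No exception is possible for any float input.
    stationary = speed_kmh == 0.0

    # Line 2: logical AND of two booleans.
    #   Possible outcomes: True only if both conditions hold, else False.
    #   Invariant being argued for: doors are never released while moving.
    return stationary and at_platform

# Re-proving is the expensive part: run the same code on new hardware or a
# new runtime and you have to show these exact outcomes still hold there.
assert may_open_doors(0.0, True) is True
assert may_open_doors(12.5, True) is False
```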

  • Alan Wake is pretty much the definition of a modern AA game, though, so that just plays into what he’s saying.

    While Alan Wake 2 is super well executed, its development cost is dwarfed by that of modern triple-A games, which cost at least 10 times more to develop.

    (Alan Wake 2's reported budget was €50-70 million, compared to games like Assassin's Creed Valhalla, Red Dead Redemption 2, or Cyberpunk 2077, which were all reported at roughly €500 million, while games like MW3 (2023) and GTA VI both have billion-dollar-plus budgets.)

  • No, especially because it's not the same thing at all. You're talking about the output; we're talking about the input.

    The training data was illegally obtained. That's all that matters here. They can train it on fart jokes or Trump propaganda; it doesn't really matter, as long as the Trump propaganda in question was legally obtained by whoever trained the model.

    Whether we should then allow chatbots to generate harmful content, and how we would regulate that by limiting acceptable training data, is a much more complex issue that can be discussed separately. To address your specific example, it would make the most sense for the chatbot to be guided towards a viewpoint that aligns with its intended userbase. This just means that certain chatbots might be more or less willing to discuss certain topics. In the same way that an AI for children probably shouldn't be able to discuss certain topics, a chatbot made for use in a highly religious area, where homosexuality is very taboo, would most likely refuse to discuss gay marriage at all, rather than being made intentionally homophobic.