

AI doesn’t generate its own code, humans using AI generate code. If a person uses AI to generate code and doesn’t know good practices, then of course the code is going to be worse.


That’s not how this works at all. The people training these models are fully aware of bad data. There are entire careers dedicated to preserving high quality data. GPT-4 is terrible compared to something like Gemini 3 Pro or Claude Opus 4.5.


I needed to make a small change and realized I wasn’t confident I could do it.
Wouldn’t the point be to use AI to make the change, if you’re trying to do it 100% with AI? Who is really saying 100% AI adoption is a good idea, though? All I hear from everyone is how it’s not a good idea, just like this post.


I’ve started using raw JavaScript in all my projects. There’s something about not relying on any libraries that makes it feel so much more powerful. The best AI models are very good at writing raw JavaScript, meaning you can easily create your own single page application structure. All I want are single page web applications using pure HTML, CSS, and JavaScript.
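To make the “roll your own SPA structure” idea concrete, here’s a minimal sketch of a framework-free hash router in plain JavaScript. The names (`routes`, `render`) are illustrative, not any standard API, and a real app would render into the DOM instead of returning strings.

```javascript
// Map URL hash paths to render functions — the whole "framework".
const routes = {
  "/": () => "<h1>Home</h1>",
  "/about": () => "<h1>About</h1>",
};

// Resolve a hash like "#/about" to its page's HTML,
// falling back to a 404 view for unknown paths.
function render(hash) {
  const path = hash.replace(/^#/, "") || "/";
  const page = routes[path] || (() => "<h1>Not found</h1>");
  return page();
}

// In the browser you'd wire it up with something like:
//   window.addEventListener("hashchange", () => {
//     document.body.innerHTML = render(location.hash);
//   });
```

That’s the entire routing layer — no build step, no dependencies, and it’s small enough that an AI model can generate and modify it reliably.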


Yea the data center stuff is an absolute bubble, and models are becoming significantly cheaper rapidly. We won’t need all these data centers because reasonably usable AI models will be so efficient you can run them on a powerful gaming computer. Not everyone needs the absolute most powerful smartest model like Gemini 3.


True, if you ask about the 1989 Tiananmen Square massacre and other things typically censored by the Chinese government they won’t answer. But interestingly, there are variants that have been uncensored, and those models will answer openly if you use something like together.ai versus kimi.com. Because they are open source, they are easily uncensored.


We aren’t going to stop AI usage, but what we can do is educate people on how to use the most efficient models. Both DeepSeek v3.2-Exp and Kimi K2 Thinking are significantly more efficient than Claude/Gemini/ChatGPT.


With rapid enshittification, people will just use one of the others on the long list of alternatives. I currently recommend either DeepSeek v3.2-Exp or Kimi K2 Thinking.


What’s your knowledge regarding LLMs, if any at all?


Yes. As far as scalability goes, cheaper, more efficient models can be used in applications that require thousands of requests a day.


This is peak bubble type news. AI is becoming rapidly more energy efficient. These events will be looked back on like pets.com reaching a valuation of hundreds of millions and then dying.


We need a New New Deal to get anywhere near that. First we’re going to experience mass unemployment for about a year. Then, after people have been severely pissed off with no opportunities for a couple of months, things will actually start changing.


Step 1, delete Windows 11. Step 2, install Linux Mint.


Advertised to do what? Nothing? Seems like crypto is working then, yeah.


You know absolutely nothing about blockchain technology or LLMs if you actually believe that.


The difference between AI and Bitcoin is that people actually use AI to do productive things.


If you ever needed a solid fact to prove this is a massive bubble, here it is.
This is the kind of weird stuff that happens near the peak of a bubble.