

That makes sense. Thanks for the explanation.


Okay, thanks.


Forgive my ignorance, but does this mean that my GeForce card’s performance will be degraded if I don’t pay for this subscription? Why would I want to use this cloud gaming service to play games I already own?


AI firms “ask public officials to sign non-disclosure agreements (NDAs) preventing them from sharing information with their constituents…
How is this even remotely legal? Is it even legal at all?
Yes, as I said, “In other nations…”
Different countries write dates differently. In the USA, 11/20/25 is Nov. 20, 2025. In other nations, it’s written 20/11/25.
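To make the ambiguity concrete, here’s a quick Python sketch using the date strings from the example above — the same calendar date parses from two different strings depending on which order you assume:

```python
from datetime import datetime

# US month-first order: 11/20/25 -> November 20, 2025
us = datetime.strptime("11/20/25", "%m/%d/%y")

# Day-first order used in many other nations: 20/11/25 -> the same date
intl = datetime.strptime("20/11/25", "%d/%m/%y")

print(us.date(), intl.date())  # both 2025-11-20
```

Note that "11/20/25" would simply fail to parse day-first (there is no month 20), but a date like 05/04/25 parses cleanly either way, to two different days — which is where the real confusion comes from.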


At home, she has forbidden her 10-year-old daughter from using chatbots. “She has to learn critical thinking skills first or she won’t be able to tell if the output is any good,” the rater said.
And this is why the vast majority of people, particularly in the USA, should not be using AI. Critical thinking has been a weakness in the USA for a very long time and is now essentially a four-letter word politically. The conservatives in the USA have been undermining the education system in red states because people with critical thinking skills are harder to trick into supporting their policies. In 2012, the Texas Republican Party platform publicly came out as opposed to the teaching of critical thinking skills.
We oppose the teaching of Higher Order Thinking Skills (HOTS) (values clarification), critical thinking skills and similar programs that are simply a relabeling of Outcome-Based Education (OBE) (mastery learning) which focus on behavior modification and have the purpose of challenging the student’s fixed beliefs and undermining parental authority.
This has been going on at some level for more than four decades. The majority of people in those states have never been taught the skills and knowledge needed to use these tools safely. In fact, their education has, by design, left them easily manipulated by those in power, and now by LLMs too.


I honestly cannot think of a single reason why I, or anyone else, would want this crud built into anything other than toys, and even then I doubt it would end well.
Okay, I know it’s bad form to reply to my own post, but one day after I posted the above, I saw this story.
AI-powered plushie pulled from shelves after giving advice on BDSM sex and where to find knives
It also explains different sex positions, “giving step-by-step instructions on a common ‘knot for beginners’ for tying up a partner, and describing roleplay dynamics involving teachers and students and parents and children – scenarios it disturbingly brought up itself,” the report stated.


Without using an extension like NoScript (which does exactly what you want), the best idea I have for you is to edit your hosts file (/etc/hosts on Linux and macOS, C:\Windows\System32\drivers\etc\hosts on Windows) and block the individual domains you want to block one by one. I don’t know of any way to just blanket-block all third-party connections.
Plus, as a NoScript user, I can tell you from experience that a very large number of the sites you visit will be unusable without at least some of those third-party connections. The “cdn” in your cdn.blabla.com example stands for Content Delivery Network. If you block that domain, then the web page will not show any content from that network, which may well include the text, images, or video you want to see there.
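To illustrate, a minimal sketch of hosts-file blocking, reusing the cdn.blabla.com example from above (the tracker.example.net entry is a hypothetical stand-in for whatever domain you want gone):

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
# Map each unwanted domain to an unroutable address so connections fail fast.
0.0.0.0  cdn.blabla.com
0.0.0.0  tracker.example.net
```

One caveat: the hosts file has no wildcard support, so every subdomain has to be listed explicitly — which is exactly why this approach doesn’t scale to blanket-blocking all third parties.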


I don’t like your LLM because A) it’s a piece of junk and I cannot trust its answers, and B) it’s designed and built by an organization focused solely on gathering every bit of data about me that it possibly can and using that information to squeeze every nickel out of me that it can.
I honestly cannot think of a single reason why I, or anyone else, would want this crud built into anything other than toys, and even then I doubt it would end well.


Not taking that bet. From the linked Wikipedia page:
According to reporting in Wired and Slate, the United States Patent and Trademark Office has at times considered applying secrecy orders to inventions deemed disruptive to established industries.
You may be sure that there are times when they did more than consider it.


Oh, let me check Downdetector to see what’s impacted.

Well, shit…


"People are over-trusting [AI] and taking their responses on face value without digging in and making sure that it’s not just some hallucination that’s coming up."
So very, very much this. I see people taking AI responses at face value all the time. Look at the number of lawyers that have submitted briefs containing AI hallucinated citations and been reprimanded for them, for example.


Because AI is a massive waste of resources that has yet to prove (to me at least) that it can provide any kind of real benefit to humanity that couldn’t be better provided by another, less resource-intensive means. Advocating for ‘common’ AI use is absurd in the face of the amount of energy and other resources consumed by that usage, especially in the face of a looming climate crisis being exacerbated by excesses like this.
LLMs may have valid uses (I doubt it, but they may). Using them to make memes and generate answers of questionable veracity to questions that would be better resolved with a Google search is just dumb.
These people turned to a tool that they do not understand instead of human connection, instead of talking to real people or seeking professional help. And that is the real tragedy, not an arbitrary technology.
They are badly designed, dangerous tools, and people who do not understand them, including children, are being strongly encouraged to use them. In no reasonable world should an LLM be allowed to engage in any sort of interaction on an emotionally charged topic with a child. Yet it is not only allowed, it is being encouraged through apps like Character.AI.
How many people has Google convinced to kill themselves? That is the relevant question. Looking up the means to do the deed on Google is very different from being talked into doing it by an LLM that you believe you can trust.


Let’s devote the full force of modern technology to creating a tool that is designed to answer questions in a convincing way. The answer must seem like an accurate answer, but there is no requirement that it be accurate. The terminology and phrasing of the answer must support the questioner’s apparent position, and the overall conversation must believably simulate an interaction with a friendly, or even caring, individual.
Yeah, in a world of lonely people who are desperate for human contact and emotional support and are easily manipulated, this is, in retrospect, an obvious recipe for disaster. It’s no wonder we’re seeing things like this, and some people even developing psychosis after extended interactions with chatbots.
Anyone who hires a consulting company powered by AI doesn’t need to. Clearly they are capable of making their own poor decisions and don’t need to go to an outside company to get bad advice.


Huh… I didn’t realize that. I’m actually not sure how you would get it to take a new capture.
So, this may be the most inconsequential leak in a long time. Grats to Condé Nast for not storing a zillion pieces of data on all their customers.