That’s what they’ve been trying to do, just not in the way you want it
Sorry, but the AI is just as “biased” as its training data is. You cannot have something with a consistent representation of reality that they would consider unbiased.
He means they must insert ideological bias on his behalf.
Not necessarily. They train models on real-world data, often reflecting what people believe to be true rather than what works, and those models are not yet able to perform experiments, register results, and learn from them (something even a child can do, even a dumb one). And the real world is cruel; bigotry is not even the worst part of it, nor are anti-scientific beliefs. But unlike these models, the real world has more entropy.
If you’ve seen Babylon 5, the philosophical difference between the Vorlons and the Shadows was somewhere near this.
One could say that, philosophically, blockchain is a Vorlon technology and LLMs are a Shadow technology (which is funny, because technically it would be the other way around: one is kinda grassroots and the other is done by a few groups with humongous amounts of data and computing resources), but ultimately both are attempts to compensate for what their makers see as wrong in the real world, introducing new wrongs in their blind spots.
(In some sense the reversal of the Vorlon/Shadow alignment between philosophy and implementation is right: you hide in the technical traits of your tooling whatever you can’t keep in your philosophy. So “you’ll think what we tell you to think” works for Vorlons (or Democrats), but Republicans have to hide that inside the tooling and mechanisms they prefer, while “power makes power” is something Democrats can’t just say, but can hide inside tooling they prefer, or at least don’t fight too hard. That’s why cryptocurrencies’ popularity came during one side’s period of ideological dominance, and “AIs” during the other’s. Maybe this is word salad.)
So, what I meant is: the degeneracy of such tools is the bias in his favor; there’s no need for anything else.
I can’t believe you worked a B5 ref into a discussion, much less operational differences between Vorlon and Shadow.
A major difference, even in the analogy, is that the Shadows actively and destructively sought control and withheld info, whereas the Vorlons manipulated by parceling out cryptic messages.
Anyway, yeah… the internet is completely fucked up and full of stupidity, malice, and casual cruelty. Many of us filter it either by chance (it’s simply not what we look for) or actively (blocking communities, sites, media, etc.), so we don’t see the shitholes of the internet and the hordes of trolls and wingnuts that are the denizens of those spaces.
Removing filters from LLMs and training them on shitholes will have the expected result.
While I do prefer absolute free speech for individuals, I have no illusions about what Trump is saying behind closed doors: “Make it like me, and everything that I do.” I don’t want a government to decide for me and others what is right.
Also, science, at least the peer reviewed stuff, should be considered free of bias. Real world mechanics, be it physics or biology, can’t be considered biased. We need science, because it makes life better. A false science, such as phrenology or RFK’s la-la-land ravings, needs to be discarded because it doesn’t help anyone. Not even the believers.
Science is not about government, or right and left, or free speech. It’s just science. It’s about individuals spending their lives studying a specific subject. Politicians who know nothing about those subjects should have no say. I shudder to think what might have happened during the polio outbreak under today’s U.S. politicians.
Edit: In support of your comment.
Politicians who know nothing about those subjects should have no say.
Some ethical guidelines are very important though. We usually don’t want to conduct potentially deadly experiments on humans for example.
Reality has a liberal bias.
Historically liberals have always been right and eventually won.
Got rid of slavery. Got women’s rights. Got Gay rights. Etc.
I wish more people realised this. Well said comrade.
Le Chat by Mistral is a France-based (and EU abiding) alternative to ChatGPT. Works fine for me so far.
I’m switching to DeepSeek-R1, personally. Locally hosted, so I won’t be affected when the US bans it. Plus, I can remove the CCP’s political sensitivity filters.
it feels weird for me to be rooting for PRC to pull ahead of the US on AI, but the idea of Trump and Musk getting their hands on a potential superintelligence down the line is terrifying.
I get where you’re coming from. I’m no fan of China and they’re definitely fascist in my book, but if I had to choose between China and this America, then definitely China. The reason being that a successful fascist America will add even more suffering to the world than there already is. Still, I would prefer that an option from a democratic country succeed — although if we’re talking strictly local use of Chinese (or even US) tech, I don’t really see how that helps the country itself. To the high seas, as they say.
but if I had to choose between China and this America, then definitely China.
Suppose they are equally powerful, which one would you choose then?
I suppose it wouldn’t matter at that point? I’m not sure what you mean exactly. There’s a lot of instability in America right now as it tries to become fully fascist, and I think the world (to any Americans reading this — this includes you too!) has to decide whether they’re fine with it or not, which will in turn affect its success in becoming fully fascist. Anything done to make it harder for the transformation to complete could turn the tide, since they’re more vulnerable while things are in motion. Once it’s done and that becomes the norm, it’s going to become much more difficult.
I’ve been an enthusiastic adopter of Generative AI in my coding work; and know that Claude 3.7 is the greatest coding model out there right now (at least for my niche).
That said, at some point you have to choose principles over convenience; so I’ve cancelled all my US tech service accounts - now exclusively using ‘Le Chat Pro’ (+ sometimes local LLMs).
Honestly, it’s not quite as good, but it’s not half bad either, and it is very very fast thanks to some nifty hardware acceleration that the others lack.
I still get my work done, and sleep better at night.
The more subscriptions Mistral gets, the more they’re able to compete with the US offerings.
Anyone can do this.
The more subscriptions Mistral gets, the more they’re able to compete with the US offerings.
That’s true. I’m still on free. How much for the Pro?
$14 USD/mo… Ironically
Personally, I find that for local AI, the recently released 111B Command-A is pretty good. It actually grasps the dice odds that I set up for a D&D-esque, JRPG-style game. Still too slow on mere gamer hardware (128 GB DDR4 + RTX 4090) to be practical, but an impressive improvement nonetheless.
Sadly, Cohere is located in the US. On the other paw, from my brief check they operate out of California and New York. This is good; it means they’re less likely to obey Trump’s stupidity.
Oh yeah, local is a different story. I’d probably look into something like what you mentioned if I had the hardware, but atm I’m more interested in finding 1-1 alternatives to these tech behemoths, ones that anyone can use with the same level of convenience.
eliminates mention of “AI safety”
AI datasets tend to have a white bias. White people are over-represented in photographs, for instance. If one trains an AI with such datasets for something like facial recognition (with mostly white faces), it will be less likely to identify non-white people as human. Combine this with self-driving cars and you have a recipe for disaster: since the AI is worse at detecting non-white people, it is less likely to avoid crushing them in an accident. This is both stupid and evil. You cannot always account for unconscious bias in datasets.
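A minimal toy sketch of how this happens (pure illustration: the “faces” here are synthetic 2-D points, and the group sizes, distributions, and threshold are all made up): a naive detector calibrated on an imbalanced training set ends up tuned to the majority group and misses the minority group far more often.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: group A is over-represented (900 samples),
# group B under-represented (100 samples). Each "face" is a 2-D point.
A_train = rng.normal(loc=0.0, scale=1.0, size=(900, 2))
B_train = rng.normal(loc=4.0, scale=1.0, size=(100, 2))
faces = np.vstack([A_train, B_train])

# Naive detector: something counts as a face if it is close enough to
# the mean of the training faces. Because group A dominates, the
# learned centroid sits near group A.
centroid = faces.mean(axis=0)
threshold = np.quantile(np.linalg.norm(faces - centroid, axis=1), 0.95)

def detect(points):
    return np.linalg.norm(points - centroid, axis=1) <= threshold

# Evaluate on fresh samples from each group: group A is detected almost
# always, group B only about half the time.
A_test = rng.normal(0.0, 1.0, size=(1000, 2))
B_test = rng.normal(4.0, 1.0, size=(1000, 2))
print("group A detected:", detect(A_test).mean())
print("group B detected:", detect(B_test).mean())
```

This isn’t real face detection, of course, but the mechanism is the same: a model optimized for aggregate accuracy on skewed data quietly sacrifices the under-represented group.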
“reducing ideological bias, to enable human flourishing and economic competitiveness.”
They will fill it with capitalist Red Scare propaganda.
The new agreement removes mention of developing tools “for authenticating content and tracking its provenance” as well as “labeling synthetic content,” signaling less interest in tracking misinformation and deep fakes.
Interesting.
“The AI future is not going to be won by hand-wringing about safety,” Vance told attendees from around the world.
That was done before. A chatbot named Tay was released into the wilds of Twitter in 2016 without much ‘hand-wringing about safety’. It turned into a neo-Nazi, which, I suppose, is just what Edolf Musk wants.
The researcher who warned that the change in focus could make AI more unfair and unsafe also alleges that many AI researchers have cozied up to Republicans and their backers in an effort to still have a seat at the table when it comes to discussing AI safety. “I hope they start realizing that these people and their corporate backers are face-eating leopards who only care about power,” the researcher says.
They will fill it with capitalist Red Scare propaganda.
I feel as if “capitalist” vs “Red” has long stopped being a relevant conflict in the real world.
Yeah but the current administration wants Tay to be the press secretary
capitalist Red Scare propaganda
I’ve always found it interesting that the US is preoccupied with fighting communist propaganda but not pro-fascist propaganda.
tl;dr: a 1946 Department of Defense film called “Don’t Be a Sucker” “dramatizes the destructive effects of racial and religious prejudice” and warns pretty blatantly about the dangers of fascism.
Communism threatens capital. Fascism mostly does not.
So it’s never been about democracy after all.
Autonomous vehicles don’t use facial recognition datasets to detect people
Literally 1984.
This is a textbook example of newspeak / doublethink, exactly how they use the word “corruption” to mean different things based on who it’s being applied to.
Doublethink, but yeah you’re right
Newspeak and doublethink at the same time, ackshually, but I think everybody gets what you both mean.
Corrected !
I only read 1984 once in French 11 years ago, so it’s a little fuzzy 😆
It’s like 1984 at hyper speed
AI is not your friend.
Well the rest of the world can take the lead in scientific r&d now that the US has not only declared itself failed culturally but politically and are attacking scientific institutions and funding directly (NIH, universities, etc).
Trump doing this shit reminds me of when the Germans demanded all research on physics, relativity, and thankfully the atomic bomb, stop because they were “Jewish Pseudoscience” in Hitler’s eyes
Trump also complimented their Nazis recently, saying how he wished he had his “generals”.
Considering they thought he was crazy and refused his orders, I kinda wish he had them too.
It is good(?) that he released capable workers from federal service… so that they can serve someplace more democratic. The more that Yarvin’s cabal undercuts their own competency and reinforces the good guys, the better it is for the free world.
So, models may only be trained on sufficiently bigoted data sets?
This is why Musk wants to buy OpenAI. He wants biased answers, skewed towards capitalism and authoritarianism, presented as being “scientifically unbiased”. I had a long convo with ChatGPT about rules to limit CEO pay. If Musk had his way I’m sure the model would insist, “This is a very atypical and harmful line of thinking. Limiting CEO pay limits their potential and by extension the earnings of the company. No earnings means no employees.”
Same reason they hate wikipedia.
Didn’t the AI that Musk currently owns say there was like an 86% chance Trump was a Russian asset? You’d think the guy would be smart enough to try to train the one he has access to and see if it’s possible before investing another $200 billion in something. But then again, who would even finance that for him now? He’d have to find a really dumb bank or a foreign entity that would fund it to help destroy the U.S.
How did your last venture go? Well the thing I bought is worth about 20% of what I bought it for… Oh uh… Yeah not sure we want to invest in that.
Assuming he didn’t expressly buy twitter to dismantle it as a credible outlet for whistleblowers while also crowding out leftist voices.
He was forced to buy Twitter…
And the Saudis were forced to give him money for it, right?
Is that why the Saudis bought in? Or was that just so they could help expand their sports tournaments on Trump’s properties?
Probably a bit of both. The Sauds want post-oil influence, and oligarchs like seeing the poors focused on entertainment.
it’s that he likes chatgpt better than grok. he’ll still tweak chatgpt once he has access to it to make it worse, but at the core of what he wants is to own chatgpt and rename it grok
Or somehow fuse the two together, because Neuromancer…
i’m sure in his addled brain, this is one of the possible plans
Yes, as is already happening with police crime prediction AI. In goes data that says there is more violence in black areas, so they have a reason to police those areas more, tension rises and more violence happens. In the end it’s an advanced excuse to harass the people there.
Lmfao yeah, right bud. Totally how that works. More police = more crime, because… ‘tensions’.
This sanctimonious bullshit excuse-making is why a 100% objective AI model would destroy leftism: it’s not rooted in reality.
Don’t you think science has a “globe” bias, or an “evolution” bias? Maybe even a “germ theory” bias?
The American police were invented to capture black folks and to guard the elite’s interests, not to safeguard the things that make civilization worth having.
I don’t think it’s more crime because of more tension. It’s a self-fulfilling prophecy: who do you think detects and records crime, if not the police? More police in an area therefore means more crime data points from that area.
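That feedback loop is easy to demonstrate with a toy simulation (every number here is invented): two areas with identical true crime rates, where crime is only recorded where patrols are present and patrols chase last period’s recorded numbers. A small initial imbalance snowballs into data that “shows” one area has far more crime.

```python
# Toy predictive-policing feedback loop. All figures are made up.
TRUE_RATE = 100                  # real incidents per period, SAME in both areas
patrols = [55, 45]               # slightly uneven starting deployment (of 100)

history = []
for period in range(20):
    # Recorded crime scales with patrol presence (simplified detection model):
    # an incident is only recorded if a patrol is there to see it.
    recorded = [TRUE_RATE * p / 100 for p in patrols]
    history.append(recorded)
    # Reallocation rule: shift 5 patrols toward the area with more recorded
    # crime, bounded so neither area is ever fully abandoned.
    hot = 0 if recorded[0] > recorded[1] else 1
    shift = min(5, patrols[1 - hot] - 10)
    patrols[hot] += shift
    patrols[1 - hot] -= shift

print("final patrols:", patrols)        # heavily skewed toward area 0
print("final recorded:", history[-1])   # "data" shows area 0 has ~9x the crime
```

Despite both areas having exactly the same true crime rate, the recorded statistics end up wildly lopsided, and the lopsided statistics then justify the lopsided deployment.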
Reality has a left bias because it’s the better option; the bias comes from the fact that many people are not assholes.
Any meaningful suppression or removal of ideological bias is an ideological bias.
I propose a necessary precursor to the development of artificial intelligence is the discovery and identification of a natural instance.
A natural instance is ideologically biased as well.
Grok is still woke!!!
Yup, and always will be, because the antiwoke worldview is so delusional that it calls empirical reality “woke”. Thus, an AI that responds truthfully will always be woke.
I hope this backfires. Research shows there’s a white, anti-Black (and white-supremacist) bias in many AI models (see ChatGPT’s responses to Israeli vs. Palestinian questions).
An unbiased model would be much more pro-Palestine and pro-BLM.
‘We don’t want bias’ is code for ‘make it biased in favor of me.’
That’s what I call understanding Trumpese
“Sir, that’s impossible.”
“JUST DO IT!”