Rather than creating a custom terminal app, could you create a user that only had permission to run the restricted commands, with a profile script that gets run at login and offers a menu of common tasks?
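A minimal sketch of what such a profile-launched menu could look like (the task names and commands here are placeholders, not from the original suggestion; the restricted user's sudoers entry would whitelist only the specific commands the menu runs):

```shell
#!/bin/sh
# Sketch of a login menu for a restricted user.
# Could be launched from the user's ~/.profile with
#   exec /usr/local/bin/task-menu
# so that leaving the menu also ends the login session.

task_menu() {
    while true; do
        echo "Select a task:"
        echo "  1) Show disk usage"
        echo "  2) Show system load"
        echo "  3) Log out"
        printf "> "
        read -r choice || return 0   # EOF (Ctrl-D) also exits
        case "$choice" in
            1) df -h ;;              # stand-in for an allowed command
            2) uptime ;;             # stand-in for an allowed command
            3) return 0 ;;
            *) echo "Unrecognized option: $choice" ;;
        esac
    done
}

# In real use the profile script would call: task_menu
```

Pairing this with a locked-down shell (or `rbash`) and a tightly scoped sudoers entry would keep the user inside the menu's set of commands.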
“We will now remove content that targets ‘Zionists’ with dehumanizing comparisons, calls for harm, or denials of existence on the basis that ‘Zionist’ in those instances often appears to be a proxy for Jewish or Israeli people,” Meta said in a blog post.
So dehumanization and calls for harm are fine, as long as the target isn’t a proxy for Jews or Israelis?
I guess having a lot of unhappy customers implies that a lot of people previously purchased the product.
Michaels, 79, told Vanity Fair in an interview published Wednesday that he was initially “very skeptical” of the proposal from NBCUniversal executives — until he heard the AI-generated version of his speaking voice, which is capable of greeting viewers by name.
Was this a phone interview, by any chance?
I propose detecting atmospheric anomalies induced by their infinite improbability drives.
While the labels give retailers the ability to increase prices suddenly, Gallino doubts companies like Walmart will take advantage of the technology in that way. “To be honest, I don’t think that’s the underlying main driver of this,” Gallino said. “These are companies that tend to have a long-term relationship with their customers and I think the risk of frustrating them could be too risky, so I would be surprised if they try to do that.”
How to tell if an academic doesn’t get out enough.
An organoid is not a single cell—each one can have thousands of neurons, depending on the size.
Here’s a video that starts with a good general overview of brain organoids:
Can it do backpropagation?
> this data is not the world, but discourse about the world
To be fair, the things most people talk about are things they’ve read or heard of, not their own direct personal experiences. We’ve all been putting our faith in the accuracy of this “discourse about the world”, long before LLMs came along.
Some of them pass within “a few dozen kilometers”, while others are at “a large distance” but are in orbits that could be quickly changed to put them closer.
TLDR: The purpose and capabilities of the satellites are unknown, but they’re being deployed suspiciously close to US surveillance satellites.
This subthread switched specifically to the topic of their pending lawsuits.
Because Internet Archive implied a potential connection to the DDoS attack. And given the large-institution scale of the attack and the lack of motivation for any other actors on that scale, it seems like the most plausible explanation.
Edit: And I’m not sure where you’re trying to go with this whole subthread—you tried to narrow the topic exclusively to the legal case by arguing that the case is unrelated to the DDoS attack, while at the same time pointing to the lawsuit to imply that IA had it coming.
It’s an open-and-shut case and everyone saw it coming.
And yet whoever’s doing this evidently doesn’t expect to succeed via legal means.
Existing AI companies got their data long ago—but it’s in their interest to create barriers for entry to new AI companies.
Does it need to be accessible via API (e.g. SQL) or just a spreadsheet-style web interface?
Someone with their own proprietary large dataset trying to eliminate non-proprietary alternatives?
What about the usage demographics within each country?
In underdeveloped/exploited countries, internet usage is more likely to be concentrated among the economic elites who formerly benefited from colonialism—so if increasing adoption in those countries just follows the pattern of other internet use, it could have the opposite effect from the one intended.
My criticism of Neuralink’s response has nothing to do with whether or not the first patient was treated unfairly. It’s that it reveals Neuralink’s priorities: they had a choice going forward of trying to fix the first patient’s implant or giving up and starting over with a fresh patient, and they chose the latter.
In animal testing, that decision would depend on how valuable the guinea pigs are.
For those not wanting to read the article, note that they revealed (to employees) a progress framework, not any actual progress.
The framework is just a five-tiered classification of potential future AIs: Chatbots (1); Reasoners (2); Agents (3); Innovators (4); and Organizations (5). They characterize their current progress as near level 2, but there's no indication of recent progress that would be newsworthy in its own right.