I don’t think there is a car where the seat belt is tied to anything besides a little notification beep. Seems like a different situation if the “safety” feature dictates how the car is used.
Yeah that’s right, seems my link didn’t populate right.
Do you still use WASM? I’ve been exploring the space and wasn’t sure what the best tools are for developing in it.
Definitely sounds like it could be real. If I had to guess, they’re mounting a drive (or another partition) and it’s defaulting to read-only. Restarting resets the original permissions because they only updated the file permissions, not the mount configuration.
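If you want to check whether that’s what’s going on, here’s a minimal sketch, assuming Linux (where /proc/mounts lists each mount’s options); the path at the bottom is a hypothetical mount point:

```python
# Check whether the filesystem under a path is mounted read-only
# by parsing /proc/mounts (Linux only).
import os

def mount_is_readonly(path):
    path = os.path.realpath(path)
    best, ro = "", False
    with open("/proc/mounts") as f:
        for line in f:
            _, mnt, _, opts, *_ = line.split()
            contains = path == mnt or path.startswith(mnt.rstrip("/") + "/")
            # the longest mount point that contains the path wins
            if contains and len(mnt) > len(best):
                best, ro = mnt, "ro" in opts.split(",")
    return ro

print(mount_is_readonly("/mnt/data"))  # hypothetical mount point
```

If that prints True, no amount of chmod will survive a reboot; the mount options (e.g., the fstab entry) are what need to change.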
Also reads like some of my frustrations when first getting into Linux (and the issues I occasionally run into still).
These are just the estimates to train the model, so they don’t account for the cost of developing the training system, collecting the data, etc. It’s pure processing cost, and the numbers are staggeringly large.
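For a sense of where estimates like that come from, here’s a back-of-envelope sketch using the common ~6·N·D FLOPs approximation (N parameters, D training tokens). Every number below is an assumption for illustration, not a figure from the article:

```python
# Rough training-cost estimate: compute ~= 6 * params * tokens FLOPs,
# divided by effective GPU throughput, priced per GPU-hour.
params = 70e9                # 70B-parameter model (assumed)
tokens = 1.4e12              # 1.4T training tokens (assumed)
flops = 6 * params * tokens

gpu_flops = 3e14             # ~300 TFLOP/s effective per GPU (assumed)
dollars_per_gpu_hour = 2.0   # cloud price (assumed)

gpu_hours = flops / gpu_flops / 3600
print(f"{gpu_hours:,.0f} GPU-hours, ~${gpu_hours * dollars_per_gpu_hour:,.0f}")
```

Even with those fairly conservative inputs it comes out to hundreds of thousands of GPU-hours, and that’s before any failed runs, ablations, or the staff around it.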
I think you’re missing the point. No LLM can do math; most humans can. No LLM can learn new information; all humans can and do (maybe to varying degrees, but still).
And just to clarify what I mean by not being able to do math: there’s a lack of understanding of how numbers work, so combining numbers or values outside of the training data can easily trip them up. Since it’s prediction based, exponents/trig functions/etc. will quickly produce errors when using large values.
Here’s an easy way we’re different: we can learn new things. LLMs are static models, which is why OpenAI mentions knowledge cutoff dates for its models.
Another is that LLMs can’t do math. Deep Learning models are limited to their input domain. When asking an LLM to do math outside of its training data, it’s almost guaranteed to fail.
Yes, they are very impressive models, but they’re a long way from AGI.
LLMs do suck at math. If you look into it, the o1 models actually escape the LLM output and write a Python function to calculate the result; I’ve been able to break their math by asking for functions that use math not in the standard Python library.
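For anyone curious, the pattern looks roughly like this; a minimal sketch, not OpenAI’s actual implementation, and the function name is made up:

```python
# "Escape to Python" pattern: instead of having the model predict digits,
# run the code it emits and return the computed result.
import math

def run_model_math(generated_code):
    # In the real pipeline this string comes from the model;
    # it's hard-coded below for illustration.
    scope = {"math": math}
    exec(generated_code, scope)
    return scope["answer"]

# Something a model might emit for "what is 3.7 ** 41?"
print(run_model_math("answer = 3.7 ** 41"))
```

And that’s exactly the weak spot: if the generated code reaches for a library that isn’t in the sandbox, execution fails instead of producing an answer.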
I know someone also wrote a Wolfram integration to help solve LLMs’ math problems.
Not sure if you’re serious, but they were making a joke: Intel, which makes chips, is a competitor to TSMC, the chip manufacturer from the article.
So they played on that relationship by treating the word Intel in your “thanks for the Intel” comment as meaning the company.
While you’re probably right, I think it’s total numbers that probably matter more for these things. Reddit could lose a number of niche communities and most users wouldn’t notice due to its size. They can also hemorrhage more people and content before it becomes apparent to the average user.
All the evolution in AI right now is just trying different model designs and/or data. It’s not one model that is being continuously refined or modified. Each iteration is just a new set of static weights/numbers that defines its calculations.
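To make “static weights” concrete, here’s a toy sketch with a tiny matrix standing in for an LLM (NumPy assumed): run inference as many times as you like and the weights never change.

```python
# A trained model is just frozen numbers; inference only reads them.
import hashlib
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))  # "the model": one static weight matrix

def infer(x):
    return W @ x  # forward pass reads W, never writes it

before = hashlib.sha256(W.tobytes()).hexdigest()
for _ in range(1000):
    infer(rng.normal(size=8))
after = hashlib.sha256(W.tobytes()).hexdigest()

print(before == after)  # True: a thousand "experiences", zero learning
```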
If the models were changing/updating through experience maybe what you’re writing would make sense, but that’s not the state of AI/ML development.
It can emulate the Switch, and I’ve tried it out with pretty good success, but I’m not sure if there are any tradeoffs.
This approach has been around for a while and there are a number of applications/systems that were using it. The thing is that it’s not a different model; it’s just a different use case.
It’s the same way OpenAI handles math: they recognize that the prompt is asking for a math solution and actually have the model produce a Python solution and run it. You can’t integrate it into the model because these are engineering solutions to make up for the model’s limitations.
I mean I can list a lot of things AI (and I’ll limit it to Transformers, the advancement that drives LLMs) has enabled:
AI isn’t a scam, but it’s being oversold and its limitations are being purposefully hidden. That being said, it is changing how things are done and that’s not going to stop. We’re still seeing CNNs, one of the major AI/ML breakthroughs from over a decade ago, make an impact.
What was so obvious in that instance was that the board members trying to push him out were calling out the lack of openness OpenAI was trending towards. They were literally calling him out for not upholding the vision the company was founded on.
All the engineers clearly saw their payday slipping away and revolted for that reason. Can’t say I blame them, but it was a scenario where the board was actually doing the right thing and everyone turned on them for profit.
Originally all their work was supposed to be published and shared with the world, hence the “open” in OpenAI. However, somewhere along the way they created a for-profit offshoot of the original company and started pulling everything in that direction.
This is what I was going to say.
Microsoft actually ported their keyboard to Android; it’s called “Microsoft SwiftKey” or similar. It’s a great keyboard, but apparently it now has Copilot ಠ_ಠ
Fair point, then, about the argument around safety. For me the bigger issue is control. Cars with kill switches and conditions on use are a slippery slope. Just look at what’s happened with software and media. I don’t want to have to pirate my car or load custom firmware so I can use it as I want.