OpenAI was working on advanced model so powerful it alarmed staff: Reports say new model Q* fuelled safety fears, with workers airing their concerns to the board before CEO Sam Altman’s sacking
There’s a huge discrepancy between the scary warnings about Q*, calling it the lead-up to artificial superintelligence, and the actual discussion of its capabilities (it’s good enough at logic to solve some math problems).
My theory: the actual capabilities of Q* are perfectly nice and useful and unfrightening… but somebody pointed out the obvious: Q* can write code.
Either
“Q* is gonna take my job!”
or
“As we enhance Q*, it’s going to get better at writing code… and we’ll use Q* to write our AI code. This thing might not be our hypothetical digital God, but it might make it.”
Did they really have to name it Q?
It’s going to create a super intelligent AI that’s more irritating than anything else.
It’s apparently Q* (pronounced Q star).
It’s possible it’s related to the Q* function from Q-learning, a strategy used in deep reinforcement learning!
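For anyone curious: in reinforcement learning, Q*(s, a) is the standard notation for the optimal action-value function, i.e. the best expected return you can get by taking action a in state s and acting optimally afterwards. Here’s a minimal tabular Q-learning sketch on a made-up toy environment (my own illustration, nothing to do with whatever OpenAI actually built):

```python
import random

# Tabular Q-learning on a toy 5-state corridor (hypothetical example):
# actions are 0 = step left, 1 = step right; reaching the last state pays 1.
# Q[s][a] converges toward Q*(s, a), the optimal action-value function.
N, ACTIONS = 5, [0, 1]
alpha, gamma, eps = 0.1, 0.9, 0.1      # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N)]

def pick(s):
    # epsilon-greedy action selection with random tie-breaking
    if random.random() < eps:
        return random.choice(ACTIONS)
    best = max(Q[s])
    return random.choice([a for a in ACTIONS if Q[s][a] == best])

for _ in range(2000):                   # episodes
    s = 0
    while s != N - 1:
        a = pick(s)
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == N - 1 else 0.0
        # The whole algorithm is this one line: nudge Q(s, a) toward
        # r + gamma * max_a' Q(s', a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

for s in range(N - 1):
    print(s, Q[s])                      # “right” should win in every state
```

The “star” just means “optimal” in the RL literature, which is why people guessed the name might come from there.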
… or this is the origin of the Q and we’re all fucked. I find my hypothesis much more plausible.
plausible: check
testable: TBD
falsifiable: TBD
still, 1 out of 3. not bad!
At least they didn’t name it AM?
Nah. Programming is… really hard to automate, and machine learning even more so. The actual code for it is pretty straightforward, but to make anything useful you need to gather training data, clean it, and design the model architecture, and that’s much too open-ended a task for an LLM.
Programming is like 10% writing code and 90% managing client expectations, in my limited experience.
But a lot of the crap you have to do only exists because projects are large enough to require multiple separate teams, so you get all the overhead of communication between the teams, etc.
If the task gets simple enough that a single person can manage it, a lot of the coordination overhead will disappear too.
In the end, though, people may find out that the entire product they’re trying to develop through automation is no longer relevant anyway.
Programming is 10% writing code, 80% being up at 3 in the morning wondering whY THE FUCKING CODE WON’T RUN CORRECTLY (it was a typo that you missed despite looking at it over 10 times), and 10% managing expectations
Typos in programming aren’t really a thing, unless you’re using the shittiest tools possible.
Typos are very much a problem in programming. A variable can be set to the wrong value without the programmer noticing, you can call the wrong method (e.g. RotateZ instead of RotateX), and with more dynamic techniques like Java/C# reflection the IDE can’t even catch the mistake. A quick sketch of both below.
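Here’s what that looks like in practice (a Python sketch with getattr standing in for Java/C# reflection, but the failure mode is the same): the wrong-axis call is perfectly legal code that no tool flags, and the typo’d string only blows up at runtime.

```python
class Transform:
    def rotate_x(self, deg):
        print(f"rotating {deg} deg around X")

    def rotate_z(self, deg):
        print(f"rotating {deg} deg around Z")

t = Transform()
t.rotate_z(90)   # meant rotate_x; same signature, so nothing complains

# Reflection-style lookup: the method name is just a string, so a typo
# slips past the IDE and autocomplete entirely.
try:
    getattr(t, "rotate_y")(90)        # no such method
except AttributeError as e:
    print(f"only caught at runtime: {e}")
```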