Curved screens are often significantly harder (and more expensive, even at independent shops) to replace than standard flat screens.
It’s not just random jitter; it likely also adds context, including the device you’re using, other recent queries, and your approximate location (like what state you’re in).
I don’t work for Google, but I am somewhat close to a major AI product, and it’s pretty much the industry standard to give the model some contextual info in addition to your query. It’s also generally not “one model” but a set of models run in sequence, with the LLM (think ChatGPT) only employed at the end to generate a paragraph from a conclusion and evidence found by a previous model.
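As a sketch of what that kind of pipeline looks like (the stage names, context fields, and toy query here are all hypothetical, not any company’s actual internals):

```python
# Hypothetical multi-stage answer pipeline. Only the last stage is an
# LLM; the earlier stages are stand-ins for retrieval/ranking models.
from dataclasses import dataclass


@dataclass
class QueryContext:
    query: str
    device: str           # e.g. "android-phone"
    recent_queries: list  # prior searches, for disambiguation
    region: str           # coarse location, e.g. "CA"


def retrieve_evidence(ctx: QueryContext) -> list:
    # Stand-in for a retrieval model searching an index,
    # using the extra context alongside the raw query.
    return [f"doc about {ctx.query} (region {ctx.region})"]


def draw_conclusion(evidence: list) -> str:
    # Stand-in for a ranking/classification model that picks
    # the conclusion the final stage should write up.
    return evidence[0]


def generate_answer(conclusion: str, evidence: list) -> str:
    # Only this step would be the LLM (think ChatGPT), turning
    # the conclusion + evidence into a readable paragraph.
    return f"Based on {len(evidence)} source(s): {conclusion}"


ctx = QueryContext("battery recall", "android-phone", ["phone battery"], "CA")
evidence = retrieve_evidence(ctx)
answer = generate_answer(draw_conclusion(evidence), evidence)
print(answer)
```

The point being: the expensive generative step runs once, at the end, over a small amount of pre-digested evidence.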
Relaying a key signal 20 ft when you know the key is there isn’t too tricky, like when you’re home. But I would argue that trying to relay a signal across hundreds of feet, across a busy mall or store, when you’re not even sure the owner is there, is quite another thing. You can also require that the IR blaster be in the car before starting. There’s also a technology Google has been using for a while now where one device (the car) emits a constant ultrasonic signal for the other device (the key) to pick up on, to determine whether they are close to each other. That works through clothing, but can’t easily be relayed.
A potentially better idea: add a gyroscope to the key fob, and stop broadcasting after the fob has been perfectly still for some threshold. That way, when you set it down inside, it can’t be relayed, but if it’s in your pocket it won’t remain perfectly still and will keep transmitting. You could also add an IR blaster to detect if you set it down in the car. Battery life would start to become a bigger issue, but I think solutions to these problems could be engineered.
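The stillness idea is simple enough to sketch. The timeout and noise-floor numbers below are made-up placeholders a real fob would have to tune:

```python
# Sketch of "stop broadcasting when still": the fob keeps transmitting
# only while motion was detected within the last STILL_TIMEOUT seconds.
STILL_TIMEOUT = 60.0   # assumed cutoff before going silent, in seconds
MOTION_EPSILON = 0.02  # assumed gyro noise floor for "perfectly still"


def should_broadcast(samples, now):
    """samples: list of (timestamp, angular_rate) gyroscope readings."""
    last_motion = None
    for t, rate in samples:
        if abs(rate) > MOTION_EPSILON:
            last_motion = t
    if last_motion is None:
        return False  # never moved at all: stay silent
    return (now - last_motion) < STILL_TIMEOUT


# A fob jostling in a pocket keeps transmitting...
assert should_broadcast([(0, 0.5), (30, 0.3)], now=40) is True
# ...but one set down on the kitchen counter goes quiet after a minute.
assert should_broadcast([(0, 0.5), (10, 0.0)], now=120) is False
```

A real implementation would run this in the fob’s firmware off an interrupt rather than over a list of samples, but the gating logic is the same.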
Indexing and lookups on datasets as big as the ones companies like Google and Amazon run also take trillions of operations to complete, especially once you account for the constant reindexing that needs to be done. In some cases, encoding data into a neural network is actually cheaper than storing the data itself. You can see this in practice with Gaussian splatting point cloud capture, where networks are being trained to guide points in the cloud at runtime, rather than storing the positions of trillions of points over time.
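A back-of-envelope comparison shows why that can pay off. Every magnitude below is made up purely for illustration; nothing here reflects a real splatting model’s size:

```python
# Made-up comparison: storing a huge point cloud directly vs. storing
# the weights of a network that regenerates it on the fly.
points = 1_000_000_000_000   # "trillions of points" from the claim above
bytes_per_point = 3 * 4      # x, y, z as 32-bit floats
raw_storage = points * bytes_per_point

network_params = 50_000_000  # hypothetical model size
bytes_per_param = 4          # fp32 weights
network_storage = network_params * bytes_per_param

# With these (invented) numbers, the raw data is 60,000x larger.
print(raw_storage // network_storage)
```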
I firmly believe it will slow down significantly. My prediction for the future is that there will be a much bigger focus on a few “base” models that will be tweaked slightly for different roles, rather than “from the ground up” retraining like we see now. The industry is already starting to move in that direction.
While I agree in principle, one thing I’d like to clarify is that TRAINING is super energy intensive; once the network is trained, it’s more or less static. Actually using the network doesn’t take dramatically more energy than any other indexed database lookup.
Actually, Windows does allow you to use an alternate “compositor”, a feature which is used quite frequently in the industrial/embedded space. Windows calls them “custom shells”. The default is Explorer, but it can be set to any executable.
https://learn.microsoft.com/en-us/windows/iot/iot-enterprise/customize/shell-launcher
Autopilot maintains altitude and heading between waypoints, and in some (ideal) situations can automatically land the aircraft. In terms of piloting an aircraft, it can handle the middle of the journey entirely autonomously, and sometimes even the end (landing).
Autopilot (the Tesla feature) is not rated to drive the car autonomously, requires constant human supervision, and can automatically disengage at any time. Despite being sold as an “autonomous driver”, it cannot function as one the way autopilot on a plane can. The name clearly invokes an aircraft’s autopilot to imply that the car can pilot itself through at least the middle of the journey without direct supervision (which it can’t). That is misrepresentation.
Investigate it? The dude literally named it “autopilot”; what is there to investigate? They market this explicitly in their advertising.
I could 100% see them offering user replaceable memory, but with a slower max speed than factory installed. Gotta have something to point to when the regulators come a-knockin.
They still exist at some companies, they’re just a lot less common than they used to be.
proceeds to explicitly name 10 different biases back to back, requiring that the agent adhere to them
“We just want an unbiased AI guys!”
Apple TV seems to be doing just fine. I’m considering investing in one. I don’t think it’s likely they’ll start putting ads on the home screen and such.
There absolutely is, though. Implement a dispute process that loops in an actual human once a detection is triggered. Will that cost a lot of money and require a lot of people? Yeah. But that’s just the cost of doing business at the scale of a company like Google; it is (or should be) their duty.
Can’t do it silently, but it’s not uncommon for root certs to come along with a VPN. I wouldn’t be surprised to find that it’s built into the VPN profile API on Android and Apple devices.
The VPN adds its own root certs to the device, terminates TLS at the gateway, then establishes a second TLS tunnel back to the device.
Because of limitations Google imposes (and rightfully so, on Google’s part, to limit it). Any app can call the weather API on Android if it wants, assuming the user has given it permission and the app is running. Since Apple doesn’t control the Android OS, they have to conform to the same rules as any other app on the store, meaning they are subject to the OS stopping their watch companion app if the user never opens it (and honestly, as a daily smartwatch user, no one ever opens the companion app after setup). On iOS, Apple can waive these limitations for select apps, like their watch app, but on Android they have no such ability.
In short, they can call the weather API, but it’ll only work if the app is running, which, by all accounts on Android, as a third-party app, it shouldn’t be.
EDIT: I’ll also note that, having used both platforms (watchOS and Android watches), Google and Apple have taken fundamentally different approaches to their watches, in a way that makes cross-compatibility difficult.
Android watches are, fundamentally, an extension of the phone they run alongside. They primarily exist to give you notifications, and the vast majority of apps call home to the phone to have their companion app perform tasks.
Apple Watches, on the other hand, are fundamentally their own device. They receive notifications just like Android watches do, but they can also function entirely on their own; they even have cell radios built in. They are essentially their own small phone, only using the companion phone as a data connection if you don’t have a SIM for the built-in radio. This is not really something Android is designed to accommodate. Apple Watches only work well on iOS because Apple can exempt them from a lot of the restrictions third-party apps have; they are built in at the OS level, where Android watches just aren’t.
Sure, but a TV ad takes (at the least) an editor, or (at the most) a cast and crew. Ads take both money and time to create, and loop average working people into the process. Of course there will be people in any profession who will make whatever they’re paid to, but by and large, most of the acting/editing industry has some form of ethics.
Debunking false claims takes time too, but since creating them takes time as well, things have a chance to balance out (obviously that’s happening less, but there’s still a chance for it to happen). But if an AI model can pump out fake history autonomously, almost instantly, and without any chance for a human with ethics to intervene in the process, debunking/fighting misinformation becomes WAY harder. Because you’re not fighting a person with limited time and resources anymore; you’re fighting a firehose of false content that will bury you without even breaking a sweat.
I can totally believe that it detects AI-generated content 99% of the time; that’s trivial. What I really wanna know is the false positive rate. If I write a program that flags everything, it’d have a 100% hit rate. It’d also, however, have a crazy high false positive rate.
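To make that concrete, here’s a toy “detector” that flags everything, scored against a made-up sample set. It has a perfect detection rate and is still useless:

```python
# A "detector" that always says AI-generated: 100% detection rate,
# 100% false positive rate. The sample counts below are made up.
def flag_everything(_sample):
    return True


ai_samples = ["ai text"] * 10        # actually AI-generated
human_samples = ["human text"] * 990  # actually human-written

true_positives = sum(flag_everything(s) for s in ai_samples)
false_positives = sum(flag_everything(s) for s in human_samples)

detection_rate = true_positives / len(ai_samples)           # 1.0 (100%)
false_positive_rate = false_positives / len(human_samples)  # 1.0 (100%)
# Precision: of everything flagged, how much was actually AI? Only 1%.
precision = true_positives / (true_positives + false_positives)
print(detection_rate, false_positive_rate, precision)
```

Without the false positive rate (or precision), a “99% detection” claim tells you almost nothing.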