

This was sold by Foveon, which had some interesting differences. The sensors were layered instead of using a color filter mosaic, which, among other things, meant that moiré patterns didn’t show up on them.


This doesn’t appear to be made by anyone from either the Raspberry Pi Foundation or Raspberry Pi Holdings.


A bad setup isn’t evidence that the underlying idea is bad. Whatever your opinion of cars, arguing about how bad they would be if everyone drove drunk doesn’t really prove your point.
In any security system, and this should also apply to home automation, one of the things you have to account for is failure. If you don’t have a graceful failure mode, you will have a bad time.

And context matters. If my security system fails at home, defaulting to unlocked doors makes sense, especially if it’s due to an emergency like a fire. If the security system in a virology lab fails, you probably don’t want all the doors unlocked, and you may decide that none of them should unlock, because the consequences of open doors are greater than the consequences of locked ones.

Likewise, on a much less serious note, if your home automation fails, you should still have some way of controlling the lights. If you don’t, then again, it hasn’t failed gracefully.


You’re still not getting it. A proper smart home will know when you want certain things. You’re going into the bathroom to get ready for work? The lights are programmed for full intensity. In the middle of your sleep period, they go to the pre-programmed dim mode. And most rooms will be used in certain ways, as defined by you: if you’re in the living room and turn the TV on, the lights dim, because that’s what you told it to do.

Or you have an EV to charge. The house knows how long your EV needs to charge and what electricity costs you during certain periods, so you plug the car in and it charges during the cheap hours while still being ready when it’s time to go to work.

This is where smart homes start to shine - they handle all the default things you would normally do, and you just live your life and deal with the exceptions as needed. If you use a room 3 different ways, you set up those 3 different ways and make the typical one your default. Now you’re back to exceptions. And the more regular your habits are, the better it works for you. Most people have a preferred way they want things, modified by how much effort it takes to get there and other circumstances. With the right sensors, timers, etc., most of those can be accounted for.
So maybe you start with lights turning on when you enter the room, but if you do it right you get to the point where you barely think about lights at all - they’re just how you want them to be. Why would you not want that? However little effort lights take to manage, why do you want them to take any effort at all? And there are many more things than lights, some of which just make life easier, or more comfortable, or cheaper, all of which are good reasons to want this.
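The EV-charging case above boils down to a tiny scheduling rule: given per-hour prices and the hours of charge the car needs, pick the cheapest contiguous window before departure. A minimal sketch (all names and the sample prices are hypothetical, not any real home-automation API):

```python
def cheapest_charge_start(prices, hours_needed):
    """Return the start index of the cheapest contiguous charging window.

    prices: list of per-hour electricity prices, index 0 = now.
    hours_needed: whole hours of charging the car needs before departure.
    """
    if hours_needed > len(prices):
        raise ValueError("not enough time before departure")
    # Cost of every candidate window; pick the cheapest starting hour.
    costs = [sum(prices[i:i + hours_needed])
             for i in range(len(prices) - hours_needed + 1)]
    return costs.index(min(costs))

# Made-up overnight prices, 10pm to 8am (10 hourly slots); car needs 4 hours.
overnight = [0.30, 0.28, 0.15, 0.12, 0.10, 0.11, 0.14, 0.22, 0.27, 0.31]
start = cheapest_charge_start(overnight, 4)  # → 3, i.e. start at 1am
```

A real setup would also handle non-contiguous charging and price forecasts, but the point stands: the house does the arithmetic once, and you just plug the car in.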


If they ramp up production and the bottom falls out of AI, they could be left with large inventories of product, and people may still be reluctant to buy. One way to increase demand is to lower prices. Now, if only one company is in this position, things may not change much. But if more than one is, one of them can supply the market at a price that’s acceptable to both them and consumers.
Or those companies can collude and just completely fuck over customers. But that would never happen, right?


There was a story about a researcher using evolutionary algorithms to build more efficient systems on FPGAs. One of the weird shortcuts: some component normally required a clock circuit, but none was available, so the evolved design included a dead-end circuit that would give an electric pulse when used, serving as a makeshift clock. The big problem was that the efficiency gains often relied on quirks of the specific board, so his next step was to start testing the results on multiple FPGAs and using the overall fitness to get past those quirks/shortcuts.
Pretty sure this was before 2010. Found a possible link from 2001.
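The fix described above - scoring each candidate by its fitness averaged over several boards - drops into any standard genetic-algorithm loop. A toy sketch (purely illustrative, nothing like the researcher's actual setup; each board's "quirk" position here is a stand-in for board-specific analog behavior):

```python
import random

random.seed(0)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # made-up "correct" configuration

def fitness_on_board(bits, quirk):
    """Toy per-board fitness: count bits matching TARGET, except the
    board's 'quirk' position is scored backwards (a board-specific flaw
    a candidate could exploit)."""
    score = 0
    for i, (b, t) in enumerate(zip(bits, TARGET)):
        good = (b == t)
        if i == quirk:
            good = not good
        score += good
    return score

def overall_fitness(bits, boards):
    """Average fitness across all boards, so designs that exploit one
    board's quirk stop winning."""
    return sum(fitness_on_board(bits, q) for q in boards) / len(boards)

def evolve(boards, pop_size=30, generations=60):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda b: overall_fitness(b, boards), reverse=True)
        survivors = pop[:pop_size // 2]      # elitist selection
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(len(child))] ^= 1  # one-bit mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda b: overall_fitness(b, boards))

best = evolve(boards=[2, 5, 7])  # three "boards", each with one quirk
```

With a single board, exploiting its quirk scores just as well as the robust solution; averaged over three boards with different quirks, only the quirk-free design maximizes fitness.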


Summarize that sentence into a thumbs up or thumbs down emoji.


A single data point rarely answers the question unless you’re looking for absolutes. “Will zipping 10 files individually be smaller than zipping them into a single file?” Sure, easy enough to test it once. Now, what kind of data are we talking about? How big, and how random, is the data in those files? Does it get better with more files, or is there a sweet spot where it’s better, but worse if you use too few files, or too many? I don’t think you could test all those scenarios very quickly, and they all fall under the original question. OTOH, someone who has studied the subject could probably give you an answer easily enough in just a few minutes. Or he could have tried a web search and found the answer, which pretty much comes down to, “It depends which compression system you use.”
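That one-off test is easy to run with the standard library. A sketch, assuming in-memory archives and made-up sample data (note zip compresses each member independently, so a combined archive mostly just saves per-archive overhead - solid formats like .tar.gz behave differently):

```python
import io
import os
import zipfile

def zipped_size(blobs, together):
    """Total compressed size: one archive holding all blobs, or one
    archive per blob, summed."""
    if together:
        buf = io.BytesIO()
        with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
            for i, blob in enumerate(blobs):
                zf.writestr(f"file{i}.bin", blob)
        return len(buf.getvalue())
    total = 0
    for i, blob in enumerate(blobs):
        buf = io.BytesIO()
        with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
            zf.writestr(f"file{i}.bin", blob)
        total += len(buf.getvalue())
    return total

# Ten files of repetitive text vs ten of incompressible random bytes.
text = [b"the quick brown fox " * 500 for _ in range(10)]
noise = [os.urandom(10_000) for _ in range(10)]

for name, blobs in [("text", text), ("random", noise)]:
    print(name, zipped_size(blobs, True), zipped_size(blobs, False))
```

Which only makes the point: one run tells you about one kind of data, and the file-count and randomness questions still need their own runs.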


Pretty much everything you said is incorrect, except for the article’s age. The Valetudo project literally built software that does this locally on multiple models, including mapping. The manufacturers whose models were capable of this responded by releasing new versions where it wasn’t an option. As for servers and local control, there are a number of solutions for those with the knowledge and hardware to set them up, and the only thing stopping robovac companies from supporting this is (less) money.


We could still live in caves, but most of us have chosen not to. I’m personally of the opinion that every advancement that gives you more time to do things that are important to you is worth it. This doesn’t mean inviting in every piece of spyware some company tries to thrust upon me is acceptable, either.
This reminds me of a saying I heard. In America a hundred years is old. In Europe a hundred miles is far.