Another reason right to repair is needed
If you are at the point where you have to worry about government or corporate entities setting traps at the local library, you've… kind of already lost.
What about just a blackmailer assuming anyone booting an OS from a public computer has something to hide? They'd have write access and there'd be no defense, and it wouldn't have to be everywhere, because people seeking privacy this way will have to pick new locations each time. An attack like that wouldn't have to be targeted at a particular person.
Isn't it risky to plug USB drives into untrusted machines?
IIRC it spammed websites with traffic, didn’t conceal your IP at all, and some people got arrested for using it to make some websites go down for a very brief period. Basically a way to use people who didn’t know what they were doing as cannon fodder
Could you elaborate? Does HOA mean something different in other countries?
Homeowners' association: when you buy a house that is part of an HOA, you have to sign a contract joining the HOA as a condition of the purchase, which means you have to pay dues and abide by the organization's rules, and to sell the house you have to require the next buyer to join as well.
I doubt the school administrators who would be buying this thing or the people trying to make money off it have really thought that far ahead or care whether or not it does that, but it would definitely be one of its main effects.
I've never used one; why don't they just have a camera to scan a QR code of your crypto wallet on your phone and send it to that address directly? Anyway, I don't think it can be worse than having to take a picture that includes your face, a handwritten message, and your ID, then having to retake it 20 times because the exchange won't accept it if it is slightly blurry, plus linking a bank account, etc. Needing to copy a private key and send another transaction seems like it would be way less annoying and creepy, even if the tradeoff of crazy high fees makes it not worth it for most people.
It’s kind of a pain to go through the process of signing up for a crypto exchange, so for some people it’s probably a more convenient and less intrusive way to acquire it.
I don’t have an issue with reasonable moderation, but I object to the idea that every pattern of moderation should just be accepted and that censorship isn’t a problem worth worrying about.
Reddit doesn't have a public modlog, so most removed comments are lost forever and there is no accountability for them, but a few can be seen through Reveddit, and the ones I see are not off-topic or ideological rants. For instance, the first one I see is:
"Are they going to shoot up the wrong car with innocent ladies in it again looking for this guy? Edit: Guess they managed to take him down without hitting any civilians, I guess good job for only killing the bad guy"
Obviously referring to the Chris Dorner shootings which would be very relevant here, in a very reasonable way. I think it’s fair to assume that r/news moderators simply don’t want that guy mentioned at all.
It used to be a better place for arguments
I think it’s actually a serious problem if the most prominent places for discussion are heavily censored
IMO for some people arguing is a form of intimacy
I have, but it kind of goes away after enough years. I was enthusiastic about this book series at one point, but that was more than a decade ago; I don't even really remember what cliffhanger it was left on.
I see these posts every once in a while, and it seems weird that the topic of a book not being written still captures people's attention after so long and so much repeated discussion.
I bet you could do it with ring signatures
A message signed with a ring signature is endorsed by someone in a particular set of people. One of the security properties of a ring signature is that it should be computationally infeasible to determine which set member's key was used to produce the signature.
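To make that concrete, here is a toy sketch of an AOS/Schnorr-style ring signature. Everything here is an assumption for illustration: the scheme choice, the tiny parameters (a 127-bit Mersenne prime, which is nowhere near a secure configuration), and all the function names. A real system would use a vetted library and an elliptic-curve group; this only shows the structure, where the signer closes a hash "ring" using their private key, and a verifier walks the ring without learning which link was closed.

```python
import hashlib
import secrets

# Toy group parameters: NOT secure, for illustration only.
p = 2**127 - 1   # a Mersenne prime
g = 3            # generator-ish base
q = p - 1        # exponents work modulo p - 1 (Fermat's little theorem)

def H(msg, point):
    """Hash a message and a group element to a challenge in [0, q)."""
    h = hashlib.sha256()
    h.update(str(msg).encode() + b"|" + str(point).encode())
    return int.from_bytes(h.digest(), "big") % q

def keygen():
    """Return (private, public) key pair."""
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def ring_sign(msg, pubkeys, j, x_j):
    """Sign msg with the j-th member's private key x_j."""
    n = len(pubkeys)
    c = [0] * n
    s = [0] * n
    alpha = secrets.randbelow(q - 1) + 1
    # Start the ring just after the real signer.
    c[(j + 1) % n] = H(msg, pow(g, alpha, p))
    i = (j + 1) % n
    while i != j:
        # Every other member gets a random response s[i].
        s[i] = secrets.randbelow(q - 1) + 1
        c[(i + 1) % n] = H(msg, pow(g, s[i], p) * pow(pubkeys[i], c[i], p) % p)
        i = (i + 1) % n
    # Close the ring: only someone knowing x_j can compute this.
    s[j] = (alpha - c[j] * x_j) % q
    return c[0], s

def ring_verify(msg, pubkeys, sig):
    """Walk the whole ring; it must close back on c0."""
    c0, s = sig
    c = c0
    for i in range(len(pubkeys)):
        c = H(msg, pow(g, s[i], p) * pow(pubkeys[i], c, p) % p)
    return c == c0
```

Note that the verifier treats every index identically, which is where the "can't tell which member signed" property comes from.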
I agree that it’s bad that there’s a false impression of privacy, but I think it would be better to allow this as an extension or something and not include it as a feature in the UI, or at least not on by default. That way people who otherwise wouldn’t bother won’t be tempted to drive themselves crazy looking for imaginary enemies.
Can anyone recommend any cool mods/projects built on top of Minetest?
The output for a given input cannot be independently calculated as far as I know, particularly when random seeds are part of the input.
The system gives a probability distribution for the next word based on the prompt, and that distribution will always be the same for a given input, which meets the definition of deterministic. You might choose to add non-deterministic RNG to the input or output, but that would be a choice, not something inherent to how LLMs work; random 'seeds' are normally used as part of deterministically repeatable RNG. I'm not sure what you mean by "independently" calculated: you can calculate the output if you have the model weights, and you likely can't if you don't, but that doesn't affect whether the system is deterministic.
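A minimal sketch of the point being made, with made-up logits standing in for a model's output: the distribution is a pure function of the input, and even the "random" sampling step is repeatable once the seed is fixed.

```python
import math
import random

def softmax(logits):
    # Numerically stable softmax: a pure, deterministic function of its input.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample(probs, rng):
    # Draw one index from the distribution using the supplied RNG.
    r = rng.random()
    acc = 0.0
    for i, pr in enumerate(probs):
        acc += pr
        if r < acc:
            return i
    return len(probs) - 1

# Hypothetical next-token logits a model might emit for some fixed prompt.
logits = [2.0, 1.0, 0.1]
probs = softmax(logits)  # identical on every run

# Sampling driven by a fixed seed is deterministically repeatable:
run1 = [sample(probs, random.Random(42)) for _ in range(1)]
rng_a, rng_b = random.Random(42), random.Random(42)
run_a = [sample(probs, rng_a) for _ in range(5)]
run_b = [sample(probs, rng_b) for _ in range(5)]
assert run_a == run_b  # same seed, same "random" token sequence
```

The non-determinism people observe in deployed LLM services comes from deliberately unseeded sampling (and, in practice, things like non-deterministic floating-point reduction order on GPUs), not from the model function itself.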
The "so what" is that trying to prevent certain outputs based on moral judgements isn't possible. It wouldn't really be possible even if you could get in there and change the code, unless you could write code for morality, but it's doubly impossible given that you can't.
The impossibility of defining morality in precise terms, or even coming to an agreement on what correct moral judgment even is, obviously doesn’t preclude all potentially useful efforts to apply it. For instance since there is a general consensus that people being electrocuted is bad, electrical cables normally are made with their conductive parts encased in non-conductive material, a practice that is successful in reducing how often people get electrocuted. Why would that sort of thing be uniquely impossible for LLMs? Just because they are logic processing systems that are more grown than engineered? Because they are sort of anthropomorphic but aren’t really people? The reasoning doesn’t follow. What people are complaining about here is that AI companies are not making these efforts a priority, and it’s a valid complaint because it isn’t the case that these systems are going to be the same amount of dangerous no matter how they are made or used.
It’s not actually clear that it only affects huge companies. Much of open source AI today is done by working with models that have been released for free by large companies, and the concern was that the requirements in the bill would deter them from continuing to do this. Especially the “kill switch” requirement made it seem like the people behind the bill were either oblivious to this state of affairs or intentionally wanting to force companies to stop releasing the model weights and only offer centralized services like what OpenAI is doing.