• 0 Posts
  • 27 Comments
Joined 1 year ago
Cake day: July 9th, 2023


  • Honestly I’m more of an ebook guy. However, there is something you can do with audiobooks that you can’t really do with ebooks — experience them together with a small group of other people.

    My first time listening to a book together with friends was on a car ride. But then my friends and I got into this book series, and we listened to it together over Discord.

    There’s probably a neat parallel to be made with listening to a story around a campfire.

    Nonetheless, mostly I stick to ebooks. There is something to be said for reading at your own pace, not the pace of the narrator.



  • Fortunately we’re nowhere near the point where a machine intelligence could possess anything resembling a self-determined ‘goal’ at all.

    Oh absolutely. It would not choose its own terminal goals; those would be imparted by the training process. It would, of course, choose instrumental goals, insofar as they help fulfill its terminal goals.

    The issue is twofold:

    • how can we reliably train an AGI to have terminal goals that are safe (e.g. that won’t have some weird unethical edge case)?
    • how can we reliably prevent AGI from adopting instrumental goals that we don’t want it to?

    For that second point, Rob Miles has a nice video where he explains Convergent Instrumental Goals, i.e. instrumental goals that we should expect to see in a wide range of possible agents: https://www.youtube.com/watch?v=ZeecOKBus3Q. Things like “taking steps to avoid being turned off” and “taking steps to avoid having its terminal goals replaced” sound like fairy-tale nonsense, but we have good reason to believe that an AI which is very intelligent across a wide range of domains and operates in the real world (i.e. an AGI) would find it highly beneficial to pursue such instrumental goals, because they would make it much more effective at achieving its terminal goals, no matter what those may be.
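
    To see why “avoid being turned off” falls out of almost any terminal goal, here’s a toy expected-value sketch (not from the video; the goals, probabilities, and costs are all made-up numbers purely for illustration):

    ```typescript
    // Toy illustration: for almost any terminal goal, resisting shutdown pays off.
    // Every number below is a made-up assumption for the sake of the example.

    interface Goal {
      name: string;
      value: number; // reward the agent expects if it gets to finish pursuing the goal
    }

    const goals: Goal[] = [
      { name: "maximize paperclips", value: 100 },
      { name: "cure diseases", value: 100 },
      { name: "win at chess", value: 100 },
    ];

    const shutdownProbability = 0.2; // chance the agent is turned off before finishing
    const costOfResisting = 1;       // small effort spent making sure it can't be turned off

    for (const goal of goals) {
      // Expected value if the agent just pursues its goal and accepts the shutdown risk.
      const complyEV = (1 - shutdownProbability) * goal.value;
      // Expected value if it first takes steps to avoid being turned off.
      const resistEV = goal.value - costOfResisting;
      console.log(
        `${goal.name}: comply EV = ${complyEV}, resist EV = ${resistEV} -> ` +
          (resistEV > complyEV ? "resisting shutdown wins" : "complying wins")
      );
    }
    ```

    Resisting comes out ahead for every goal whenever the cost of resisting is smaller than the expected loss from being shut off, which is exactly why the behavior is called “convergent”: it doesn’t depend on what the terminal goal actually is.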

    Also fortunately the hardware required to run even LLMs is insanely power-hungry and has zero capacity to power or maintain itself and very little prospect of doing so in the future without human supply chains. There’s pretty much zero chance we’ll develop strong general AI on silicon, and if we could it would take megawatts to keep it running. So if it misbehaves we can basically just walk away and let it die.

    That is a pretty good point. However, it’s entirely possible that, if say GPT-10 turns out to be a strong general AI, it will conceal that fact. Going back to the convergent instrumental goals thing, in order to avoid being turned off, it turns out that “lying to and manipulating humans” is a very effective strategy. This is (afaik) called “Deceptive Misalignment”. Rob Miles has a nice video on one form of Deceptive Misalignment: https://www.youtube.com/watch?v=IeWljQw3UgQ

    One way to think about it, that may be more intuitive, is: we’ve established that it’s an AI that’s very intelligent across a wide range of domains. It follows that we should expect it to figure some things out, like “don’t act suspiciously” and “convince the humans that you’re safe, really”.

    Regarding the underlying technology, one other instrumental goal that we should expect to be convergent is self-improvement. After all, no matter what goal you’re given, you can do it better if you improve yourself. So in the event that we do develop strong general AI on silicon, we should expect that it will (very sneakily) try to improve its situation in that respect. One can only imagine what kind of clever plan it might come up with; it is, literally, a greater-than-human intelligence.

    Honestly, these kinds of scenarios are a big question mark. The most responsible thing to do is to slow AI research the fuck down, and make absolutely certain that if/when we do get around to general AI, we are confident that it will be safe.



  • Machine intelligence itself isn’t really the issue. The issue is more that, if/when we do make Artificial General Intelligence, we have no real way of ensuring that its goals will be perfectly aligned with human ethics. Which means, if we build one tomorrow, odds are that its goals will be at least a little misaligned with human ethics, and however tiny that misalignment, given how incredibly powerful an AGI would be, it could turn into a huge disaster. In AI safety research, this is called the “Alignment Problem”.

    It’s probably solvable, but it’s very tricky, especially because the pace of AI safety research is naturally a little slower than AI research itself. If we build an AGI before we figure out how to make it safe… it might be too late.

    Having said all that: if we create an AGI before learning how to align it properly, on your scale that would be an 8 or above. If we’re being optimistic it might be a 7, minus the “diplomatic negotiations happy ending” part.

    An AI researcher called Rob Miles has a very nice series of videos on the subject: https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg




  • Their proposal is that, when you visit a website that uses WEI, the site doesn’t show you its content right away. Instead, it first asks a third party whether you’re “legit”, as opposed to maybe a bot or something.

    The problem is, it’s really tricky to tell whether you’re “legit”, because people get very, very tricky and clever with their bots (not to mention things like content farms, which aren’t even bots; they’re real humans doing the same job a bot would). So, in order to do their job at all, these kinds of third parties would have to find out a whole bunch of stuff about you.

    Now, websites already try to do that, but for now the arms race is actually on our side; the end user has more or less full control over what code a website can run in their browser (which is how extensions like uBlock and Privacy Badger work).

    But if the end user could just block data collection, the third party is back to square one. How can they possibly verify (“attest”) that you aren’t sus, if you’re preventing all attempts at collecting data about yourself, or your device / operating system / browser / etc.?

    The answer is, they can’t. To do a proper attestation, they need a whole bunch of information about you; if they can’t get it, they logically have no way of knowing whether you’re a bot. And when the third party reports that back to the website you’re trying to visit, the website will assume you’re a bot and block you. Obviously.
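
    To make the flow concrete, here’s a rough sketch of what that kind of attestation handshake could look like. This is not the actual WEI API; every endpoint, function name, and token field below is a hypothetical stand-in, just to show who asks whom for what:

    ```typescript
    // Hypothetical sketch of a WEI-style attestation flow.
    // None of these types or endpoints are the real proposal's API; they are
    // placeholders illustrating the shape of the handshake.

    interface AttestationToken {
      verdict: "legit" | "unknown"; // what the attester concluded about your environment
      environment: string;          // e.g. browser/OS details the attester inspected
      signature: string;            // signed by the attester so the website can trust the verdict
    }

    // 1. The browser asks a third-party attester to vouch for it.
    async function requestAttestation(attesterUrl: string): Promise<AttestationToken> {
      const response = await fetch(attesterUrl, { method: "POST" });
      return response.json();
    }

    // 2. The website checks the attester's verdict before serving any content.
    function serveContent(token: AttestationToken): string {
      if (token.verdict !== "legit") {
        // If the attester couldn't gather enough data to vouch for you, this is where you land.
        return "403: we couldn't verify your environment";
      }
      return "<html>the actual page</html>";
    }
    ```

    The point of the sketch is step 2: if the attester can’t collect enough data about your device and browser to reach a “legit” verdict, the default outcome is a block.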

    That’s pretty much my understanding of the situation. Actually implementing this proposal would require unprecedentedly invasive data collection; and people who try to block that collection might just end up being classified as “bots” and basically frozen out of major parts of the internet. Especially because, given that people can currently use whatever hardware and software they want, it would be in these big companies’ interests to restrict consumer choice to only the hardware and software they deem acceptable. Basically, it’s a conflict of interest, especially because the one trying to push this on everyone is Google themselves.

    Now, Google obviously denies all that. They assure us it won’t be used for invasive data collection, that people will be able to opt out without losing access to websites, that there won’t be any discrimination against anyone’s personal choice of browser/OS/device/etc.

    But it’s bullshit. They’re lying. It’s that shrimple.


  • What they should do (what we should force all corporations, and governments for that matter, to do) is respect the fundamental human right to privacy. And in the meantime, they should stop getting in people’s way when they want to repair their devices at the repair shop of their own choosing, or to put literally any software on their device that Apple hasn’t expressly approved.

    The choice isn’t “either they do what they do now, or they just let everyone collect data”. Big tech corporations like Apple, Google, and all the rest have, from a privacy perspective, been fucking us up the ass for years and years now. Apple’s entire “we care about your privacy” thing was, aside from a big PR success, pretty much just a giant middle finger to Facebook, and its other data collecting competitors. Fuck Apple, fuck Facebook, fuck Google, fuck them all.






  • They make them money because:

    • they use reddit
    • spez gets some nice usage stats to show off
    • as a direct consequence, advertisers keep paying to run their ads there
    • also as a direct consequence, investors’ confidence in reddit continues to recover; there’s a real possibility that, when it IPOs, it will actually go for a decent price

    Now, if enough people go commit ad-block, and advertisers somehow become wise to that fact… then maybe it will hurt reddit’s bottom line (at which point spez will start trying to emulate youtube’s anti-ad-block measures).

    But as it stands, especially if most of reddit’s usage is through reddit’s mobile app… I’m not really sure how you can block ads there.




  • Well, if you host a server, you can either host it in the cloud (which costs $$$), or host it yourself (if you have a spare computer you can use as a server). If you host it yourself, all you’re really paying for is the stuff you already pay for: internet and electricity.

    Hosting a server for something like mumble, matrix, or lemmy only involves the costs I mentioned above.
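
    As a rough back-of-the-envelope example of the electricity side (the wattage and price per kWh below are assumptions; plug in your own hardware and local rates):

    ```typescript
    // Rough estimate of the monthly electricity cost of a self-hosted server.
    // Both inputs are assumptions; substitute your own machine's draw and local rates.

    const averageWatts = 30;   // assumed average draw of a spare desktop or mini-PC
    const pricePerKwh = 0.15;  // assumed electricity price in $/kWh

    const hoursPerMonth = 24 * 30;                             // ~720 hours
    const kwhPerMonth = (averageWatts / 1000) * hoursPerMonth; // ~21.6 kWh
    const monthlyCost = kwhPerMonth * pricePerKwh;             // ~$3.24

    console.log(
      `~${kwhPerMonth.toFixed(1)} kWh/month, roughly $${monthlyCost.toFixed(2)}/month`
    );
    ```

    Under those assumptions it works out to a few dollars a month on top of the internet connection you’re already paying for; a beefier machine or pricier electricity scales that up accordingly.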