

Well, they have a small point though:
By allowing uploads to a third-party image host and letting users embed images from other sites directly into Lemmy, the load on servers could be reduced while also allowing larger/higher-quality files.
This would obviously come with a downside to privacy.
The current workaround is to upload a highly compressed file to Lemmy and then link to the external high-quality version in the post.
An alternative solution from the Lemmy devs could be to allow external sources and hide the image until the user confirms they want to load it from the external source. Or just… add a toggle in settings to load them automatically.
I mean, with federation and all, everything in Lemmy is an ‘external’ source anyway unless you trust every single federated instance. So why not allow external image/file hosting sites as trusted sources?
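The click-to-confirm idea plus a settings toggle is simple to sketch. This is purely illustrative, not actual Lemmy code: the trusted-host list and function names are made up.

```python
from urllib.parse import urlparse

# Hypothetical allowlist; a real instance admin or user would configure this.
TRUSTED_HOSTS = {"imgur.com", "catbox.moe"}

def should_load_image(url: str, auto_load_external: bool) -> bool:
    """Decide whether to fetch an external image immediately.

    Trusted hosts (or a user who opted in via the settings toggle)
    load right away; everything else stays behind a click-to-load
    placeholder until the user confirms.
    """
    host = urlparse(url).hostname or ""
    if host.startswith("www."):
        host = host[4:]
    return auto_load_external or host in TRUSTED_HOSTS

print(should_load_image("https://imgur.com/a.png", False))    # True
print(should_load_image("https://example.org/a.png", False))  # False
print(should_load_image("https://example.org/a.png", True))   # True
```

The same check could run server-side when rendering federated posts, so untrusted hosts never even get a request until the user asks.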



It’s harder to pronounce internationally, which makes it a weaker global brand.
Also, in the early days of wakeword detection, the detection algorithm was actually triggered by the ‘melody’ your voice automatically creates when producing certain vocal sounds. This basically triggered a recording before going through deeper analysis to determine whether it was actually meant as a request.
For Alexa, the a-ex-a is easy to detect. For “Hey Siri” it’s basically a ‘chime bing bing’ sound in a certain rhythm. For Cortana, it’s or-a-a. But Jeeves is only a single syllable; both the J and the ‘vs’ are harder to pronounce and basically irrelevant for wakeword detection. So the whole wakeword is basically just “eee”, which is a bad wakeword.
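A toy sketch of that ‘melody’ idea (not how any real detector works, just the intuition): a cheap first stage counts energy bursts in the audio envelope and only wakes the deeper analysis when the burst count matches the wakeword’s syllable pattern. All numbers and names here are made up for illustration.

```python
def count_energy_peaks(envelope, threshold=0.5):
    """Count separate bursts where the energy envelope crosses threshold."""
    peaks, above = 0, False
    for e in envelope:
        if e >= threshold and not above:
            peaks += 1
            above = True
        elif e < threshold:
            above = False
    return peaks

def pre_trigger(envelope, expected_syllables):
    """Cheap first-stage gate before any deeper analysis runs."""
    return count_energy_peaks(envelope) == expected_syllables

# "A-lex-a": three distinct energy bursts -> passes a 3-syllable gate.
alexa_like = [0.1, 0.8, 0.2, 0.9, 0.1, 0.7, 0.1]
# "Jeeves": one long burst -> a single peak, little for the gate to latch onto.
jeeves_like = [0.1, 0.8, 0.9, 0.8, 0.1]

print(pre_trigger(alexa_like, 3))   # True
print(pre_trigger(jeeves_like, 3))  # False
```

A one-syllable word gives this kind of gate exactly one feature to match, so it either misses real requests or false-triggers on every stray “ee” sound.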
So… just not gold: technically, for reliability and efficiency, and economically, not so great for global brand recognition.