• 0 Posts
  • 28 Comments
Joined 7 months ago
Cake day: February 16th, 2024




  • Entangled particles cannot be used to transmit information between the two halves of a pair. That would violate information theory, and likely causality as well.

    Quantum networking instead focuses on extremely robust encryption that can detect interception, using entangled pairs of photons transmitted together over fiber optics.

    Edit:

    To elaborate on this, let’s talk about how entanglement works.

    Let’s say I have two identical bags. Into each of the bags I put one of two balls, one colored red, the other blue. I then mix these bags up like a shell game and hand you one.

    Now you can travel anywhere in the universe, and when you open your bag, you know exactly what color you have and what color I have too. No information transmitted, only information inferred.
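
    The bag analogy can be sketched in a few lines of Python (a toy illustration only; the names here are made up for the sketch):

```python
import random

def shell_game():
    """Shuffle a red and a blue ball into two bags and hand one out."""
    balls = ["red", "blue"]
    random.shuffle(balls)
    return balls[0], balls[1]  # (your bag's ball, my bag's ball)

# Opening your bag anywhere in the universe tells you both colors at once:
# nothing was transmitted; the correlation was fixed when the bags were mixed.
yours, mine = shell_game()
inferred_mine = "blue" if yours == "red" else "red"
assert inferred_mine == mine
```

    The key point the code makes explicit: the inference works because the correlation was established locally, at hand-off time, not because any signal travels when you open the bag.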

    Now the quantum part is tricky. Basically, when you do this experiment with quantum particles, for example by generating two particles, one that must be spin up and the other spin down, there’s a lot of science that “proves” each particle’s spin is entirely random, implying that somehow, when you examine one, you force BOTH particles to pick opposite spins instantaneously across any distance.

    Now there are two major explanations for how a truly random outcome gets ‘picked’ by the universe.

    The first, backed by Bell’s theorem, is what Einstein derided as ‘spooky action at a distance’: until you ‘observe’ the particles, they both exist in an undetermined state, neither spin up nor down, and when you look, the universe forces the outcomes to become correlated through some mechanism we don’t understand. Scientists generally prefer this interpretation because the math is clean and beautiful, and randomness written into the most fundamental level of the universe fits philosophical ideals nicely (more on that in a minute).

    The primary alternative theory is much more mundane, but has huge implications. This theory, called superdeterminism, claims there is no such thing as true randomness; instead, the universe has a set of hidden variables determined from its very beginning. This implies that time is an illusion and everything is fully deterministic across the entire universe. Scientists generally hate this theory because the math is much harder and uglier, and some interpret it to mean there is no free will.
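
    As a toy illustration, here is the anti-correlation in Python. Note that this is exactly a classical ‘hidden variable’ program: each outcome is fixed at pair-creation time, so it reproduces the perfect anti-correlation but not the full Bell-test statistics that let experiments distinguish between the explanations above.

```python
import random

def entangled_pair():
    """Create a pair whose spins are perfectly anti-correlated.
    The 'hidden variable' (a) is fixed at creation time -- which is
    precisely the kind of model Bell-type experiments put to the test."""
    a = random.choice(["up", "down"])
    b = "down" if a == "up" else "up"
    return a, b

pairs = [entangled_pair() for _ in range(10_000)]
# Each particle, viewed alone, looks like a fair coin flip...
up_fraction = sum(a == "up" for a, _ in pairs) / len(pairs)
# ...yet the two outcomes always disagree, at any separation.
assert all(a != b for a, b in pairs)
```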



  • It’s dangerous to send goalposts flying around that fast; be careful or you’ll hurt yourself.

    Your response is condescending, arguing from ignorance, and arguing in bad faith. I will reply this time, because once again you’re trying to build an argument on extremely shaky ground and I don’t enjoy people spreading ignorance unchallenged. However I won’t engage any further and feed whatever you think you’re getting from this.

    I haven’t suggested that people should use Obsidian over OSS solutions. I was simply pointing out your argument against Obsidian’s architecture was poorly founded.

    The data you’re insinuating will be lost is pure FUD. While the format isn’t standard markdown, none of the well implemented solutions are, because as you so rightly pointed out, markdown has little to no support for most of these features.

    However, Obsidian’s format is well documented and well understood. There are dozens of FOSS plugins and tools for converting or directly importing Obsidian data into nearly every other solution. Due to Obsidian’s popularity, its interoperability is often far superior to that of FOSS solutions.


  • Content is your notes. In Obsidian this is represented by markdown files in a flat filesystem. This format is already cross-platform and doesn’t need to be exported.

    Metadata is information extracted from your notes that makes processing the data more efficient. Tags, links, timestamps, keywords, titles, filenames, etc. are metadata, stored in the metadata database. When you search for something in Obsidian, view the graph, list the files under a tag, and so on, Obsidian only opens the metadata database to process the request. It only opens the note files themselves for reads and writes.

    Does this help?
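
    A minimal sketch of the idea, using a Python dict as a stand-in for Obsidian’s metadata database (the tag and wiki-link patterns here are simplified assumptions, not Obsidian’s actual parser):

```python
import re
from pathlib import Path

def build_index(vault: Path) -> dict:
    """Scan the markdown files once, extracting metadata (tags and
    wiki-links) into an in-memory index."""
    index = {}
    for f in vault.glob("*.md"):
        text = f.read_text(encoding="utf-8")
        index[f.name] = {
            "tags": re.findall(r"#([\w/-]+)", text),
            "links": re.findall(r"\[\[([^\]]+)\]\]", text),
        }
    return index

def files_with_tag(index: dict, tag: str) -> list:
    # Answers the query from the index alone: no note file is opened.
    return [name for name, meta in index.items() if tag in meta["tags"]]
```

    Searches and graph views then hit only the index; the markdown files on disk stay untouched until you actually open a note.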



  • This isn’t really the case, though. Obsidian uses a database for metadata, and can therefore display, search, and find the correct file to open extremely rapidly. It generally only opens a handful of files at a time.

    I’ve used Obsidian note repos with hundreds of thousands of notes with no discernible performance impact. Something LogSeq certainly couldn’t do.

    The complaint in the post you’ve linked is (a) anecdotal and (b) about the import process itself getting slow, which makes sense, as Obsidian is extracting the metadata during import.

    I’ll always champion OSS over proprietary software, but claiming this is a huge failing of Obsidian’s design is just completely false. A metadata database fronting a flat filesystem is a very robust architecture.

    Edit: adding link to benchmark. https://www.goedel.io/p/interlude-obsidian-vs-100000
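
    To sketch why this scales, here is an indexed in-memory SQLite table standing in for the metadata database: a tag query over 100,000 notes touches only the index, never the note files. This is an illustrative model, not Obsidian’s actual schema.

```python
import sqlite3

# In-memory SQLite table standing in for the metadata database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE notes (name TEXT, tag TEXT)")
db.executemany(
    "INSERT INTO notes VALUES (?, ?)",
    [(f"note{i}.md", "projects" if i % 2 == 0 else "inbox")
     for i in range(100_000)],
)
db.execute("CREATE INDEX idx_tag ON notes (tag)")

# Listing every note under a tag is a single indexed query --
# no markdown file is ever opened to answer it.
count = db.execute(
    "SELECT COUNT(*) FROM notes WHERE tag = ?", ("projects",)
).fetchone()[0]
```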