  • I have a different experience. There was one thread that linked to a GitHub issue. The issue said some blobs don’t have source code. Ironically, when I went to check, the blobs mentioned in the issue did have source code, but there were other blobs that seemed to be missing their source or build instructions.

    I would love to see an independent audit to put this issue to rest. As it stands, there is only more and more noise and no resolution. I am not a programmer, so I can’t really help here.


  • Hey, I understood what you meant. What I was saying is that the result you are trying to achieve is very close to the caching a browser normally does. When you zoom in, only the tiles in that area are loaded. And I don’t think you can pin the download to a specific number of tiles, since tile counts are not linear across zoom levels: each zoom level roughly quadruples the number of tiles covering the same area.
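    To make the non-linearity concrete, here is a minimal sketch using the standard OSM slippy-map tile scheme (the corner coordinates are made up for illustration, nothing Immich-specific):

    ```typescript
    // Standard slippy-map (Web Mercator) conversion from lat/lon to tile x/y.
    function latLonToTile(lat: number, lon: number, zoom: number): { x: number; y: number } {
      const n = 2 ** zoom;
      const latRad = (lat * Math.PI) / 180;
      return {
        x: Math.floor(((lon + 180) / 360) * n),
        y: Math.floor(((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n),
      };
    }

    // Tile count for one fixed bounding box at increasing zoom levels.
    // The count roughly quadruples per level, which is why you cannot
    // pin the download to a fixed number of tiles across zooms.
    const nw = { lat: 38.0, lon: -122.6 }; // illustrative NW corner
    const se = { lat: 37.2, lon: -121.7 }; // illustrative SE corner
    for (let z = 11; z <= 15; z++) {
      const a = latLonToTile(nw.lat, nw.lon, z);
      const b = latLonToTile(se.lat, se.lon, z);
      console.log(`zoom ${z}: ${(b.x - a.x + 1) * (b.y - a.y + 1)} tiles`);
    }
    ```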

    The offline Leaflet library I shared in the previous comment actually does the same thing you want to achieve. The difference is that its offline mode is discarded as soon as the system is back online. So, at least in theory, that library could be modified to incorporate the time dependency and the ‘user revisits a point’ behaviour I described in my last comment.

    Regarding OSM data, there are downloadable extracts available; Geofabrik and openstreetmap.fr are two examples. Another tool is Protomaps, where you can download an extract by drawing a polygon. But none of these is an ideal solution for a product like Immich.

    By the way, I saw your update. Great job following up and providing a fix for others. I really appreciate it.


  • If you are asking about vector maps, I am not really sure, because I have no experience with them, so I can’t comment on that. With raster maps, as you already know, every tile is a PNG. The behaviour you described is very similar to the client-side caching that usually happens in the browser: depending on the coordinates in the viewport and the zoom level, the server provides the matching tiles.

    To save a map, most offline map-making tools ask you to draw a rectangle and select the required zoom levels. In an interactive map, the rectangle is the device’s viewport. So there could be a feature that downloads and stores the tiles around a specific GPS location for a fixed geographical area. That should be doable without much trouble, though in this case it may not be the best idea.
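    A minimal sketch of that feature, assuming the standard z/x/y tile scheme (the radius here is in tiles; a real implementation would work from a distance in metres):

    ```typescript
    // Standard slippy-map lat/lon -> tile conversion (same helper as the earlier sketch).
    const latLonToTile = (lat: number, lon: number, z: number) => {
      const n = 2 ** z;
      const r = (lat * Math.PI) / 180;
      return {
        x: Math.floor(((lon + 180) / 360) * n),
        y: Math.floor(((1 - Math.log(Math.tan(r) + 1 / Math.cos(r)) / Math.PI) / 2) * n),
      };
    };

    // Hypothetical sketch of "store the tiles around a GPS point": enumerate the
    // tiles within `radius` tiles of the point at each requested zoom level.
    function tilesAroundPoint(lat: number, lon: number, zoomLevels: number[], radius: number) {
      const tiles: { z: number; x: number; y: number }[] = [];
      for (const z of zoomLevels) {
        const c = latLonToTile(lat, lon, z);
        const max = 2 ** z - 1; // tile indices run from 0 to 2^z - 1
        for (let x = Math.max(0, c.x - radius); x <= Math.min(max, c.x + radius); x++) {
          for (let y = Math.max(0, c.y - radius); y <= Math.min(max, c.y + radius); y++) {
            tiles.push({ z, x, y });
          }
        }
      }
      return tiles;
    }

    // Each tile could then be fetched once and stored (URL is illustrative):
    // for (const t of tilesAroundPoint(51.5, -0.1, [11, 12, 13], 2)) {
    //   await fetch(`https://tiles.example.org/${t.z}/${t.x}/${t.y}.png`);
    // }
    ```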

    If you visualise all zoom levels stacked on top of each other: as the user zooms in on a point, the geographical area that has to be retrieved does not stay the same. A smaller geographical area is needed at the higher zoom levels. If you take only the tiles that actually get downloaded at each layer, the result is a shape similar to an inverted pyramid. So saving the images as a user zooms in for the first time may be the best idea.

    The saved tiles can then be reused whenever the user zooms into the same area. These tiles also don’t need frequent updates; refreshing once every 3 months might be enough, and even then only when the user actually zooms into that area again.
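    In code, that policy could look something like this sketch (the in-memory store, URL, and 3-month window are all assumptions for illustration):

    ```typescript
    // Hypothetical rule: refresh a stored tile only when the user actually
    // views it again AND the stored copy is older than ~3 months.
    const MAX_AGE_MS = 90 * 24 * 60 * 60 * 1000; // ~3 months (assumption)

    interface StoredTile {
      blob: Blob;
      savedAt: number; // epoch milliseconds
    }

    // `store` stands in for IndexedDB or any persistent tile cache (assumption).
    async function getTile(store: Map<string, StoredTile>, z: number, x: number, y: number): Promise<Blob> {
      const key = `${z}/${x}/${y}`;
      const cached = store.get(key);
      if (cached && Date.now() - cached.savedAt < MAX_AGE_MS) {
        return cached.blob; // fresh enough: serve locally, no load on the tile server
      }
      // Missing or stale, and the user is looking at this tile right now: fetch once.
      const res = await fetch(`https://tiles.example.org/${key}.png`); // illustrative URL
      const blob = await res.blob();
      store.set(key, { blob, savedAt: Date.now() });
      return blob;
    }
    ```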

    This can be a little tricky, since almost all tools that create offline maps work on a fixed area and selected zoom levels, where every point in that area gets equal priority. Here, though, the point itself is the important element; the area nearby may not be relevant at all. That is the part that needs some exploration.


  • I read through your comments and the reply from the devs regarding OSM. I will add a few points that could be part of the feature request. I have some experience dealing with maps, and my understanding is that you can set up an offline version of OSM that gets updated only when required.

    leaflet.offline is a library that provides similar functionality. I think that, with some modifications, this could be implemented in a way that puts significantly less load on OSM than using it directly.
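    For reference, basic usage follows roughly this pattern. This is a sketch based on the library’s README; option names can differ between versions, so check the version you install rather than treating this as the exact API:

    ```typescript
    import L from 'leaflet';
    import 'leaflet.offline'; // side-effect import registers the offline layer/control

    const map = L.map('map').setView([51.5, -0.1], 13);

    // Tile layer that answers from locally saved tiles when they are available.
    // Cast to `any` because the plugin extends Leaflet's namespace at runtime.
    const baseLayer = (L.tileLayer as any)
      .offline('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
        attribution: '&copy; OpenStreetMap contributors',
      })
      .addTo(map);

    // Control that saves/deletes tiles for the current view at chosen zooms.
    (L.control as any).savetiles(baseLayer, { zoomlevels: [13, 14, 15] }).addTo(map);
    ```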

    Even with fairly deep zoom levels, say 11 to 15, a large map area takes only a few hundred MB. We once cached the entire region of California with all the details, and it was around 240 MB IIRC. But Immich does not need this much detail, and it is possible to restrict the zoom levels to just the detail required.
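    If anyone wants to gauge this for their own region, here is a back-of-the-envelope estimator (the average tile size is a guess; real tiles vary a lot with detail, which is why figures like the 240 MB above depend heavily on the area cached):

    ```typescript
    // Rough storage estimate: tile count for a bounding box across zoom levels,
    // times an assumed average tile size. AVG_TILE_KB is a guess (~15 KB).
    const AVG_TILE_KB = 15;

    // Standard slippy-map lat/lon -> tile conversion (same helper as the earlier sketches).
    const latLonToTile = (lat: number, lon: number, z: number) => {
      const n = 2 ** z;
      const r = (lat * Math.PI) / 180;
      return {
        x: Math.floor(((lon + 180) / 360) * n),
        y: Math.floor(((1 - Math.log(Math.tan(r) + 1 / Math.cos(r)) / Math.PI) / 2) * n),
      };
    };

    function estimateMB(nw: [number, number], se: [number, number], zMin: number, zMax: number) {
      let tiles = 0;
      for (let z = zMin; z <= zMax; z++) {
        const a = latLonToTile(nw[0], nw[1], z);
        const b = latLonToTile(se[0], se[1], z);
        tiles += (b.x - a.x + 1) * (b.y - a.y + 1);
      }
      return (tiles * AVG_TILE_KB) / 1024;
    }

    // Illustrative city-sized box (~0.5 x 0.5 degrees), zooms 11 to 15:
    console.log(estimateMB([51.7, -0.4], [51.2, 0.1], 11, 15).toFixed(0), 'MB');
    ```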

    For someone self-hosting several hundred GB of photos, this should be doable without using too much storage. I think the real problem is that it is a huge engineering effort, and depending on the feature’s priority it may not be easy to justify.

    There is a site called Switch2OSM that covers almost everything you need to know; the previous link explains how to serve map tiles on your own. Again, it is a daunting task and not suitable for everyone.

    If anyone needs live OSM updates as things get added, look into the commercial offerings.

    In conclusion, it is possible to ship a highly optimised offline version of OSM instead of putting their servers under heavy load. The catch is that it is not easy and will need a huge engineering effort. I think the developers should make the call on this.


  • So many people freaked out by privacy scandals, that made it a billion-worth business. First with a massive amount of VPN services, some of them even from the same company and given different brands; browsers everywhere with the same situation; then self hosting, which it seems multiple developers are putting their eye on as some fresh juice. And FUTO’s eyes especially.

    Can you elaborate on this a bit more? Are the first few points about Proton, or in general? In the case of browsers, I guess you meant Chromium. Also, I’m not really sure what you meant regarding self-hosting.


  • Replacing a human with any form of tech is a long-standing practice, and in this scenario profitability and efficiency usually follow a known pattern. Unfortunately, what you describe is exactly how the market has always operated in the past, and how it will keep operating in the future.

    The general pattern is that a new technology is invented or a new opportunity is identified, and a bunch of companies then enter the market as competitors, offering competitive prices to customers in an attempt to gain market dominance.

    The problem starts when low profits push some of those companies to the point where they either go bust, dissolve the wing, or sell to a competitor. Usually a dominant company then emerges in the market segment, and monopolies are created. From that point on, companies either raise prices or exploit customers for more money, and thereby start making profits. This has been the exact pattern in the tech industry for several decades.

    This is why, in the case of AI too, companies are racing to capture market dominance. Early movers always get a small advantage, which helps them gain prominence in the segment.


  • I completely agree with this. I work as a user experience researcher and have been noticing this for some time. I’m not a traditional UX person; I work more at the intersection of UX and programming. I think the core problem with any discussion about a software product is that the people talking about it tend to assume everyone else functions the same way they do.

    What you described as a techie is, in simple terms, a person who uses, or has to use, a computer and its file system every day. They spend a huge amount of time with the computer and slowly organise their stuff. Most of the time they want more control over it; some of them end up on Linux-based systems, and some find other ways.

    There are two other kinds of people. The first is a person who uses the computer every day but is completely confined to their enterprise software. Even though they spend countless hours on the computer, they rarely end up using the OS itself. A huge part of the service industry belongs to this group, and most of the time a dedicated IT department takes care of any issues for them.

    The third category is people who rarely use computers, meaning once or twice every few days. Almost everyone with a non-white-collar job belongs to this category, and they mainly use phones to get their daily stuff done.

    If you look at Microsoft’s customer base, it has never been the first category. Microsoft tried really hard with .NET in the Ballmer era and even built a strong base at the time, but my opinion is that a huge shift happened with the wide adoption of the Internet. On some forum I recently saw someone say that TypeScript gave Microsoft some recognition and kept them relevant. They have made some good contributions too.

    So, as I mentioned, the customer base was always the second and third categories. People in these categories focus only on getting stuff done: bare-minimum maintenance, and results with as little effort as possible. Most of them don’t really care about organising their files, or even about finding them; many just re-download things from email, messaging apps, or cloud drives whenever they need a file. Microsoft tried to address this with indexed search inside the OS, but it didn’t work out well because of the resource requirements and the many bugs. For these users, a feature like Recall, or Apple’s Spotlight, is genuinely useful.

    Apple, and even Android, are moving forward in this direction: restricting the user to the surface of the product and making things easy to find and use through aggregating applications. The Gallery app is a good example. Microsoft understood this long ago; ‘Pictures’, ‘Documents’, and the other standard folders were an early example, though they never ‘enforced’ them. In the early days, people used to keep a separate drive for their documents, because Windows corrupted easily and, when reinstalling, only the ‘C:’ drive needed to be formatted. Only after Microsoft started selling pre-installed Windows through OEMs were they able to change this habit.

    Windows is also pushing in this same direction: limiting users to the surface, because the two categories I mentioned don’t really ‘maintain’ their systems. It is just like cars: some people like to maintain their own, and many others let a paid service take care of it. But when it comes to ‘personal’ computers with ‘personal’ files, a paid service is not an option, so this lands on the shoulders of the OS companies as an opportunity. Whoever provides the better solution will see wider adoption.

    Microsoft is going to run into many contradictions soon because of its early, widespread adoption of AI; its net-zero global emissions target is a straightforward example.