• 0 Posts
  • 30 Comments
Joined 1 year ago
Cake day: July 18th, 2023

  • vyatta and vyatta-based (edgerouter, etc) I would say are good enough for the average consumer.

    WTF? What galaxy are you from? Literally zero average consumers use that. They use whatever router their ISP provides, whatever is currently being advertised in tech media, or whatever is sold at retailers.

    I’m not talking about budget routers. I’m talking about ALL the software running on consumer routers. It’s all dogshit: closed-source, burn-and-churn firmware that barely receives security updates even while the hardware is still in production.

    Also you don’t need port forwarding and ddns for internal routing. … At home, all traffic is routed locally

    That is literally the recommended config for consumer Tailscale and any mesh VPN. Do you even know how they work? The “external dependency” you’re referring to (their coordination servers) basically operates like DDNS, supplying the DNS/endpoint mapping between mesh clients. Beyond that, all comms are P2P, including LAN access (roughly what the toy sketch at the end of this comment shows).

    Everything else you mention is useless as an argument because Tailscale, Nebula, etc. all have open-source server alternatives that are way more robust and foolproof than rolling your own VPS and WireGuard mesh.

    My argument is that “LAN access” — with all the “smart” devices and IoT surveillance capitalism spyware on it — is the weakest link, and relying on mesh VPN software to create a VLAN is significantly more secure than relying on open LAN access handled by consumer routers.

    Just because you’re commenting on selfhosted on Lemmy doesn’t mean you should recommend the most complex and convoluted approach, especially if you don’t even know how the underlying tech actually works.
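
    For anyone wondering what that split actually looks like, here’s a minimal toy sketch in Python (purely illustrative; the class, node keys, and endpoints are made up, and this is not Tailscale’s real protocol) of a coordination server behaving like DDNS while the traffic itself stays peer-to-peer:

    ```python
    # Toy model of a mesh VPN's control plane vs data plane.
    # NOT Tailscale's real protocol; node keys and endpoints are hypothetical.

    class CoordinationServer:
        """Acts like DDNS: maps a stable node identity to its current endpoint."""

        def __init__(self):
            self.registry = {}  # node public key -> (public_ip, port)

        def update(self, node_key, endpoint):
            # A node re-registers whenever its public IP changes (a DDNS-style update).
            self.registry[node_key] = endpoint

        def lookup(self, node_key):
            # Peers ask "where is this node right now?"
            return self.registry.get(node_key)

    coord = CoordinationServer()
    coord.update("laptop-key", ("203.0.113.10", 41641))
    coord.update("home-server-key", ("198.51.100.7", 41641))

    # The laptop asks the coordination server once for the home server's endpoint...
    ip, port = coord.lookup("home-server-key")

    # ...then opens an encrypted tunnel directly to that endpoint. The coordination
    # server never carries the traffic; LAN access rides over the direct tunnel.
    print(f"dial {ip}:{port} directly; traffic stays peer-to-peer")
    ```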



  • What is the issue with the external dependency? I would argue that consumer routers have near-universally shit security, that networking is too complex for the average user, and that there’s greater risk in opening up ports and provisioning your own VPN server (on consumer software/hardware). Port forwarding and DDNS are essentially “external dependencies” too.

    Mesh VPN clients are all open source. I believe Tailscale is currently implementing a feature where new devices can’t connect to your mesh without pre-approval from your own already-authorized devices, even if they pass external authentication and 2FA (removing the dependency on Tailscale’s servers for granting authorization, post-authentication).


  • Not really. The problem with FOSS licensing is that it was too altruistic, built on the belief that if enough users and corporations depended on the code, the community would collectively do the work necessary to maintain the project. Instead, capitalism has mostly chosen to exploit FOSS as free labor, without any reciprocal investment. Corporations raise an enormous number of issues and consume a huge amount of FOSS developer time, without paying their own staff to fix the bugs they need resolved in the software their products depend on. At that point the FOSS developer is no longer a FOSS developer; they’re the unpaid slave labor of a corporation.

    Sure, FOSS devs could just ignore those external demands, but that’s not easy to do when you’ve invested years of your life in a project. Exploiting kindness may be legal, but it should never be justified or tolerated.

    Sure, FOSS licenses legally permit that kind of use, but just because homeless shelters let anyone eat their food and sleep in their beds, that doesn’t make the rich man who exploits that charity ethically or morally justified. Someone who takes that charity (i.e. free labor) and offers nothing in return is a scummy dog cunt; there are no two ways about it. And enough lecherous parasites can destroy the entire charity; they can be the difference between sustainability and burnout.

    FOSS should always be free for all personal, non-commercial, and non-profit use, but once someone in the chain starts depending on FOSS to generate income and profit, some of that profit should always be reinvested in those dependencies. That’s what FOSS is now learning: to reject the exploitation and greed of lecherous parasites.



  • I believe this is roughly what some compression algorithms can do if you compress the similar photos into a single archive. It sounds like that’s what you want: archive each day’s photos together, have immich cache the thumbnails, and only decompress the originals when you view them at full resolution. Maybe test an algorithm like zstd against a group of similar photos vs compressing them individually (see the first sketch at the end of this comment)?

    FYI, filesystem deduplication works on file content hashes: only exact 1:1 binary duplicates share the same hash, so near-identical photos won’t dedupe (second sketch below).

    Also, modern image and video codecs are already about as heavily optimized as computer scientists can currently manage for consumer hardware, which is why re-compressing a jpg or mp4 with a general-purpose compressor offers negligible savings and sometimes even increases the file size.
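
    A rough way to actually run that zstd test, assuming the zstandard Python bindings are installed (pip install zstandard) and the folder path is swapped for one of your own days of photos (the path and level here are just placeholders):

    ```python
    # Compare zstd on a group of similar photos: each file alone vs one combined blob.
    from pathlib import Path
    import zstandard as zstd

    photos = sorted(Path("photos/2023-07-18").glob("*.jpg"))  # hypothetical folder of similar shots
    blobs = [p.read_bytes() for p in photos]
    cctx = zstd.ZstdCompressor(level=19)

    individual = sum(len(cctx.compress(b)) for b in blobs)  # each photo compressed on its own
    grouped = len(cctx.compress(b"".join(blobs)))            # all photos compressed as one stream
    original = sum(len(b) for b in blobs)

    print(f"original: {original}  individual: {individual}  grouped: {grouped}")
    ```

    If “grouped” barely beats “individual”, that’s the already-optimized-codec point above in action.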
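
    And a tiny sketch of the content-hash point (file names are placeholders): two near-identical shots only share a hash, and therefore only dedupe, if they’re byte-for-byte identical.

    ```python
    # Exact-duplicate detection by whole-file content hash.
    import hashlib
    from pathlib import Path

    def file_hash(path: str) -> str:
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    # Hypothetical near-identical shots: a single differing byte gives a completely different hash.
    print(file_hash("photos/2023-07-18/IMG_0001.jpg"))
    print(file_hash("photos/2023-07-18/IMG_0002.jpg"))
    ```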