Yeah, this is actually a pretty great application for AI. It’s local, privacy-preserving and genuinely useful for an underserved demographic.
One of the most wholesome and actually useful applications for LLMs/CLIP that I’ve seen.
Ideally you want something that gracefully degrades.
So, my media library is served by Plex/Jellyfin behind a bunch of complex firewall and reverse proxy stuff, and it’s replicated using Syncthing. But at the end of the day it’s on an external HDD that they can plug into a regular old laptop and browse on pretty much any OS.
Same story for old family photos (Photoprism, indexing a directory tree on a Synology NAS) and regular files (mostly just direct SMB mounts on the same NAS).
Backups are a bit more complex, but I also have fairly detailed disaster recovery plans that explain how to decrypt/restore backups and access admin functions, if I’m not available (in the grim scenario, dead - but also maybe just overseas or otherwise indisposed) when something bad happens.
Aside from that, I always make sure that all of the self-hosting stuff in my family home is entirely separate from the network infra. No DNS, DHCP or anything else ever runs on my hosting infra.
It would be better to have this as a FUSE filesystem though - you mount it on an empty directory, point the tool at your unorganised data and let it run its indexing and LLM categorisation/labelling, and your files are resurfaced under the mountpoint without any potentially damaging changes to the original data.
The other option would be just generating a bunch of symlinks, but I personally feel a FUSE implementation would be cleaner.
It’s pretty clear that actually renaming the original files based on the output of an LLM is a bad idea though.
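To make the non-destructive idea concrete, here’s a rough sketch of the symlink variant (a FUSE version would expose the same mapping as a read-only filesystem instead). The `label()` function is a stand-in I’ve made up for whatever LLM/CLIP call the tool actually uses; here it just groups by file extension so the script runs on its own:

```python
from pathlib import Path

def label(path: Path) -> str:
    # Stand-in for the real LLM/CLIP categorisation call (an assumption, not the
    # tool's actual API). Grouping by extension keeps this sketch self-contained.
    return path.suffix.lstrip(".").lower() or "misc"

def build_view(source: Path, view: Path) -> None:
    """Resurface files from `source` under `view`, grouped by label,
    without modifying the originals in any way."""
    for original in source.rglob("*"):
        if not original.is_file():
            continue
        category_dir = view / label(original)
        category_dir.mkdir(parents=True, exist_ok=True)
        link = category_dir / original.name
        if not link.exists():
            link.symlink_to(original.resolve())  # point back at the untouched file

if __name__ == "__main__":
    build_view(Path("~/unsorted").expanduser(), Path("~/organised").expanduser())
```

Either way, the worst-case failure mode is a bad view you can just delete, not mangled originals.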
(6.9 - 4.2) / (2024 - 2018) = 0.45 “version increments” per year.
4.2 / (2018 - 1991) ≈ 0.15 “version increments” per year.
So the pace of version increases over the past six years has been around triple the average of the previous 27 years, going back to Linux’s first release in 1991.
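Back-of-the-envelope, if anyone wants to check the rounding:

```python
recent = (6.9 - 4.2) / (2024 - 2018)   # 0.45 "increments" per year, 2018-2024
earlier = 4.2 / (2018 - 1991)          # ~0.155 per year, 1991-2018
print(round(recent / earlier, 1))      # 2.9 -> roughly triple
```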
I guess I can see why 6.9 would seem pretty dramatic for long-time Linux users.
I wonder whether development has actually accelerated, or if this is just a change in the approach to the release/versioning process.
If you include ChromeOS, that’s very likely.
You can restrict what gets installed by running your own repos and locking the machines to only use those (either give employees accounts with no sudo access, or have monitoring that alerts when repo configs are changed).
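As a minimal sketch of the “alert when repo configs change” half of that, assuming an APT-based fleet and a made-up internal mirror hostname (it only checks classic .list files, not deb822 .sources):

```python
from pathlib import Path

# Hosts machines are allowed to install from (hypothetical internal mirror).
ALLOWED_HOSTS = {"mirror.internal.example"}

def unexpected_sources():
    """Return (file, host) pairs for any repo line pointing outside the allowlist."""
    files = [Path("/etc/apt/sources.list"), *Path("/etc/apt/sources.list.d").glob("*.list")]
    findings = []
    for f in files:
        if not f.exists():
            continue
        for line in f.read_text().splitlines():
            line = line.strip()
            if not line.startswith("deb"):   # skip comments and blank lines
                continue
            for token in line.split():
                if token.startswith(("http://", "https://")):
                    host = token.split("/")[2]
                    if host not in ALLOWED_HOSTS:
                        findings.append((str(f), host))
    return findings

if __name__ == "__main__":
    for path, host in unexpected_sources():
        print(f"ALERT: {path} points at non-approved repo host {host}")
```

In practice you’d run something like this from cron or systemd and feed the alerts into whatever monitoring you already have.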
So once you’re in that zone, you do need some fast-acting, reactive tools that keep watch for viruses.
For anti-malware, I don’t think there are very many agents available to the public that work well on Linux, but they do exist inside big companies that use Linux for their employee environments. For forensics and incident response there is GRR, which has Linux support.
Canonical may have some offering in this space, but I’m not familiar with their products.
Tbf, 500 ms of latency on (IIRC) a loopback network connection in a test environment is a lot. It’s not hugely surprising that a curious engineer dug into that.
I don’t think it’s necessarily a bad thing that an AI got it wrong.
I think the bigger issue is why the model got it wrong. It got the diagnosis wrong because it is a language model and is fundamentally not fit for use as a diagnostic tool, not even as a screening/aid tool for physicians.
There are AI tools designed for medical diagnoses, and those are indeed a major value-add for patients and physicians.
Power management is going to be a huge issue as transformer-model inference gets deployed to the edge.
I foresee some backpedaling from this idea that “one model can do everything”. LLMs have their place, but sometimes a good old LSTM or CNN is a better choice.