• 0 Posts
  • 94 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • I keep seeing this claim, but never with any independent verification or technical explanation.

    What exactly is listening to you? How? When?

    Android and iOS both make it visible to the user when an app accesses the microphone, and they require that the user grant microphone permission to the app. It’s not supposed to be possible for apps to surreptitiously record you. This would require exploiting an unpatched security vulnerability and would surely violate the App Store and Play Store policies.

    If you can prove this is happening, then please do so. Both Apple and Google have a vested interest in stopping this; they do not want their competitors to have this data, and they would be happy to smack down a clear violation of policy.
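
    If you want to poke at this yourself on Android, adb can show whether an app holds the microphone permission and when it last used it. A minimal sketch; the package name is a placeholder:

    ```bash
    # Query the RECORD_AUDIO app-op for a given package.
    # Output includes the current mode (allow/ignore/deny) and the last-access time.
    adb shell appops get com.example.app RECORD_AUDIO
    ```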

  • Probably ~15TB through file-level syncing tools (rsync or similar; I forget exactly what I used), just copying up my internal RAID array to an external HDD. I’ve done this a few times, either for backup purposes or to prepare to reformat my array. I originally used ZFS on the array, but converted it to something with built-in kernel support a while back because it got troublesome when switching distros. Might switch it to bcachefs at some point.
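
    For reference, a minimal sketch of that kind of file-level copy; the paths are placeholders, and I can’t vouch for the exact flags I used at the time:

    ```bash
    # Archive-mode copy of the array's contents to an external drive.
    # -a preserves permissions and timestamps; -H hardlinks; -A ACLs; -X xattrs.
    # The trailing slash on the source copies its contents, not the directory itself.
    rsync -aHAX --info=progress2 /mnt/array/ /mnt/external-hdd/
    ```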

    With dd specifically, maybe 1TB? I’ve used it to temporarily back up my boot drive on occasion, on the assumption that restoring my entire system that way would be simpler in case whatever I was planning blew up in my face. Fortunately, I never needed to restore it that way.
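
    The dd runs looked something like this (device and file names are illustrative, and swapping if= and of= will destroy the drive you meant to save):

    ```bash
    # Raw image of the whole boot drive, written to a file on another disk.
    # bs=4M speeds up the copy; status=progress prints a running byte count.
    sudo dd if=/dev/nvme0n1 of=/mnt/external-hdd/boot-drive.img bs=4M status=progress
    ```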

  • YES.

    And not just the cloud, but internet connectivity and automatic updates on local machines, too. There are basically a hundred “arbitrary code execution” mechanisms built into every production machine.

    If it doesn’t truly need to be online, it probably shouldn’t be. Figure out another way to install security patches. If it’s offline, you won’t need to worry about them half as much anyway.
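
    As one hedged example of what “another way” can look like on a Debian-based machine (the package name is just an illustration):

    ```bash
    # On an internet-connected machine: fetch the package without installing it.
    apt-get download openssh-server
    # Carry the .deb over on removable media, then on the offline machine:
    sudo dpkg -i openssh-server_*.deb   # note: dpkg won't resolve dependencies for you
    ```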

  • Both.

    The good: CUDA is required for maximum performance and compatibility with machine learning (ML) frameworks and applications. It is a legitimate reason to choose Nvidia, and if you have an Nvidia card you will want to make sure you have CUDA acceleration working for any compatible ML workloads.

    The bad: Getting CUDA to actually install and run correctly is a giant pain in the ass for anything but the absolute most basic use case. You will likely need to maintain multiple CUDA toolkit versions side by side, because new releases are not backwards-compatible with software built against older ones. You’ll need to source custom builds of Python modules compiled against specific versions of CUDA, which opens a whole new circle of Dependency Hell. And you know how everyone and their dog publishes shit with Docker now? Yeah, have fun with that.
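
    To make that concrete, here’s a hedged sketch of the version-pinning dance; the index URL, CUDA version, and image tag are examples, not recommendations:

    ```bash
    # Install a PyTorch build compiled against a specific CUDA toolkit (12.1 here),
    # then verify that the framework can actually see the GPU.
    pip install torch --index-url https://download.pytorch.org/whl/cu121
    python -c "import torch; print(torch.cuda.is_available())"

    # The Docker flavor of the same problem: the host needs the NVIDIA Container
    # Toolkit installed, and the image must target a compatible CUDA version.
    docker run --rm --gpus all nvidia/cuda:12.1.0-base-ubuntu22.04 nvidia-smi
    ```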

    That said, AMD’s equivalent (ROCm) is just as bad, and AMD is lagging about a full generation behind Nvidia in terms of ML performance.

    The easy way is to just use OpenCL. But that’s not going to give you the best performance, and it’s not going to be compatible with everything out there.
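
    If you do go that route, clinfo is the quickest sanity check that your driver stack actually exposes an OpenCL device:

    ```bash
    # List every OpenCL platform and device the installed drivers expose.
    # (Requires the clinfo package; output format varies by vendor stack.)
    clinfo | grep -E 'Platform Name|Device Name'
    ```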