  • My best guess: whatever they’re filing now was so exhaustively researched that it took months to prepare the strongest case they’re able to make, possibly delayed by the lawyers juggling several other cases. Plus, waiting until sales have dried up can maximize damages, since the claim can then cover the bulk of the game’s revenue to date.

    Another possibility is that Nintendo/TPC is planning to make some big Pokémon announcements soon and wants this suit to land shortly before their own new games to reduce competition. Palworld might seem like more of a threat to the execs now that Pokémon is nearing a major release than it did in the middle of a long drought for the series.



  • A standard called SystemReady exists. For systems that actually comply with it, you can have a single ARM OS installation image that you copy to a USB drive, boot through UEFI, and run with no problems on an Ampere server, an NXP device, an Nvidia Jetson system, and more.

    Unfortunately it’s a pretty new standard, dating only to 2020, and Qualcomm in particular is a major holdout that hasn’t been using it.

    Just like on x86, you still need the OS to have drivers for the particular device you’re installing on, but this standard at least lets you have a unified image, and many ARM vendors have been getting better about upstreaming open-source drivers into the Linux kernel.


  • To the contrary, I would expect the sample to skew towards people who have a heavily customized X session and strong opinions about window managers, while drastically underrepresenting average GNOME users who stick with the default Wayland session. Someone who likes their custom setup may still be waiting for a Wayland equivalent, while casual Ubuntu users have been defaulted to Wayland on new non-Nvidia installs since early 2021.




  • For years I’ve been using KeepassXC on desktop and Keepass2Android on mobile. Rather than syncing the kdbx file between my devices, I have each device access it over the network via SFTP, SMB, or NFS; regardless of protocol, I need to connect to my home’s VPN to reach it when away from home, since I don’t directly expose those services to the outside world.

    I used to also keep a second copy of the website-tied passwords in Firefox Sync, but recently tried migrating that to Proton Pass because I thought the PIN feature might help, then ultimately decided to move away from that too and start using the KeepassXC-Browser plugin instead. I considered Bitwarden as well but haven’t tried it yet; I was somewhat deterred by people saying its UI seems very outdated.


  • There’s only one case I’ve found where Wi-Fi use seems acceptable in IoT: ESPHome. It’s open-source firmware for microcontrollers that makes DIY IoT sensors and controls accessible over LAN without phoning home to whatever remote server, without trying to make anything accessible over the Internet, and without breaking in any way if the device has no route to the Internet.

    I still wouldn’t call Wi-Fi use ideal even there; mesh can help in larger homes and Z-Wave/Zigbee radios tend to be more power efficient, though ESP32 isn’t exactly suited for a battery-powered device that’s expected to run 24/7 regardless.
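
    For a concrete sense of what the config side looks like, here’s a minimal ESPHome sketch; the node name, board, and DHT22-on-GPIO4 wiring are assumptions for illustration, not a recommendation:

    # Hypothetical node; adjust name/board/pins to your hardware
    esphome:
      name: bedroom-sensor

    esp32:
      board: esp32dev

    # Credentials come from a local secrets.yaml
    wifi:
      ssid: !secret wifi_ssid
      password: !secret wifi_password

    # Local-only API for Home Assistant; nothing here needs a route to the Internet
    api:

    logger:

    sensor:
      - platform: dht
        pin: GPIO4
        model: DHT22
        temperature:
          name: "Bedroom Temperature"
        humidity:
          name: "Bedroom Humidity"
        update_interval: 60s

    From there, esphome run bedroom-sensor.yaml builds and flashes it, and the device operates entirely over LAN.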


  • Yes.

    My home server has dropbear-initramfs installed so that after a reboot I can reach the LUKS decryption prompt over SSH. The one LUKS partition contains a btrfs filesystem with both rootfs and home as subvolumes. For all the other drives attached to that system, I use ZFS native encryption, with datasets that unlock using a keyfile stored on that rootfs, and I keep backups of an encrypted copy of that keyfile.

    I don’t think there’s a substantial performance impact but I’ve never bothered benchmarking.



  • > I’m not sure if this is required. Any decent e-mail server uses TLS to communicate these days, so everything in transit is already encrypted.

    In transit, yes, but not end-to-end.

    One feature that Proton advertises: when you send an email from one Proton Mail account to another Proton address, the message is automatically encrypted such that (assuming you trust their client-side code for webmail/bridge) Proton’s servers never have access to the message contents for even a moment. Mail arriving from outside is different: when it hits Proton’s SMTP server, Proton technically could (but claims not to) log the unencrypted contents before encrypting them with the recipient’s public key and storing them, which undermines the promise of Proton not having access to your emails. If both parties in an email conversation agree to use PGP encryption, they can avoid that risk, and no mail server on either end would have access to anything more than metadata and the initial exchange of public keys, but most humans won’t bother doing that key exchange and almost no automated mailers would.

    Some standard way of automatically asking a mail server “Does user@proton.me have a PGP public key?” would help on this front, as long as the server doesn’t reject senders who ignore the feature and send over SMTP/TLS as normal without PGP. This still requires trusting that the server doesn’t hand out an incorrect public key, but any suspicious behavior on that front would be very noticeable in a way that server-side logging would not be. Users who deem that risk unacceptable can still use a separate set of PGP keys.


  • They say the reason for needing their bridge is the encryption at rest, but I feel like the better way to push email privacy forward would be to publish (or, better yet, coordinate with other groups on drafting) a public standard for a mail-syncing protocol with that sort of zero-access encryption, one that requires users to give their client a key file and that both clients and competing email servers could adopt. A bridge would be easier to swallow as a fallback option until there’s wider client support, rather than as the only way.

    A similar standard for server-to-server communication, like for automatic PGP key negotiation, would be nice too.

    Still, Proton has an easy-to-access data export that doesn’t require a bridge client or subscription or anything. I think that’s required by GDPR. It’s manual enough not to be an effective way to keep up-to-date backups in case you ever abruptly lose access, but it’s good enough for migrating to another provider.


  • Compared to SimpleLogin (or Proton Pass aliases, Addy, Firefox Relay, etc.), one other downside of a catchall is the associations it creates across accounts. Registering with a @passmail.net address implies that I use Proton; registering with random-string@mydomain.org implies I have access to that domain. If 10 data breach leaks each contain exactly one account matching the latter pattern, that’s a strong sign the domain isn’t shared. And if just one breached site has my mailing address, my real identity can be tied to all the others.



  • Stylus/handwriting-oriented note-taking. Stuff like Samsung Notes or Goodnotes (or OneNote, though it does a lot more) in the Android space, or e-ink options like reMarkable’s stock software.

    If I just want to use a keyboard for everything, I have great FOSS options like Joplin and Standard Notes, but when I want to use a pen instead, it feels like no freedom-respecting option even remotely approaches the usability of just sticking with real ink and Moleskine-like paper notebooks.

    Even someone willing to pay an upfront fee for proprietary apps will struggle to find good options that allow syncing and reading (let alone editing) their notes on other devices/platforms without resorting to a monthly subscription.


  • Something I’ve noticed that is related but tangential to your problem: the result I’ve always gotten from using compose files is that containers and volumes get assigned names sharing a common prefix by default. I don’t use docker and instead prefer podman, but I would expect both to behave the same on this front. For example, when I have a file at nextcloud/compose.yml that looks like this:

    volumes:
      nextcloud:
      db:
    
    services:
      db:
        image: docker.io/mariadb:10.6
        ...
      app:
        image: docker.io/nextcloud
        ...
    

    I end up with volumes named nextcloud_nextcloud and nextcloud_db, and containers named nextcloud_db and nextcloud_app, as long as neither of those services overrides this behavior by specifying a container_name. I believe this prefix comes from the file-level name: key if there is one, and from the parent directory’s name otherwise.

    The reasons I adjust my own compose files away from the image maintainer’s recommendation include accommodating the differences between podman and docker, avoiding conflicts between published host ports, changing which host filesystem paths get mounted in the container, and my own preferences. The only conflict I’ve had with other containers is the published port: zigbee2mqtt, nextcloud, and freshrss all suggest using port 8080, so I had to change at least two of them in order to run all three.
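
    As a sketch of those kinds of overrides (the project name, container name, and host port below are hypothetical picks, not anything the image maintainers suggest):

    # Top-level name: replaces the directory-derived prefix
    name: cloud

    services:
      app:
        image: docker.io/nextcloud
        # container_name opts this container out of the prefix entirely
        container_name: nextcloud-app
        ports:
          # host side moved off 8080 to avoid the clash described above
          - "8081:80"

    docker compose honors the top-level name: key per the compose spec; I’d double-check how your podman-compose version handles it before relying on it.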


  • I have configured custom Android kernel builds to enable more USB drivers, enable module support, and tweak various other things. For one tangible example of the result: I could plug in a USB Wi-Fi adapter and broadcast my own AP from it while the internal NIC simultaneously stayed connected to another Wi-Fi network. On an Android device of all things. I have also adjusted kernel builds for SBCs (like Pi clones) to get things working at all.

    I have never seen any reason to configure a custom kernel for my own desktop/laptop systems. Default builds for the distros I’ve used have been fine for me; if I’m ever dissatisfied with anything it’s the version number rather than the defconfig. The RHEL/Rocky kernel omits a few features I want (like btrfs) but I’d rather stick to other distros on personal systems than tweak a distro that isn’t even meant for tweaking.