• 2 Posts
  • 27 Comments
Joined 1 year ago
Cake day: July 31st, 2023


  • Linus is the leader of the kernel project. As a leader, it’s his job to get the maintainers to agree. It’s not Rust’s job to make the C devs stop bullying them.

    If Linus thinks Rust is a good direction, he should show it by actually standing up to Ted and developers like him and making them behave.

    If he doesn’t think it’s a good direction, he should say that too, so the remaining Rust devs can stop wasting time on the project.

    When someone in a niche part of the project steps down like this, that’s a problem with the top-level leadership. Linus’ record on leadership is… mixed. Trending in a good direction the last few years, but this makes me wonder. He can still save this, but he has to want to.


  • Bcachefs has all of this. And it’s supposed to be faster than ZFS and btrfs. In a few years it can really be the golden Linux filesystem recommended for everybody

    ngl, I’ve lost count of the mainline Linux filesystems I’ve heard this about: ext2, ext3, btrfs, reiserfs, …

    tbh I don’t even know why I should care. I understand all the features you mentioned and why they would be good, but I don’t have them today, and I’m fine. Any problem extant in the current filesystems is a problem I’ve already solved, or I wouldn’t be using Linux. Maybe someday the filesystem will make new installations 10% better, but rn I don’t care.


  • Podman is not yet ready for mainstream, in my experience

    My experience varies wildly from yours, so please don’t take this bit as gospel.

    I have yet to find a container that doesn’t work perfectly well in podman. The options may not be identical to docker’s, though. Most issues I’ve found with running containers boil down to things that would be equally a problem in docker. A sample:

    • “rootless” containers are hard to configure. It can almost always be fixed with --privileged or some combination of permission flags. This would be equally true for docker; the only meaningful difference is podman tries to push everything into rootless. You don’t have to.
    • network filesystems cause headaches, especially smbfs with apps that keep their data in sqlite. I’ve had to use NFS, or ext4 inside a network-mounted image, for some apps. This problem is identical for docker.
    • container networking needs to be managed carefully in specific cases. These cases are identical for docker.

    And that’s it. I generally run things once from the podman command line, then use podlet to create a quadlet out of that configuration, something you can’t do with docker. If you are having any trouble with running containers under podman, try the --privileged shortcut, see that it works, and then double back if you think you really need rootless.
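
    If it helps, here’s roughly what that workflow looks like. Treat it as a sketch, not gospel: the image name, port, and volume path are made up, and you should check podlet’s docs for the exact invocation your version supports.

        # Run it once from the CLI to prove the container works.
        # (Add --privileged here if rootless permissions are fighting you.)
        podman run -d --name myapp -p 8080:8080 -v /srv/myapp:/data docker.io/library/myapp:latest

        # Feed the same command to podlet to get a quadlet .container file,
        # then let systemd manage it as a user service.
        podlet podman run -d --name myapp -p 8080:8080 -v /srv/myapp:/data docker.io/library/myapp:latest \
            > ~/.config/containers/systemd/myapp.container
        systemctl --user daemon-reload
        systemctl --user start myapp.service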


  • I haven’t deployed Cloudflare but I’ve deployed Tailscale, which has many similarities to the CF tunnel.

    • Is the tunnel solution appropriate for Jellyfin?

    I assume you’re talking about speed/performance here. The overhead is mostly a one-time cost when the connection is established, and it’s not much. In the case of Tailscale there’s additional wireguard encryption overhead on active connections, but it remains fast enough for high-bandwidth video streams. (I download torrents over wireguard, and they download much faster than realtime.) Cloudflare’s solution only adds encryption, in the form of TLS, to their edge. Everything these days uses TLS; you don’t have to sweat that performance-wise.
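
    If you want numbers for your own setup rather than my anecdote, iperf3 across the tunnel is a quick way to see what the overhead actually costs (addresses here are placeholders):

        # On the machine hosting Jellyfin, inside the tailnet:
        iperf3 -s

        # On the client, pointed at the server's tailscale address:
        iperf3 -c 100.x.y.z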

    (You might want to sweat a little over the fact that cloudflare terminates TLS itself, meaning your data is transiting its network without encryption. Depending on your use case that might be okay.)

    • I suppose it’s OK for vaultwarden as there isn’t much data being transferred?

    Performance-wise, vaultwarden won’t care at all. But please note the above caveat about cloudflare, and be sure you really want your vaultwarden TLS terminated by Cloudflare.

    • Would it be better to run nginx proxy manager for everything or can I run both of the solutions?

    There’s no conflict between the two technologies. A reverse proxy like nginx or caddy can run quite happily inside your network, fronting all of your homelab applications. Think of a reverse proxy as just a special website that branches out to every other website. With that model in mind, the tunnel provides access to the reverse proxy, and the reverse proxy provides access to everything else on its own; that’s exactly what I’m doing with tailscale and caddy.
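
    To make the “special website that branches out” idea concrete, here’s a minimal sketch of a Caddyfile doing that job. The hostnames and ports are invented; substitute your own services:

        # One reverse proxy, fronting everything else in the homelab.
        jellyfin.home.example.com {
            reverse_proxy 127.0.0.1:8096
        }

        vaultwarden.home.example.com {
            reverse_proxy 127.0.0.1:8081
        }

    The tunnel (or tailscale) then only has to reach this one machine; everything behind the proxy stays internal.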

    • General recs

    Consider tailscale? Especially if you’re using vaultwarden from outside your home network. There are ways to set it up like cloudflare, but the usual way is to install tailscale on the devices you are going to use to access your network. Either way it’s fully encrypted in transit through tailscale’s network.
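
    A minimal sketch of that “usual way”, assuming a Linux server; the client side is just installing the Tailscale app on each device and logging into the same tailnet:

        # On the server running vaultwarden:
        curl -fsSL https://tailscale.com/install.sh | sh
        sudo tailscale up

        # After that, the server is reachable at its tailnet address from any
        # device logged into the same tailnet, with nothing exposed to the
        # open internet.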


    1. Seems like a very reasonable objection to me. I’d guess that most of us Immich users are using it in the first place because it improves the privacy of our photos, and a third party seeing our location data certainly undermines that.
    2. I would have complained had I noticed, so you might be the first one to notice. Immich’s userbase isn’t huge right now, so it’s definitely possible.
    3. Feature-wise, I’d like: a) a clearly documented way to keep map data from leaving my server; b) a set of well-integrated choices (maybe even just two, as long as one of them is something like openstreetmap); c) the current configurability to be well documented.
    4. I’d love it if all such outbound data streams were also documented. Many security- and privacy-focused products give you a “quiet” mode of some kind, where you can turn off everything that sends your data somewhere else. It’s a requirement in many enterprise installations.


  • xantoxis@lemmy.world to Selfhosted@lemmy.world: Nginx 502, ssh not working.

    Some troubleshooting thoughts:

    What do you mean when you say SSH is “down”? (A quick way to tell these cases apart is sketched after the list.)

    1. Connection refused (fail2ban’s activity could result in a connection refused, but a VPN should have avoided that problem, as you said).
    2. Connection timeout: probably a failure at the port-forwarding level.
    3. Connection succeeded but closed; this can happen for a few reasons, such as the system being in an early boot-up state. There’s usually a message in this case.
    4. Connection succeeded but auth rejected; this can happen if your OS failed to boot but came up in a fallback state of some kind.
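
    A couple of stock commands will usually tell these cases apart from wherever you are (hostname is a placeholder):

        # Raw TCP check: distinguishes 'refused' from 'timed out' without SSH in the picture.
        nc -vz -w 5 home.example.com 22

        # Verbose SSH: shows whether the TCP connect works, whether the banner
        # arrives, and whether it's auth that fails.
        ssh -v you@home.example.com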

    Knowing which one of these it is can give you a lot more information about what’s wrong:

    System can’t get past initial boot = Maybe your NAS is unplugged? Maybe your home DNS cache is down?

    Connection refused = either fail2ban or possibly your home IP has moved and you’re trying to connect to somebody else’s computer? (nginx is very popular after all, it’s not impossible somebody else at your ISP has it running). This can also be a port forwarding failure = something’s wrong with your router.

    Connection succeeded + closed is similar to “can’t get past initial boot”

    Auth rejected might give you a fallback option if you can figure out a default username/password, although you should hope that’s not the case because it means anyone else can also get in when your system is in fallback.

    Very few of these things are actually fixable remotely, btw. I suggest having your sister power-cycle everything related to your setup, one device at a time: internet router, raspberry pi, NAS, your VM host, etc. Make sure to give each one a minute to cool down before plugging it back in. Hardware, particularly cheap hardware, tends to fail when it gets hot, and this can take a while to happen. And, well, it’s been hot.

    Here are a few things with a high likelihood of failing when you’re away from home:

    • heat, as previously mentioned.
    • running out of disk space. Maybe you’re logging too much; throw some more disk in there and tune down the logging. This can definitely affect SSH, and definitely won’t be fixed by a reboot. (A few quick checks for this and the OOM item are sketched after this list.)
    • OOM failures (or other resource leaks). This isn’t likely to affect your bare metal ssh, but it could. Some things leak memory, and this can lead to cascading process destruction by the OS. In this scenario you’d probably be able to connect to things in the first few minutes after a reboot, though.
    • shitty cabling. Sometimes stuff just falls out of the socket, if it wasn’t plugged in perfectly to begin with. (Heat can also contribute to this one.)
    • reliance on a cloud service that’s currently down. (This can include: you didn’t pay the bill.) Hopefully your OS boot doesn’t fail due to a cloud service, but I’ve definitely seen setups that could.
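
    Once a shell is available again, a few quick checks cover the disk and memory items above:

        df -h                      # any filesystem at 100%?
        journalctl --disk-usage    # how much space the journal is eating
        free -h                    # memory and swap headroom
        dmesg -T | grep -iE 'killed process|out of memory'   # any OOM killer activity?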


  • So an option that is literally documented as saying “all files and directories created by a tmpfiles.d/ entry will be deleted”, that you knew nothing about, sounded like a “good idea”?

    Bro, if it sounded like a good idea to someone, you didn’t fucking warn them enough. Don’t put this on them without considering what you did to confuse them.

    Also, nfn, the systemd documentation is a nightmare to read through, even if you know exactly what you’re looking for.
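
    For anyone following along: the option being quoted is, I assume, systemd-tmpfiles --purge. A hypothetical entry shows why it reads as harmless right up until it isn’t (paths invented):

        # /etc/tmpfiles.d/myapp.conf -- creates a data directory at boot
        # Type  Path             Mode  UID    GID    Age  Argument
        d       /srv/myapp/data  0750  myapp  myapp  -    -

        # "systemd-tmpfiles --create" makes the directory.
        # "systemd-tmpfiles --purge" deletes every path covered by tmpfiles.d
        # entries, including whatever the app has written under them since.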

    (I’m still gonna keep using systemd because it’s better than the alternatives, though. OP, don’t write stuff off because 1 guy is a dick.)