A person with way too many hobbies, but I still continue to learn new things.

  • 1 Post
  • 60 Comments
Joined 1 year ago
Cake day: June 7th, 2023


  • Are you sure about that? Ever hear about these supposedly predictable network names in recent linux versions? Yeah, those can change too. I was trying to set up a new firewall with two internal NICs plus a 4-port card, and they kept moving around. I finally figured out that if I cold-booted the NICs would come up in one order, and if I warm-booted they would come up in a completely different order (the ports on the card would even reverse the order in which they were detected). This was completely the fault of systemd, because when I installed an older linux and used udev to map the ports, it worked exactly as expected. These days I trust nothing.
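
    For reference, a minimal sketch of the kind of udev rule I mean (the MAC addresses and interface names here are just placeholders):

    # /etc/udev/rules.d/70-persistent-net.rules
    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:01", NAME="lan0"
    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:02", NAME="wan0"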


  • Shdwdrgn@mander.xyz to Selfhosted@lemmy.world · Help with ZFS Array (edited, 11 days ago)

    OP – if your array is in good condition (and it looks like it is), you have the option to replace drives one by one, but this will take some time (probably over a period of days). The idea is to remove a disk from the pool by its old name, re-add it under the corrected name, wait for the pool to rebuild, then repeat the process with the next drive. Double-check, but I think this is the proper procedure…

    zpool offline poolname /dev/nvme1n1p1                              # take the drive offline under its old name

    zpool replace poolname /dev/nvme1n1p1 /dev/disk/by-id/drivename   # resilver it back in under the stable by-id name

    Check zpool status to confirm when the drive is done rebuilding under the new name, then move on to the next drive. This is the process I use when replacing a failed drive in a pool, and since that one drive is technically in a failed state right now, this same process should work for you to transfer over to the safe names. Keep in mind that this will probably put a lot of strain on your drives since the contents have to be rebuilt (although there is a small possibility zfs may recognize the drive contents and just start working immediately?), so be prepared in case a drive does actually fail during the process.
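
    If you want to keep an eye on the resilver between steps, something like this works (the pool name is a placeholder):

    zpool status -v poolname        # shows resilver progress and any errors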


  • That is definitely true of zfs as well. In fact I have never seen a guide which suggests anything other than using the names found under /dev/disk/by-id/ or /dev/disk/by-uuid/, and that is to prevent this very problem. If the proper convention is used then you can plug the drives in through any available interface, in any order, and zfs will easily re-assemble the pool at boot.

    So now this begs the question… is proxmox using some insane configuration to create drive clusters using whatever names the drives happen to boot up with???
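
    For illustration, a minimal sketch of that convention when creating a pool (the pool name and device IDs below are made-up placeholders):

    zpool create tank mirror \
        /dev/disk/by-id/nvme-ExampleVendor_SSD_SERIAL0001 \
        /dev/disk/by-id/nvme-ExampleVendor_SSD_SERIAL0002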


  • If you want to do it right, try to get a static IP (you may need a business account). If your ISP doesn’t offer IPv6 on static IPs, go to some place like Hurricane Electric and get a free IPv6 range routed to your static IPv4 address.

    Alternatively, you might do a search for any DDNS services that provide IPv6 (I’m not sure if any do?), and then that service will follow your residential address when it changes. Either way I think you’ll have some additional costs to weigh against your current hosting provider.
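
    If you do go the Hurricane Electric route, the Linux side of the tunnel is only a handful of commands, roughly the sketch below, where every address is a placeholder you’d swap for the values shown in your tunnelbroker details:

    ip tunnel add he-ipv6 mode sit remote 198.51.100.1 local 203.0.113.10 ttl 255   # HE tunnel server / your static IPv4
    ip link set he-ipv6 up
    ip -6 addr add 2001:db8:1234::2/64 dev he-ipv6                                  # client end of the tunnel
    ip -6 route add ::/0 dev he-ipv6                                                # send all IPv6 traffic through the tunnel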


  • Who said anything about it being standard? I said I know this CAN happen, and I said it was quite some time ago. We can only hope this insanity isn’t still in practice anywhere, but I learned long ago that expecting a corporation to NOT do foolish things will give me the same disappointing results as expecting money to come out of my ass. If there’s a manager involved, then something on the tech side is going to get fucked up in the name of saving a buck. Therefore I cannot just assume OP gets a normal NAT address, nor can I assume they have any other firewall type device between them and the internet. With limited data, the best I can do is try and provide some general information, hopefully encourage them to ask more questions or provide more specific information, and just hope they don’t have a ridiculously stupid ISP that makes things needlessly complicated.


  • Most of my experience is with iptables, but yeah, I believe nothing is implicitly denied until you start adding rules. Once you enable a couple of initial rules you should have good blocking from the outside while still allowing internal traffic to connect freely. It doesn’t get in your way until you start using it, and then it doesn’t take much to get it going.
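
    For example, a typical UFW starting point looks something like this (the ssh rule is just a placeholder for whatever services you actually want reachable):

    ufw default deny incoming     # drop unsolicited traffic from outside
    ufw default allow outgoing    # let anything you initiate go out
    ufw allow ssh                 # open only the services you really run
    ufw enable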


  • You’re right, it doesn’t make any sense. And it didn’t make any sense at the time either. After setting up the router with a laptop, I moved the connection to the firewall but it refused to connect. When I finally got ahold of tech support they said the connection locks into the first machine that logs in and they had to release it so I could connect the new machine. And just like that the firewall was given a routable IP address and connected to the internet. Stupidest thing I ever heard of, but that’s how they were set up. Now this was around 15+ years ago and I would certainly hope nobody is doing that crap today, but apparently that was their brilliant method of limiting how many devices could get online at once.


  • What are you talking about? You’re assuming that every residential router is going to have some kind of firewall enabled by default (they don’t). Sure, if OP has a router that provides a basic firewall service then it will likely block all incoming unauthorized traffic. However OP is specifically talking about a linux-based firewall and hasn’t said whether they have a router-based firewall in place as well, so we can only provide info on the firewall they specified. And if you look at UFW, the default configuration is to allow outgoing traffic and block all incoming traffic except the few ports you explicitly open.

    You’re also making the assumption that OP is using NAT, when that is not always the case for all ISPs. Some are really annoying with their setup in that they give a routable IP to the first computer that connects and don’t allow any other connections (I had that setup once with Comcast). In this case, you wouldn’t even need to define port-forwarding to get directly to OP’s computer – and any services they might be running. This particular scenario is especially dangerous for home computers and I really hope no legitimate ISP is still following a practice like this, however I don’t take anything for granted.

    Regardless of what other equipment OP has, UFW is going to provide FAR better defaults and configurability compared to a residential router that is simply set up to generate the fewest support calls for the ISP.



  • Sure it CAN be configured, but the typical policy of a firewall is to start from a position of blocking everything. From what I’ve seen, on Linux the standard starting point is blocking all incoming and allowing all outgoing. On Windows the default seems to be blocking everything in both directions. Sure, you could start with a policy of allowing everything and blocking only selected ports, but what good is that when you can’t predict which ports an attacker might come in on?
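
    With iptables that Linux starting point is roughly the sketch below (not a complete ruleset, and the order matters if you’re applying it over a remote session):

    iptables -P INPUT DROP                                                    # block all incoming by default
    iptables -P FORWARD DROP
    iptables -P OUTPUT ACCEPT                                                 # allow all outgoing
    iptables -A INPUT -i lo -j ACCEPT                                         # loopback is always allowed
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT    # accept replies to connections we started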


  • Shdwdrgn@mander.xyz to Linux@lemmy.ml · Firewalls: what SHOULD I block? (2 months ago)

    You’ve got it backwards. A firewall blocks everything, then you open up the ports you want to use. A standard config would allow everything going out, and block everything coming in (unless you initiated that connection, then it is allowed).

    So the question you should be asking, is what services do you think you’re going to be running on your desktop that you plan to allow anyone on the internet to get to?


  • Shdwdrgn@mander.xyz to Linux@lemmy.ml · A word about systemd (edited, 2 months ago)

    From my own experience it was more about being a solution in search of a problem. I see some comments about how the old init system was so horribly broken, and yet the reality was that it worked perfectly fine for all but some very niche situations. The only advantage I have ever seen with systemd is that it’s very good at parallelizing the startup/shutdown processes, but that certainly wasn’t the case when it first arrived. For example, I had a raspberry pi that booted in 15 seconds, and when I loaded a new image with systemd it took close to two minutes to boot. There were quite a lot of problems like that, which is why people were so aggravated when distro admins asked the community for their thoughts on switching to systemd and then changed the distros anyway. This also touches on the perception that the “community” accepted it and moved on – no, systemd was pushed on the community despite numerous problems and critical feedback.

    But we’re here now, systemd has improved, and we can only hope that some day all the broken bits get fixed. Personally I’m still annoyed that it took me almost a week to get static IPs set up on all the NICs for a new firewall because, despite the whole “predictable names” thing, they still kept moving around depending on whether I did a soft or hard reset. Configuring the cards under udev took less than a minute and worked consistently, but someone decided it was time to break that, I guess.
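
    For what it’s worth, the systemd-native equivalent of those udev mappings is a .link file, something like this sketch (the MAC address and interface name are placeholders):

    # /etc/systemd/network/10-lan0.link
    [Match]
    MACAddress=aa:bb:cc:dd:ee:01
    [Link]
    Name=lan0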