A person with way too many hobbies who still keeps learning new things.

  • 1 Post
  • 79 Comments
Joined 2 years ago
Cake day: June 7th, 2023


  • If you want stability, you probably can’t beat Debian, and you should be fairly used to the backend by now. I suspect getting the stylus working will just be a matter of figuring out which package provides your current access to it.

    Before you wipe the laptop, I would recommend listing all the installed packages so you’ll at least have a reference for what was in place before. And if possible, grab a backup of the /etc folder (or whatever might still be accessible) so you can reference the current configs when recreating whatever doesn’t work by default.
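
    Something like this should cover it (assuming a Debian-based system; the filenames here are just examples):

    dpkg --get-selections > installed-packages.txt    # package names only
    dpkg -l > installed-packages-full.txt             # names plus versions
    sudo tar czf etc-backup.tar.gz /etc               # snapshot of the current configs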

    There are a number of lightweight desktops you can choose from. I personally like Mate, but maybe you can play around with others on the new system and purge the ones you don’t like. And while you’re swapping drives, check the memory slots, maybe you can drop another 8GB stick in there to give the whole system a boost.



  • Keep an eye out for people trashing perfectly good desktop machines because Windows 10 is being retired.

    If you want a server that “does it all” then you would need the most decked-out, top-of-the-line server available… Obviously that is unrealistic, so as others have mentioned, knowing WHAT you want to run is required to even begin to guess at what you will need.

    Meanwhile, here’s what I suggest: grab any desktop machine you can find to get yourself started. Load up an OS and start adding services. Maybe you want to run a personal web server, a file server, or something more extensive like Nextcloud? Get those things installed and see how they run. At some point you will start seeing performance issues, and that tells you when it’s time to upgrade to something more capable. You may simply need more memory or a better CPU, in which case you can get the parts, or you may need to really step up to something with dual CPUs or internal RAID. You might also consider splitting services between multiple desktop machines, for instance one dedicated NAS and another running Nextcloud. Your personal setup will dictate what works best for you, but the best way to learn these things is to just dive in with whatever hardware you can get ahold of (especially when it’s free) and use that as your baseline for any upgrades.
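
    For spotting those performance issues, the standard tools are plenty; a rough sketch (iostat comes from the sysstat package on Debian-based systems):

    vmstat 5       # CPU, memory, and swap activity sampled every 5 seconds
    iostat -x 5    # per-disk utilization, to see if storage is the bottleneck
    free -h        # quick snapshot of memory and swap usage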


  • My current desktop came from a co-worker, but you can also put the word out to family and friends that you’re interested in their old machines. Most people are happy to give them away because otherwise it costs them money to dispose of electronics. If nothing else, you could post on Nextdoor or a local Facebook page that you’re looking for a Win10 machine that would otherwise be trashed.

    Older machines also mean dirt-cheap upgrades. The desktop I have came with a Celeron CPU. I dropped in an i7 for $10 from eBay, and recently upgraded it to 24GB of RAM with sticks I had pulled from other free systems. When you switch to Linux you’re not wasting horsepower on Microsoft spyware crap, so this machine does just fine for my needs (although I’m also not trying to play games).




  • Agreed on Debian stable. Long ago I tried running servers under Ubuntu… that was all fine until the morning I woke up to find all of the servers offline because a security update had destroyed the network card drivers. Debian has been rock-solid for me for years, and buying “commercial support” basically means paying someone else to do Google searches for you.

    I don’t know if I’ve ever tried Flatpaks; I thought they basically had the same problems as snaps?


  • I’m not sure about other distros; I’ve just heard a lot of complaints about snaps under Ubuntu. Cura was the snap I tried on my system that constantly crashed until I found a .deb package. Now it runs perfectly fine without sucking up a ton of system memory. Thunderbird is managed directly by Debian, and firefox-esr is provided by a Mozilla repo, so they all get installed directly instead of through third-party software (although I think I tried upgrading Firefox to a snap version once and it was equally unstable). Now I just avoid anything that doesn’t have a direct installer.


  • That’s what I was thinking too… If they’re running Ubuntu then they’re probably installing packages through snaps, and that’s always been the worst experience for me. Those apps bog down my whole system, crash or lock up, and are generally unusable. I run Debian but have run into apps that wanted me to use a snap install. For one package I managed to find a direct installer, which is rock-solid compared to the snap version; the rest of the programs I abandoned.

    Firefox (since it was mentioned) is one of those things I believe Ubuntu installs as a snap, despite there being a perfectly usable .deb package. I applaud the effort behind snap and others to make a universal installation system, but it is so not there yet and shouldn’t be the default of any distro.
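
    For anyone who wants the .deb route on Debian or Ubuntu, Mozilla publishes its own apt repo. The steps are roughly the following, but double-check Mozilla’s current instructions since the key and repo details could change:

    sudo install -d -m 0755 /etc/apt/keyrings
    wget -q https://packages.mozilla.org/apt/repo-signing-key.gpg -O- | sudo tee /etc/apt/keyrings/packages.mozilla.org.asc > /dev/null
    echo "deb [signed-by=/etc/apt/keyrings/packages.mozilla.org.asc] https://packages.mozilla.org/apt mozilla main" | sudo tee /etc/apt/sources.list.d/mozilla.list
    sudo apt update && sudo apt install firefox
    # on plain Debian, 'sudo apt install firefox-esr' also works with no extra repo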


  • But why doesn’t it ever empty the swap space? I’ve been using vm.swappiness=10 and I’ve tried vm.vfs_cache_pressure at 100 and 50. Checking ps, I’m not seeing any services that would be idling in the background, so I’m not sure why the system thought it needed to put anything in swap. (And FWIW, I run two servers with identical services that I load balance between, but the other machine has barely used any swap space, which adds to my confusion about the difference.)

    Why would I want to reduce the amount of memory in the server? Isn’t all that cache memory being used to help things run smoother and reduce drive I/O?
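
    If you just want to force swap to drain back into RAM (and you have enough free memory to absorb it), a cycle like this works:

    free -h                            # confirm free RAM exceeds the swap in use
    sudo swapoff -a && sudo swapon -a  # pushes everything out of swap, then re-enables it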


  • And how does cache space figure into this? I have a server with 64GB of RAM, of which 46GB is being used by system cache, but I only have 450MB of free memory and 140MB of free swap. The only ‘volatile’ service I have running is slapd, which can run in bursts of activity; otherwise the only things of consequence running are webmin and some VMs which collectively can use up to 24GB (though they actually use about half that), but there’s no reason those should hit swap space. I just don’t get why the swap space is being run dry here.
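
    One way to see exactly which processes are sitting in swap is to pull VmSwap out of /proc, something like:

    for p in /proc/[0-9]*; do
      awk '/^Name/{n=$2} /^VmSwap/{print $2, n}' "$p/status" 2>/dev/null
    done | sort -rn | head    # top swap users, sizes in kB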




  • Shdwdrgn@mander.xyz to Linux@lemmy.ml · How long has your PC been on for? · 6 months ago

    22:57:20 up 70 days, 16:04, 21 users, load average: 1.10, 1.14, 1.02

    Honestly if you were expecting a drive failure in three years, you probably have some other problem. The SSD in my desktop is clocking 7.3 years, and I never shut down my machines except to reboot. On my servers I have run used HDDs from eBay for up to ten years (only retired for upgrades). My NAS is currently running a mixture of used drives from eBay and some refurbs from Amazon, and I don’t anticipate seeing any issues for at least a few more years.
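
    If you want to see how a drive is actually holding up, SMART data gives you real numbers (smartmontools package; adjust the device path for your system):

    sudo smartctl -a /dev/sda | grep -iE 'power_on_hours|reallocated|overall-health'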


  • More drives also means higher power consumption, so you would need a larger battery backup.

    It also means more components prone to failure, which increases your chance of losing data. More drives means more moving parts and electrical connections (data and power cables, backplanes), plus more generated heat that you need to cool down.

    I’d be more curious how many failures you’re seeing that make you think smaller drives would be the better option. I have historically used old drives from eBay or manufacturer refurbs, and even the worst of those have been reliable enough that I only replace a drive once every year or two. With RAID6 or raidz2 you should have plenty of protection against data loss during a rebuild. I wouldn’t consider using a lot of little drives unless it was the only option I had, or someone gave them away for free.



  • Are you sure about that? Ever hear of the supposedly “predictable” network names in recent Linux versions? Yeah, those can change too. I was trying to set up a new firewall with two internal NICs plus a 4-port card, and they kept moving around. I finally figured out that if I cold-booted, the NICs would come up in one order, and if I warm-booted they would come up in a completely different order (the ports on the card would reverse the order they were detected in). This was completely the fault of systemd, because when I installed an older Linux and used udev to map the ports, it worked exactly as predicted. These days I trust nothing.
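
    For anyone fighting the same thing, the udev mapping is just a rules file matching on MAC address; something along these lines (the addresses are placeholders):

    # /etc/udev/rules.d/70-persistent-net.rules
    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:01", NAME="lan0"
    SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:02", NAME="wan0"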


  • Shdwdrgn@mander.xyz to Selfhosted@lemmy.world · Help with ZFS Array · edited · 8 months ago

    OP – if your array is in good condition (and it looks like it is), you have the option to replace the drives one by one, though this will take some time (probably over a period of days). The idea is to remove a disk from the pool by its old name, then re-add the disk under the corrected name, wait for the pool to rebuild, and repeat the process with the next drive. Double-check, but I think this is the proper procedure…

    # take the drive offline under its old name
    zpool offline poolname /dev/nvme1n1p1

    # re-add it under its stable by-id name (swap “drivename” for the real entry)
    zpool replace poolname /dev/nvme1n1p1 /dev/disk/by-id/drivename

    Check zpool status to confirm when the drive is done rebuilding under the new name, then move on to the next drive. This is the process I use when replacing a failed drive in a pool, and since that one drive is technically in a failed state right now, this same process should work for you to transfer over to the safe names. Keep in mind that this will probably put a lot of strain on your drives since the contents have to be rebuilt (although there is a small possibility ZFS may recognize the drive contents and just start working immediately?), so be prepared in case a drive does actually fail during the process.
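
    To keep an eye on the rebuild, something like this is handy:

    watch -n 60 zpool status poolname    # re-checks resilver progress every minute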


  • That is definitely true of ZFS as well. In fact I have never seen a guide which suggests anything other than using the names found under /dev/disk/by-id/ or /dev/disk/by-uuid/, and that is to prevent this very problem. If the proper convention is used, then you can plug the drives in through any available interface, in any order, and ZFS will easily re-assemble the pool at boot (there’s a one-liner below to list those names).

    So now this begs the question… is Proxmox using some insane configuration that creates drive clusters with whatever names the drives happen to boot up with???
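
    For reference, listing the stable names is a one-liner; the symlinks show which /dev/sdX each one currently points at:

    ls -l /dev/disk/by-id/ | grep -v part    # whole-disk entries only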