• 0 Posts
  • 11 Comments
Joined 2 years ago
Cake day: July 8th, 2023

  • It depends how you define it. I first installed Slackware at work on a retired IBM PS/2 in '94 or '95, because somebody was working on MicroChannel bus support. (That never materialized.) Later, we checked out Novell Linux Desktop, maybe Debian, too. At a later job, we had some Red Hat workstations, version 5 or 6, and I had Yellow Dog Linux on an old Power Mac.

    At home, I didn’t switch to Linux until Ubuntu Breezy Badger. It was glorious to install it on a laptop and have all of the ACPI features just work. I had been running FreeBSD for several years, NetBSD on an old workstation before that, and Geek Gadgets (a collection of libraries and tools for building Unix programs on AmigaOS) before that.


  • One that Linux should’ve had 30 years ago is a standard, fully-featured dynamic library system. Its shared libraries are more akin to static libraries, just linked at runtime by ld.so instead of ld. That means executables are tied to particular versions of shared libraries, and all of them must be present for the executable to load, leading to the dependency hell that package managers were developed, in part, to address. The dynamically-loaded libraries that exist are generally non-standard plug-in systems.

    A proper dynamic library system (like in Darwin) would allow libraries to declare what API level they’re backwards-compatible with, so new versions don’t necessarily break old executables. (It would ensure ABI compatibility, of course.) It would also allow processes to start running even if libraries declared by the program as optional weren’t present, allowing programs to drop certain features gracefully, so we wouldn’t need different executable versions of the same programs with different library support compiled in. If it were standard, compilers could more easily provide integrated language support for the system, too.
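
    Just to make the graceful-degradation point concrete, here’s a rough sketch of the closest thing we have today: loading an optional library by hand with dlopen() instead of listing it as a hard dependency, so the program can still start and simply drop the feature if the library is missing. The library name libextras.so and the function extra_feature() are made up for illustration.

      /* optional.c - build with: cc optional.c -ldl
       * (newer glibc no longer needs -ldl, but it does no harm) */
      #include <dlfcn.h>
      #include <stdio.h>

      int main(void)
      {
          /* Try to load the hypothetical optional library at runtime. */
          void *handle = dlopen("libextras.so", RTLD_NOW | RTLD_LOCAL);
          if (handle == NULL) {
              fprintf(stderr, "libextras.so not found (%s); continuing without it\n",
                      dlerror());
              return 0;   /* the program still runs, minus the feature */
          }

          /* Look up the feature's entry point by name. */
          void (*extra_feature)(void);
          *(void **)&extra_feature = dlsym(handle, "extra_feature");
          if (extra_feature != NULL)
              extra_feature();

          dlclose(handle);
          return 0;
      }

    If libextras.so were instead listed as an ordinary DT_NEEDED dependency, ld.so would refuse to start the process at all when the library is absent, which is exactly the all-or-nothing behaviour described above.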

    Dependency hell was one of the main obstacles to packaging Linux applications for years, until Flatpak, Snap, etc. came along to brute-force away the issue by just piling everything the application needs into a giant blob.


  • I feel like there’s a lot of information missing here. VLANs operate at OSI layer 2, and Immich connects to its ML server via IP in layer 3. It could talk to a remote server in Ecuador over the Internet, so the layer 2 configuration is irrelevant.

    What you have is an issue of routing IP packets between subnets. You just need to set up a rule on your router to allow the Immich server on the Internet-facing IP subnet to connect to the correct port(s) for the ML server on the private subnet. Or maybe use the router’s port-forwarding feature. Lacking further information about the setup, I have to be vague here. In any case, it’s conceptually the same as punching a hole in the firewall to let IP packets from an Immich server in Ecuador get to the ML server on your private subnet, except that the server is not in Ecuador.
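
    If you want to sanity-check the routing before or after touching the router rules, a tiny TCP connect test run from the Immich host tells you whether the ML server’s port is reachable across the subnets. The address 192.168.20.10 and port 3003 below are placeholders (3003 is, I believe, the machine-learning container’s usual default, but check your own setup):

      /* reachtest.c - build with: cc reachtest.c -o reachtest
       * Run from the Immich host; substitute your ML server's IP and port. */
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>
      #include <netdb.h>
      #include <sys/socket.h>

      int main(void)
      {
          const char *host = "192.168.20.10";  /* placeholder ML server address */
          const char *port = "3003";           /* placeholder ML server port    */

          struct addrinfo hints, *res;
          memset(&hints, 0, sizeof hints);
          hints.ai_family   = AF_UNSPEC;
          hints.ai_socktype = SOCK_STREAM;

          int rc = getaddrinfo(host, port, &hints, &res);
          if (rc != 0) {
              fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
              return 1;
          }

          int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
          if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
              perror("connect");   /* filtered by the router, or no route */
              return 1;
          }

          printf("%s:%s is reachable from this host\n", host, port);
          close(fd);
          freeaddrinfo(res);
          return 0;
      }

    If this fails from the Immich host but succeeds from a machine already on the private subnet, the problem is the inter-subnet rule on the router, not Immich itself.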



  • Just spitballing here, but if I read this correctly, you pulled the Windows drive, installed Mint, and then put the Windows drive back in alongside the Mint drive? If so, that might be the issue.

    UEFI firmware looks for a special EFI partition on the boot drive, and loads the operating system’s own bootloader from there. The Windows drive has one. When you pulled the Windows drive to install Mint on another drive, Mint had to create an EFI partition on its disk to store its bootloader.

    Then, when you put the Windows disk back in, there were two EFI partitions. Perhaps the UEFI firmware was looking for the Windows bootloader in the EFI partition on the Mint disk. It would of course not find it there. In my experience, Windows recovery is utterly useless in fixing EFI boot issues.

    It’s possible to rebuild the Windows EFI bootloader files manually, but since you don’t mind blowing away both OS installs, I’d say just install Mint on the second drive while both drives are in the system, so the installer puts the Mint bootloader on the same EFI partition as the Windows one. Even with EFI, Windows will still sometimes blow away a Linux bootloader, but Linux installers are very good at installing alongside Windows. If it does get stuffed up, there’s a utility called Boot-Repair that you can put on a USB disk, and it works a lot better than Windows recovery.



  • This is madness, but since this is a hobby project and not a production server, there is a way:

    • Shrink the filesystems on the existing disks to free up as much space as possible, and shrink their partitions.
    • Add a new partition to each of the three disks, and make a RAID5 volume from those partitions.
    • Move as many files as possible to the new RAID5 volume to free up space in the old filesystems.
    • Shrink the old filesystems/partitions again.
    • Expand each RAID component partition one at a time by removing it from the array, resizing it into the empty space, and re-adding it to the array, giving plenty of time for the array to rebuild.
    • Move files, shrink the old partitions, and expand the new array partitions as many times as needed until all the files are moved.

    This could take several days to accomplish, because of the RAID5 rebuild times. The less free space, the more iterations and the longer it will take.



  • This just sounds like a bad idea, a solution in search of a problem. Sure, sudo is a setuid binary, but it’s a fairly simple program, and at some point, you have to trust the code. It’s also a very fundamental piece of the system that you want to always work, even (especially!) when other things get borked. The brief description of run0 already has too many potential points of failure.