I’ve enjoyed using Proton for my own domain. Adding another 2-3 domains and a second user raises the cost to the point that I just can’t justify it: ~$200 up front for two years.
That brings a whole new meaning to impostor syndrome.
and not lose files
Which is exactly why you’d want to run a CoW filesystem with redundancy.
I switched in 1997.
The internet was taking off, and it was built on Linux and un*ces. It was just a lot more fun.
Also, C programming. M$ had just gotten protected memory in NT 4.0, but a lot of applications just didn’t run on NT. It’d take another three years before protected memory hit the mainstream with Win2k. No novice programmer wants their computer to bluescreen every time they make a tiny little out-of-bounds error.
I worked at a niche factory some 20 years ago. We had a tape robot with 8 tapes at some 200GB each. It’d do a full backup of everyone’s home directories and mailboxes every week, and incremental backups nightly.
We’d keep the weekly backups on-site in a safe. Once a month I’d do a run to another plant one town over with a full backup.
I guess at most we’d need five tapes. If they still use it, and with modern tapes, it should scale nicely. Today’s LTO-9 tapes hold 18TB. Driving five tapes for half an hour would give a nice bandwidth of 50GB/s. The bottleneck would be the write speed to tape, at 400MB/s.
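Just for fun, a back-of-the-envelope check of those numbers (the five-tape drive is hypothetical, figures are the LTO-9 ones quoted above):

```c
#include <stdio.h>

int main(void)
{
    const double tape_tb   = 18.0;         /* capacity per LTO-9 tape, TB   */
    const double tapes     = 5.0;          /* tapes in the car              */
    const double trip_s    = 30.0 * 60.0;  /* half-hour drive, in seconds   */
    const double write_mbs = 400.0;        /* native tape write speed, MB/s */

    double payload_gb = tapes * tape_tb * 1000.0;             /* 90,000 GB */

    /* Effective "sneakernet" bandwidth of the drive itself. */
    printf("Transport bandwidth: %.0f GB/s\n", payload_gb / trip_s);

    /* The real bottleneck: how long it takes to write the tapes at all. */
    printf("Time to write them:  %.1f hours\n",
           payload_gb * 1000.0 / write_mbs / 3600.0);
    return 0;
}
```

That prints 50 GB/s for the trip and about 62 hours to actually fill the tapes at native speed.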
I’m sure it’s great and all, but the hassle of having a filesystem that’s not in the kernel is a non-starter for me. Maybe one of those fancy NAS distros that are based on some *BSD.
Well, snapshots, too. I just consider them to be a special case of de-duplication.
I had an issue when I ran out of space during conversion between RAID profiles a few years back. I didn’t lose any data, but I couldn’t get the array to mount (and stay) read-write.
Been running BTRFS since 2010. Ext2/3/4 before that.
Using it for CoW, de-duplication, and compression. My home file server has had a long-lived array of mismatched devices. Started at 4x2TB, through 6x4TB, and now 2x18+4TB. I just move up a size whenever a disk fails.
I don’t think there have been huge issues with incompatible ISAs on ARM. If you use NEON extensions, for example, you might keep a plain C implementation that does the same thing when the extensions aren’t available (sketched below). Most people don’t handwrite such code, but those who do usually go the extra mile. ARM SoCs usually have closed-source drivers that cause headaches, as well as no standardized way of booting.
I haven’t delved super deep into RISC-V just yet, but as I understand it these systems will do UEFI, solving the bootloader headache. And yes, there are optional extensions, and you can even make your own. But the architecture allows for implementing those extensions in software: if you don’t have the gates for your fancy vector instruction, you can provide code that replicates the same behaviour. It’ll be slower on your hardware, but it’ll be compatible if done right.
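For the NEON case above, the fallback is typically just a compile-time switch. A minimal sketch of the pattern (the function and its name are made up for illustration):

```c
#include <stddef.h>

#if defined(__ARM_NEON)
#include <arm_neon.h>

/* NEON path: add four floats per iteration. */
void add_arrays(const float *a, const float *b, float *out, size_t n)
{
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);
        float32x4_t vb = vld1q_f32(b + i);
        vst1q_f32(out + i, vaddq_f32(va, vb));
    }
    for (; i < n; i++)              /* leftover tail elements */
        out[i] = a[i] + b[i];
}

#else

/* Portable fallback: same result, no NEON required. */
void add_arrays(const float *a, const float *b, float *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}

#endif
```

Slower on hardware without the extension, but the result is the same either way.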
I last used Windows NT 4.0
The internet was just starting to get interesting. Windows had software to browse and do e-mail.
Linux had the stuff to power the whole internet. It was just a whole lot more interesting if you wanted to be more than a consumer of the information super-highway.
I wouldn’t say I hate Windows. I’ve had Windows 2.0 through NT 4.0 installed, but it was more of an application that I rarely started because it usually just interfered with my MS-DOS programs. DESQview was a much preferable option, as it had true multitasking (yes, so did NT 4.0 - but it broke a lot of things).
I dual booted DOS and Linux for a couple of years, but DOS box was good enough in 1997 that I rarely had to boot DOS, so I’ve been Linux only for a couple of decades.
Sounds like I should give Windows another try.
Slackware and Red Hat were the two distros in use in the mid 90s.
My local city used proper UNIX, and my university had IRIX workstations, SPARCstations, and SunOS servers. We used Linux at my ISP to handle modem pools and web/mail/news servers. In the early 2000s we had Linux labs and Linux clusters to work on.
Linux on the desktop was a bit painful. There were no modules, and kernels had to fit into main memory, so you’d roll your own kernel with just the drivers you needed. XFree86 was tricky to configure, with manual timings for your CRT monitor; if you got them wrong, you could break your monitor.
I used FVWM2 and Enlightenment for many years. I miss Enlightenment.
I’ve been trying to convince my boomer wife to try Affinity. She works mostly with print, and it seems like a good fit to me.
I’d only use zram if I had no swap device/file.
In my experience zswap performs better, and it doesn’t get in the way of hibernation. In fact, most distros enable zswap by default today, and it doesn’t always play nicely with zram.
I guess it all depends on perspective.
I love that it’s free compared to those $10-20k licenses for similar systems.
I love that there are good package managers.
I love that it’s open source.
I hate that it’s GPLv2.
I hate how bloated the kernel is. I’d like it to fit into main memory.
I hate how it’s not POSIX-certified.
It depends on how far down the rabbithole you go.
I switched to Linux 27 years ago. My wife asks me to help her with her Windows computer every now and then, and I can’t really do it for more than a few minutes before my blood pressure is in the risk zone.
I used to play it a lot when it was cool.
I thought it was an ncurses multiplayer tetris-clone.
Resizable BAR was previously cited as a requirement for Intel Arc cards, but I think the drivers today can do without it. Sounds like your system might be too old to have that. It might be a soft requirement, as in you’ll see a performance drop if you don’t have it.
That’s funny. I switched from Slackware to Gentoo in 2003 because it was simpler.