The comment you replied to is a direct reply to the comment you linked - I don’t think it was intentional, but if it was, then I’d like to say it’s not a very helpful reply as OP already read it.
someone just plain lying about what OS they’re using in order to break fingerprinting.
The idea with avoiding fingerprinting is to look like whatever the biggest group of users looks like, because that’s who you share the fingerprint with. If you use an uncommon value for something, you make fingerprinting easier.
That’s one of the reasons why, for example, Vivaldi on Linux sets its user agent to match the latest version of Chrome on Windows.
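To make the “blend in with the biggest group” point concrete, here’s a minimal Python sketch (the attribute values are made up for illustration - a real fingerprint combines far more signals):

```python
# A fingerprint is essentially a hash over observable attributes;
# you are anonymous only among the users who share the same hash.
import hashlib

def fingerprint(user_agent: str, timezone: str, screen: str) -> str:
    blob = "|".join([user_agent, timezone, screen])
    return hashlib.sha256(blob.encode()).hexdigest()[:12]

common = fingerprint("Chrome 126 on Windows", "UTC+1", "1920x1080")
rare = fingerprint("Vivaldi 6.8 on Linux", "UTC+1", "1920x1080")

# Everyone who reports the common combination shares one fingerprint...
assert fingerprint("Chrome 126 on Windows", "UTC+1", "1920x1080") == common
# ...while one honest-but-uncommon value puts you in a tiny bucket.
assert rare != common
```

Reporting the rare value truthfully makes the hash rarer, which is exactly what a tracker wants.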
Salt from the seawater
In my very limited experience with my 5400rpm SMR WD disk, it’s perfectly capable of writing at over 100 MB/s until its cache runs out, then it pretty much dies until it has time to properly write the data, rinse and repeat.
40 MB/s sustained is weird (but maybe it’s just a different firmware? I think my disk was able to actually sustain 60 MB/s for a few hours when I limited the write speed, 40 could be a conservative setting that doesn’t even slowly fill the cache)
Fair enough. I misunderstood, my bad.
Then what’s the meaning of this whole part?
On non-corpo linux syslog can be disabled if you want, though I’d prefer to just symlink/mount /var/log to a memory filesystem instead.
Is it just a random tidbit that could be replaced with a blueberry muffin recipe without changing the meaning of the whole comment? Because it sure won’t help OP at all with their Arch-specific question, so it’s either that, or it provides contrast to the “corpo Linux”, which is how I interpreted it.
And here’s the remaining part of your comment I left out, just to make sure people won’t lose the context between two three-sentence-long comments (for those without any attention span, it comes before the previously quoted part):
If you’re on arch you use redhat’s garbage.
On non-corpo linux syslog can be disabled
systemctl disable --now systemd-journald
I’d prefer to just symlink/mount /var/log to a memory filesystem instead
Set Storage=volatile
in /etc/systemd/journald.conf
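For reference, the `Storage=volatile` setting goes under the `[Journal]` section; a drop-in file is the usual way to do it so the main config stays untouched (a sketch - drop-in support depends on your systemd version):

```ini
# /etc/systemd/journald.conf.d/volatile.conf
[Journal]
Storage=volatile
```

Then `systemctl restart systemd-journald` applies it; the journal is kept only in /run/log/journal and is lost on reboot.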
Your mileage may vary - your experience might be different for one reason or another
How is it open source?
How is it not? Open source doesn’t mean you have to accept other people’s code. And it is perfectly valid to only dump code for every release, even some GNU projects (like GCC) used to work that way. Hell, there’s even a book about the two different approaches in open source.
So whatever benefit you were hoping to get from Nvidia’s kernel modules being open source probably is not there.
It allowed the actual in-tree nouveau kernel module to take the code for interacting with the GSP firmware to allow changing GPU clock speed - in other words no more being stuck on the lowest possible frequency like with the GTX 10 series cards. Seems like a pretty decent benefit to me.
Vista’s problem was just the terrible third party drivers and the fact that it was preinstalled on machines it had no business running on. 7 didn’t improve much on it (except fixing the UAC prompt so that it no longer made you feel like you were using Linux with a misconfigured sudo timeout), but it had the benefit of already having working drivers from Vista and proper hardware capable of running Vista/7.
Zig didn’t come to my mind when I was writing my comment and I agree that it’s probably a decent option (the only issue I can think of is its somewhat small community, but that’s not a technical issue with the language).
My argument against Go and Java is garbage collection - even if Java’s infamous GC pause can apparently be worked around with a specialized JVM, I’m pretty sure it still comes at the cost of higher memory usage and wasted CPU cycles compared to some kind of reference counting or Rust’s ownership mechanism (not sure about the proper term for that). And higher memory usage is definitely not something I want to see in my browser, they’re hungry enough as is.
Why not just say Rust? There isn’t really anything else that would provide good enough performance for a browser engine with modern heavy webpages while also fixing some of C/C++’s major pain points.
They’re not doing a recall, but that doesn’t mean they won’t somehow compensate big OEMs for their warranty issues.
Probably a bit of a TL;DR of the other answer, but the short answer is: the execute bit has a different meaning for directories - it allows you to keep going down the filesystem tree (open a file or another directory inside the directory). The read bit only allows you to see the names of the files in the directory (and maybe some other metadata), but you cannot open them without the x bit.
Fun fact, it makes sense to have a directory with --x or -wx permissions - you can access the files inside if you already know their names.
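A quick Python sketch of that fun fact (assumes a POSIX system; note that root bypasses permission checks entirely, so try this as a regular user):

```python
# Demonstrate a --x------ directory: traversal works, listing would not.
import os
import shutil
import tempfile

d = tempfile.mkdtemp()
path = os.path.join(d, "known_name.txt")
with open(path, "w") as f:
    f.write("hello")

os.chmod(d, 0o100)  # --x------ : execute (search) bit only for the owner

# With only x on the directory, you can still open a file whose name you
# already know (the file's own permissions still apply)...
with open(path) as f:
    assert f.read() == "hello"
# ...but os.listdir(d) would raise PermissionError for a regular user,
# because enumerating the directory's contents needs the read bit.

os.chmod(d, 0o700)  # restore rwx so the cleanup below can delete the tree
shutil.rmtree(d)
```

The same logic is why web servers often run fine with --x on parent directories: they only ever open paths they were explicitly given.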
Edit: not a short answer, apparently
You can now turn on the “autoscrolling” feature of the Libinput driver, which lets you scroll on any scrollable view by holding down the middle button of your mouse and moving the whole mouse
Am I crazy, or did this used to be a feature? And not just in Firefox
It’s a Windows feature that never really made it to Linux. I used to miss it but honestly, middle click paste feels way more useful to me now
Maybe Redis/Redict? The development on that seems pretty dead.
Yes, that’s exactly the problem - there’s nothing wrong with the encryption used, but it’s IMHO incorrect to call it time-based when it’s “work-based” and it just so happens that the specific computer doing the encryption works at a given speed.
I don’t call my laptop’s FDE time-based encryption just because I picked an encryption that takes it 10 seconds to decrypt the key.
# Assumed imports/constants (the quoted script defines these elsewhere);
# the keyword arguments match pycryptodome's Crypto.Protocol.KDF.scrypt.
import time
from Crypto.Protocol.KDF import scrypt

SCRYPT_N, SCRYPT_R, SCRYPT_P = 2**14, 8, 1  # example cost parameters
SCRYPT_KEY_LEN = 32                         # example key length in bytes

def generate_proof_of_work_key(initial_key, time_seconds):
    proof_key = initial_key
    end_time = time.time() + time_seconds
    iterations = 0
    while time.time() < end_time:
        proof_key = scrypt(proof_key, salt=b'', N=SCRYPT_N, r=SCRYPT_R, p=SCRYPT_P, key_len=SCRYPT_KEY_LEN)
        iterations += 1
    print(f"Proof-of-work iterations (save this): {iterations}")
    return proof_key

def generate_proof_of_work_key_decrypt(initial_key, iterations):
    proof_key = initial_key
    for _ in range(iterations):
        proof_key = scrypt(proof_key, salt=b'', N=SCRYPT_N, r=SCRYPT_R, p=SCRYPT_P, key_len=SCRYPT_KEY_LEN)
    return proof_key
The first function is used during the encryption process, and the while loop clearly runs until the specified time duration has elapsed. So encryption would take 5 days no matter how fast your computer is, and to decrypt it, you’d have to do the same number of iterations your computer managed to do in that time. So if you do the decryption on the same computer, you should get a similar time, but if you use a different computer that is faster at doing these operations, it will decrypt it faster.
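The asymmetry is easy to demonstrate with a self-contained sketch (stdlib `hashlib.scrypt` standing in for the script’s KDF; the cost parameters and the 0.3 s budget are illustrative, far lower than anything you’d use for real):

```python
# "Work-based" key stretching: encryption runs for a wall-clock budget,
# decryption repeats the recorded iteration count - so a faster machine
# finishes decryption sooner, not after the same fixed duration.
import hashlib
import time

def kdf(key: bytes) -> bytes:
    # Illustrative cost parameters (n=2**12, r=8, p=1), not real settings.
    return hashlib.scrypt(key, salt=b"", n=2**12, r=8, p=1, dklen=32)

def stretch_for(initial_key: bytes, seconds: float):
    """Encrypt side: iterate the KDF until the time budget runs out."""
    key, iterations = initial_key, 0
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        key = kdf(key)
        iterations += 1
    return key, iterations

def stretch_iterations(initial_key: bytes, iterations: int) -> bytes:
    """Decrypt side: redo exactly that many iterations - the elapsed
    time depends on how fast *this* machine is, not on the original
    wall-clock duration."""
    key = initial_key
    for _ in range(iterations):
        key = kdf(key)
    return key

key, n = stretch_for(b"secret", 0.3)
assert stretch_iterations(b"secret", n) == key  # same work -> same key
```

The iteration count is the only thing that ties the two sides together, which is exactly why the delay is measured in work, not time.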
They probably fixed all the bugs they considered essential, and the rest are nice-to-have fixes that can be moved to the next cycle if necessary (and they still have a week to work on them before release, although they might be careful not to introduce severe bugs now).
The general idea with this approach is that it doesn’t make sense to block a release on a few bugs worked on by only a subset of the available developers while the rest sit idle - the project can move faster by deferring the remaining tasks to the next release and accepting the bugs in the meantime.