#1 item should be backups. (Well, maybe #2 so that you have something to back up, but don’t delete the source data until the backups are running.)
You need offsite backups, ideally in multiple locations.
Closed-source software that sends home tons of information about your system without consent. All communication is accessible to a VC-funded company that is under huge pressure to make as much money as possible.
I’ve been doing this from Firefox forever…
But “with audio” is actually a new feature. Previously I was manually sending the audio through my voice channel which worked pretty well but it would be nice to have a separate stream for the streaming audio.
Probably not enough for me to install the spyware though; I’ll keep using Discord via Firefox.
IMHO Arch is actually a great choice. They do have a minimum update frequency you need to maintain (I don’t recall exactly; I think it is somewhere between 1 and 3 months), but if you keep up with that and read the news before updates (and you are usually fine if you don’t; usually the update will just refuse to run until you intervene), things are pretty seamless. I’ve had many Arch machines running for >5 years with no issues and no reason to expect that to change. That span covers many major version updates on other distros, which are often not as seamless.
That being said, I am on NixOS now, which takes this to the next level. I am running nixos-unstable, but thanks to the way NixOS is structured I don’t need to worry about legacy cruft accumulating from many years of updates.
And after all of that I don’t think it really matters. I think any major distro you pick, whether stable, release-based or LTS, will be fine. They all have some sort of upgrade path these days (unlike in the past, where some distros just recommended a reinstall for major updates).
That’s true. And I’m not saying B2 is bad, it is just something that you should be aware of.
Their automatic replication isn’t quite as seamless as GCS or S3 though. For example, deletes aren’t replicated, so you will need a cleanup strategy. Plus, once you 2x or 3x the cost to cover multiple regions, B2 isn’t as competitive on price. My point is that it is very easy to compare apples to oranges when looking at cloud storage providers, and it is important to be aware of that.
For me B2 is a great fit and I am happy with it, but I don’t want to mislead people.
I think it depends on your needs. IIUC their storage is “single location”. Like a very significant natural disaster could take it offline or maybe even lose it. Something like S3 or Google Cloud Storage (depending on which durability you select) is multi-location (as in significantly distinct geographical regions). So still very likely that you will never lose any data, but in the extreme cases potentially you could.
If I were storing my only copy of something it would matter a lot more (although even then you are best off storing with multiple providers for social reasons, not just technical ones), but for a backup it is fine.
I’ve been using Restic to back up to Backblaze B2.
I don’t really trust B2 that much (I think it is mostly a single-DC kind of storage) but it is reasonably priced and easy to use. Plus as long as their failures aren’t correlated with mine it should be fine.
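In case it helps anyone, here is roughly what that looks like on NixOS using the services.restic.backups module. This is a minimal sketch, not my exact config: the bucket name, paths and secret file locations are placeholders.

    # Nightly restic backup to a Backblaze B2 bucket via restic's native b2: backend.
    # Bucket name, backed-up paths and secret file paths are placeholders.
    {
      services.restic.backups.home = {
        repository = "b2:my-bucket:machines/laptop";
        environmentFile = "/etc/secrets/restic-b2-env"; # B2_ACCOUNT_ID and B2_ACCOUNT_KEY
        passwordFile = "/etc/secrets/restic-password";  # repository encryption password
        paths = [ "/home" ];
        timerConfig = { OnCalendar = "daily"; };
        pruneOpts = [ "--keep-daily 7" "--keep-weekly 5" "--keep-monthly 12" ];
      };
    }

Keeping the credentials in an environment file (rather than inline strings) keeps them out of the world-readable Nix store.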
For me the biggest benefit is the ease of applying patches. For example in Nix I can easily take a patch that is either unreleased, or that I wrote myself, and apply it to my systems immediately. I don’t need to wait for it to be released upstream then packaged in my distro. This allows me to fix problems and get new features quickly without needing to mess with my system in any other way (no packages in other directories that need to be cleaned up, no extra steps after updates to remember, no cases where some packages are using different versions and no breaking due to library ABI breaks).
Another benefit that you are pointing at is changing build flags. Often I want to enable an optional feature that my distro doesn’t enable by default.
Lastly, building packages with different micro-architecture optimizations can be beneficial. I don’t do this often, but occasionally if I want to run some compute-heavy work it can be nice to get a small performance boost.
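To make the patch and build-flag points concrete, here is a rough sketch of an overlay; the package name, the patch file and the withFoo flag are made up for illustration.

    # Overlay: enable an optional feature flag and apply a not-yet-released patch.
    # "somepkg", ./fix-crash.patch and "withFoo" are placeholder names.
    final: prev: {
      somepkg = (prev.somepkg.override { withFoo = true; }).overrideAttrs (old: {
        patches = (old.patches or [ ]) ++ [ ./fix-crash.patch ];
      });
    }

On the next rebuild Nix rebuilds that one package (and anything that depends on it) against the patched source, and dropping the overlay later cleanly reverts it, so there is no leftover state to clean up.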
I wouldn’t call a nail hard to use because I don’t have a hammer. Yes, you need the right hardware, but there is no difference in the difficulty. But I understand what you are trying to say; I just wanted to clarify that it isn’t hard, just not widespread yet.
“which is hard to decode using hardware acceleration”
This is a little misleading. There is nothing fundamental about AV1 that makes it hard to decode, support is just not widespread yet (mostly because it is a relatively new codec).
Just to be clear, it is probably a good thing that YouTube re-encodes all videos. Video is a highly complex format and decoders are prone to security vulnerabilities. By transcoding everything (in a controlled sandbox), YouTube takes on most of this risk and makes it highly unlikely that the resulting video they serve to the general public can exploit any bugs in decoders.
Plus YouTube serves videos in a variety of formats and resolutions (and now different bitrates within a resolution). So even if they did try to preserve the original encoding where possible you wouldn’t get it most of the time because there is a better match for your device.
In my experience it doesn’t matter whether there is an “Enhanced Bitrate” option or not. My assumption is that around the time they added this option they lowered the regular 1080p bitrate for all videos. However, they likely didn’t eagerly re-encode old videos. So old videos still look OK at “1080p”, but newer videos look like trash whether or not the “1080p Enhanced Bitrate” option is available.
It may be worth right-clicking the video and choosing “Stats for Nerds”; this will show you the video codec being used. For me 1080p is typically VP9 while 4k is usually AV1. Since AV1 is a newer codec, it is quite likely that you don’t have hardware decoding support for it.
I’m pretty sure that YouTube has been compressing videos harder in general. This loosely correlates with their release of the “1080p Enhanced Bitrate” option. But even 4k videos seem to have gotten worse to my eyes.
Watching at a higher resolution is definitely a valid strategy. Optimal video compression is very complicated, and while compressing at the native resolution is more efficient, you can only go so far with fewer bits. Since the higher-resolution versions have higher bitrates they just fundamentally have more data available and will give an overall better picture. If you are worried about possible fuzziness you can try using 4k rather than 1440p: 3840×2160 is exactly double 1920×1080 in each dimension, so each source pixel maps to a clean 2×2 block and you won’t lose any crisp edges, whereas 1440p requires non-integer scaling.
The use case will change everything. OP is likely using much more memory than you are (especially disk cache usage) so the kernel decided to swap out some data. Maybe you aren’t using as much so it has no need.
To put it another way, you want to be using all of your RAM and swap. It becomes a problem if you are frequently reading from swap. (Writing isn’t usually as much of an issue, as those may be proactive writes done in case more memory is needed later.)
Basically a perfect OS would use RAM + swap such that the fewest disk reads need to be issued. This can mean swapping out some idle anonymous memory so that the space can be used as disk cache for hotter data.
In this screenshot the OS decided that it was better to swap out 3 GiB of something and use that space for the disk cache (“Cached”). It is likely right about this decision (but it is not always).
3 GiB does seem a bit high. But if you have lots of processes running that are using memory but are mostly idle it could definitely happen. For example in my case I often have lots of Language Servers running in my IDE, but many of them are for projects that I am not actively looking at so they are just waiting for something to happen. These often take lots of memory and it may make sense to swap these out until they are used again.
There is an option in settings to allow trying all games. By default it is only enabled for tested and verified games, but it is a simple checkbox, and then you can download and run any Windows game.
It used to be common and useful. I did this even after Valve shipped a native Linux TF2, as at the beginning the Wine method gave better results on my hardware. But that time has long passed: Valve has integrated Wine (Proton), and in almost all cases the native Linux builds will outperform Wine (and Steam will let you use the Windows version via Proton if you want, even if there is a native Linux build).
So while I suspect that there are still a few people doing this out of momentum, habit, or old tutorials, I am not aware of any good reason to do it anymore.
It would be nice if there were a shortcut to go “back to the previous site”. On one hand, using Back to navigate around map moves is often very convenient, but sometimes I want to go back to the site before the map. Having a two-level history, with both page and site, would be super useful.