The best way to learn is by doing!
I just built my own automation around their official documentation; it’s fantastic.
https://www.wireguard.com/#conceptual-overview
Vyatta and Vyatta-based systems (EdgeRouter, etc.) are, I'd say, good enough for the average consumer. But if we're deep enough in the weeds to be arguing the pros and cons of raw WireGuard vs. Tailscale, I think we're certainly past accepting a budget consumer router as acceptably meeting these and other needs.
Also, you don't need port forwarding and DDNS for internal routing. My phone and laptop both have automation in place for switching WireGuard profiles based on the network SSID: at home, all traffic is routed locally; outside my network, everything goes through DDNS/port forwarding.
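On the laptop, that automation is just a NetworkManager dispatcher script. Here's a rough sketch, assuming NetworkManager, wg-quick, and iwgetid are available; the path, SSID, and profile name are all placeholders:

    #!/bin/sh
    # /etc/NetworkManager/dispatcher.d/50-wg-profile (example path/name)
    # Dispatcher scripts get $1 = interface, $2 = action.
    [ "$2" = "up" ] || exit 0
    SSID="$(iwgetid -r)"                     # current Wi-Fi SSID, empty if wired
    if [ "$SSID" = "MyHomeSSID" ]; then      # placeholder SSID
        wg-quick down wg-remote 2>/dev/null  # at home: route locally
    else
        wg-quick up wg-remote 2>/dev/null    # everywhere else: tunnel home
    fi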
If you're really paranoid about it, you could always skip the port-forward route and set up a WireGuard-based mesh yourself, using an external VPS as a relay. That way you don't have to open anything directly, and internal traffic still routes when you don't have an internet connection at home. It's basically what Tailscale is, except you control the keys, have better insight into who is using them, and reverse the authentication paradigm from external to internal.
Tailscale proper gives you an external dependency (and a lot of security risk), but the underlying technology (WireGuard) does not have the same limitation. You should just deploy WireGuard yourself; it's not as scary as it sounds.
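To make the VPS-relay idea concrete, here's a minimal sketch of the hub's config. Every key, address, and port below is a placeholder; you'd generate real keys with wg genkey | wg pubkey:

    # /etc/wireguard/wg0.conf on the VPS relay (sketch, placeholder values)
    [Interface]
    Address = 10.0.0.1/24
    ListenPort = 51820
    PrivateKey = <vps-private-key>
    # Remember to set net.ipv4.ip_forward=1 on the VPS so peers can reach each other through it.

    [Peer]
    # home server
    PublicKey = <home-server-public-key>
    AllowedIPs = 10.0.0.2/32

    [Peer]
    # phone / laptop
    PublicKey = <laptop-public-key>
    AllowedIPs = 10.0.0.3/32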
Fail2ban and containers can be tricky because, under the hood, container policies often insert themselves above host policies in iptables. The Docker documentation has a good write-up on how to solve this for their implementation:
https://docs.docker.com/engine/network/packet-filtering-firewalls/
For your use case specifically: if you're using VMs only, you could run it within any VM that is exposing traffic, but for containers you'll have to run fail2ban on the host itself. I'm not sure how LXC handles this, but I assume it's probably similar to Docker.
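As a sketch of what the host-side config can look like for Docker-published ports (the jail name, port, filter, and log path are made-up examples; the important part is pointing the ban action at the DOCKER-USER chain, which is the hook the Docker write-up above describes):

    # /etc/fail2ban/jail.local on the host (example values)
    [my-web-app]
    enabled   = true
    port      = 8080
    filter    = my-web-app
    logpath   = /var/log/my-web-app/access.log
    # Insert bans into DOCKER-USER so they sit ahead of Docker's own forwarding rules:
    banaction = iptables-multiport[chain=DOCKER-USER]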
The simplest solution would be to just put something physically between your hypervisor and the internet (a Raspberry Pi-based firewall, etc.).
I'm sorry, but this is just a fundamentally incorrect take on the physics at play here.
You unfortunately can't ever prevent further breakdown. Every time you run any voltage through any CPU, you are slowly breaking down its gate oxides; this is a normal, non-thermal failure mode of consumer CPUs. The issue is that the breakdown is non-linear: as damage accumulates, it increases resistance inside the die, which in turn raises the minimum voltage the chip needs to remain stable. That higher voltage then accelerates the rate of damage even at idle, making time disproportionately more damaging the more damaged a chip already is.
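If you want the one-formula version: a standard first-order model for time-dependent dielectric breakdown (the thermochemical "E-model"; this is the textbook form, not numbers for any specific chip) puts the expected time-to-breakdown at

    t_{BD} \approx t_0 \, e^{-\gamma E_{ox}}, \qquad E_{ox} \approx V_{gate} / t_{ox}

where \gamma is the field-acceleration factor. Because the dependence on the oxide field E_{ox} is exponential, every small bump in voltage (or effective thinning of the oxide as damage accumulates) cuts the remaining lifetime disproportionately.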
If you want to read more on these failure modes, I’d recommend the following papers:
L. Shi et al., "Effects of Oxide Electric Field Stress on the Gate Oxide Reliability of Commercial SiC Power MOSFETs," 2022 IEEE 9th Workshop on Wide Bandgap Power Devices & Applications.
Y. Qian et al., "Modeling of Hot Carrier Injection on Gate-Induced Drain Leakage in PDSOI nMOSFET," 2021 IEEE International Conference on Integrated Circuits, Technologies and Applications.
+1 for cmk (Checkmk). Been using it at work for an entire data center plus thousands of endpoints, and I also use it for my 3-server homelab. It scales beautifully at any size.
The "problem" is that the more you understand the engineering, the less you believe Intel when they say they can fix it in microcode. Without writing an entire essay, the TL;DR is that the instability gets worse over time, and the only way that happens is if applied voltages are breaking down dielectric barriers within the chip. That damage is irreparable: 100% of affected chips in the wild are irreparably damaging themselves over time.
Even if Intel can slow the bleeding with microcode, they can't repair the damage, and every chip that has ever run under the bad code will have a measurably shorter lifespan. For the average gamer, that lifespan has sometimes been shorter than the warranty period itself.
Are you maybe thinking of https://distr1.org/ made by the i3 guy?
Generally, the lifecycle with this sort of thing is that old_thing becomes an alias for new_thing, and eventually old_thing gets dropped as an alias down the line.
It's still decent advice to learn the native dnf calls and to update scripts that use yum to those native calls.
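On current Fedora/RHEL-family systems you can see the alias stage of that lifecycle directly. The exact symlink target varies by distro and version, so treat the output below as illustrative:

    $ readlink -f /usr/bin/yum
    /usr/bin/dnf-3
    $ sudo dnf upgrade          # native spelling of: yum update
    $ sudo dnf install nginx    # same verb as yum, just called natively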
I believe that one was patched a while ago.
I'm a big fan of tiling window managers like i3 or Awesome (AwesomeWM). Awesome is the one I use: it's tiling, and the entire interface is built from scripts that you're encouraged to modify. Steep learning curve, but once you get it how you like it, there's nothing like it.
That is usually more incompetence than malice. They write a game that requires different operation on AMD vs. Nvidia devices and basically write:
If Nvidia: do X; else if AMD: do Y; else: crash;
The idea being that if the AMD/Nvidia check fails, there must be an issue with the check function; the developers didn't consider the possibility of a non-AMD/Nvidia card. This was especially true of old games. There are a lot of 1990s-2000s titles that won't run on modern cards or modern Windows because the developers didn't program a failure mode of "just try it anyway."
You would expose a single physical port to multiple VLANs (a trunk), then bind multiple addresses to that one connected interface. Each service would then bind itself to the appropriate address rather than "*".
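As a quick sketch with iproute2 (the interface name, VLAN IDs, and addresses are all examples):

    # one physical NIC, two tagged VLANs
    ip link add link eth0 name eth0.10 type vlan id 10
    ip link add link eth0 name eth0.20 type vlan id 20
    ip addr add 192.168.10.2/24 dev eth0.10
    ip addr add 192.168.20.2/24 dev eth0.20
    ip link set eth0.10 up && ip link set eth0.20 up
    # then each service binds its own address instead of *, e.g. in nginx:
    #   listen 192.168.10.2:443 ssl;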
You should consider reversing the roles. There's no reason your homelab can't be the client and your VPS the server; once the WireGuard virtual network exists, traffic doesn't really care which side was the client and which was the server. It also saves you from opening a port to attackers on your home network.
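The only non-obvious piece is keeping the tunnel alive from the inside. A sketch of the homelab ("client") side, with placeholder keys, addresses, and hostname:

    # /etc/wireguard/wg0.conf on the homelab box (sketch, placeholder values)
    [Interface]
    Address = 10.0.0.2/24
    PrivateKey = <home-private-key>

    [Peer]
    PublicKey = <vps-public-key>
    Endpoint = vps.example.com:51820   # the VPS is the only side that listens
    AllowedIPs = 10.0.0.0/24
    PersistentKeepalive = 25           # hold the NAT mapping open so the VPS can reach back in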
Sorry, I should have said "carbons and carbons-related QoL extensions."
Did you ever get carbons working properly? (As in, mobile and desktop clients of the same user both getting messages and marking them as read remotely between them.)
I actually had one of these myself. I worked at a college help desk as a student, and I got a call where the guy said, "Every time I flush the toilet, Xbox Live disconnects."
My first thought was that it was a joke (the absurdity of the thing, right?). I unironically asked if I was being pranked, and he said he knew we wouldn't believe him, so he made a video. Sure enough, he walks into the bathroom, flushes the toilet, and about 5 seconds later his Xbox shows a disconnection message on the TV.
Absolutely dumbfounded, I sent the networking guys up to his room, and like all of these stories, it does have a reasonable explanation. They had run the Xbox's Ethernet cable under a rug in front of the bathroom. Every time someone went to the bathroom, they would step on the cable, and the Xbox would disconnect. The timeout was 30 seconds or so, just long enough that they'd pee or flush the toilet or whatever before they noticed the disconnection.
I run Ubuntu Server's headless base install with a self-curated minimal set of GUI packages on top (X11, Awesome, PulseAudio, Thunar), but there's no reason you couldn't install KDE with Wayland. Building the system up yourself gets you really far in the anti-bloatware department, and the breadth of wiki/Google/GPT knowledge around Debian/Ubuntu means you can figure out just about any issue. I do this on a ~$200 random old Dell from eBay plus a 3050 6GB (slot power only).
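In practice that "minimal set on top of headless" amounts to roughly one line; the package names below are the Ubuntu ones as I'd expect them, so double-check against your release:

    sudo apt install --no-install-recommends xorg awesome pulseaudio thunar

The --no-install-recommends flag is doing a lot of the anti-bloat work there, since it skips the pile of suggested extras each package would otherwise drag in.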
For lighter gaming I'll use the Ubuntu PC directly, but for anything heavier I have a Win11 PC in the basement whose only task is to pipe Steam over Sunshine/Moonlight.
It is the best of both worlds.