Buying a domain. There might be some free services that, similar to DuckDNS in the beginning, work reliably for now. But IMHO they are not worth the potential headaches.
DuckDNS pretty often has outages and fails to propagate updates properly. It’s not very reliable, especially if your IP changes frequently.
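At least the failures are easy to spot: comparing what DuckDNS currently serves against your actual public IP (hostname below is a placeholder) shows a stale record immediately:

```
# Placeholder hostname; a mismatch means the update hasn't propagated
dig +short myhome.duckdns.org @1.1.1.1
curl -s https://ifconfig.me
```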
Windows, like any operating system, is best run in a context that is most useful to the user and appropriate for their technical level.
I’d appreciate it very much!
Great suggestion to secure the backups themselves, but I’m more concerned about the impact an attacker on my network might have on the external network and vice versa.
That’d be the gold standard. Unfortunately, the external network utilizes infrastructure that doesn’t support specifying firewall rules on the existing separate VLAN, so all rules would have to be applied on the Pi itself or on yet another device in between, which is something I’d like to avoid. Great general advice, though!
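If it helps, isolating the two networks on the Pi itself could look roughly like this with plain iptables; the interface names and the Wireguard port are assumptions:

```
# Assumed names: eth0 faces the external network, the tunnel home runs over UDP 51820
EXT_IF=eth0
WG_PORT=51820

# Allow only Wireguard and established traffic in from the external network
iptables -A INPUT -i "$EXT_IF" -p udp --dport "$WG_PORT" -j ACCEPT
iptables -A INPUT -i "$EXT_IF" -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i "$EXT_IF" -j DROP

# Never forward between the external network and the tunnel/home side
iptables -A FORWARD -i "$EXT_IF" -j DROP
iptables -A FORWARD -o "$EXT_IF" -j DROP
```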
While this is a great approach for any business hosting mission-critical or user-facing resources, it is WAY overkill for a basic self-hosted setup involving family and friends.
For this to make sense, you’d need access to 3 different physical locations with their own ISPs, or you’d have to rent 3 different VPSes.
Assuming one would use only 1 data drive plus an equal parity drive per node, we’re now talking about 6 drives with the total usable capacity of one. If one instead uses fewer drives and attaches one or two data drives to the nodes remotely, I/O and latency become an issue, and you’ve effectively introduced more points of failure than before.
To say nothing of the massive increase in initial and running costs, as well as the administrative headaches: this isn’t worth it for basically anyone.
I’ve been tempted by Tailscale a few times before, but I don’t want to depend on their proprietary clients and control server. The latter could be solved by selfhosting Headscale, but at this point I figure that going for a basic Wireguard setup is probably easier to maintain.
I’d like to have a look at your rules setup. I’m especially curious if/how you handled the event of the commercial VPN Wireguard tunnel(s) on your exit node(s) going down, which, depending on the setup, may send requests meant for the commercial VPN through your VPS exit node instead.
Personally, I ended up with two Wireguard containers in the target LAN: a **wireguard-server** and a **wireguard-client** container.
They both share a docker network with a specific subnet {DOCKER_SUBNET} and wireguard-client has a static IP {WG_CLIENT_IP} in that subnet.
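For reference, a minimal sketch of how that shared network could be set up; the subnet, IP, and image are assumptions standing in for {DOCKER_SUBNET} and {WG_CLIENT_IP}:

```
# Sketch only: subnet and IP are stand-ins for {DOCKER_SUBNET}/{WG_CLIENT_IP}
docker network create --subnet 172.30.0.0/24 wg-net

# wireguard-client gets a static IP so wireguard-server can route via it
docker run -d --name wireguard-client \
  --network wg-net --ip 172.30.0.2 \
  --cap-add NET_ADMIN \
  --sysctl net.ipv4.conf.all.src_valid_mark=1 \
  -v "$PWD/wg-client:/config" \
  lscr.io/linuxserver/wireguard
```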
The wireguard-client has a slightly altered standard config to establish a tunnel to an external endpoint, a commercial VPN in this case:
```
[Interface]
PrivateKey = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Address = XXXXXXXXXXXXXXXXXXX
PostUp = iptables -t nat -A POSTROUTING -o wg+ -j MASQUERADE
PreDown = iptables -t nat -D POSTROUTING -o wg+ -j MASQUERADE
PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT

[Peer]
PublicKey = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
AllowedIPs = 0.0.0.0/0,::0/0
Endpoint = XXXXXXXXXXXXXXXXXXXX
```
where

```
PostUp = iptables -t nat -A POSTROUTING -o wg+ -j MASQUERADE
PreDown = iptables -t nat -D POSTROUTING -o wg+ -j MASQUERADE
```

is responsible for properly routing traffic coming in from outside the container, and

```
PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -I OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -D OUTPUT ! -o %i -m mark ! --mark $(wg show %i fwmark) -m addrtype ! --dst-type LOCAL -j REJECT
```

is your standard kill-switch, meant to block traffic going out of any network interface except the tunnel interface in the event of the tunnel going down.
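To check that the kill-switch actually engages, something like this should do; the container name is an assumption, and it presumes curl is available in the image:

```
# With the tunnel up, this should return the VPN provider's IP
docker exec wireguard-client curl -s https://ifconfig.me

# Simulate tunnel failure WITHOUT wg-quick down (which would run PreDown
# and remove the kill-switch rules), then confirm traffic is blocked
docker exec wireguard-client ip link set wg0 down
docker exec wireguard-client curl -s --max-time 5 https://ifconfig.me || echo "blocked, as intended"
```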
The wireguard-server container has these PostUps and PostDowns:

```
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```

default rules that come with the template and allow for routing packets through the server tunnel

```
PostUp = wg set wg0 fwmark 51820
```

marks Wireguard’s own encapsulated packets leaving the tunnel interface, so the rules below can exempt them

```
PostUp = ip -4 route add 0.0.0.0/0 via {WG_CLIENT_IP} table 51820
```

adds a default route to routing table 51820 that sends all packets through the wireguard-client container

```
PostUp = ip -4 rule add not fwmark 51820 table 51820
```

packets not marked with 51820 (i.e. everything except Wireguard’s own encapsulated traffic) should use routing table 51820

```
PostUp = ip -4 rule add table main suppress_prefixlength 0
```

consult the main routing table first, but ignore its default route, so manually added (more specific) routes are still respected

```
PostUp = ip route add {LAN_SUBNET} via {DOCKER_SUBNET_GATEWAY_IP} dev eth0
```

routes packets with a destination in {LAN_SUBNET} to the actual {LAN_SUBNET} of the host

```
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE; ip route del {LAN_SUBNET} via {DOCKER_SUBNET_GATEWAY_IP} dev eth0
```

deletes those rules when the tunnel goes down

```
PostUp = iptables -I OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -I OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT
PreDown = iptables -D OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT && ip6tables -D OUTPUT ! -o %i -m mark ! --mark 0xca6c -m addrtype ! --dst-type LOCAL -j REJECT
```
Basically the same kill-switch as in wireguard-client, but with the mark substituted manually, since the `$(wg show %i fwmark)` command it relied on didn’t work in my server container for some reason. That should be fine, because the mark doesn’t change anyway: 0xca6c is just 51820 in hex.
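For anyone rebuilding this, the mark and the policy routing can be sanity-checked from inside the wireguard-server container, roughly like so:

```
# 0xca6c is simply 51820 in hex, matching `wg set wg0 fwmark 51820` above
printf '%d\n' 0xca6c          # prints 51820

# The policy rules and the custom table should reflect the PostUps above
ip rule show                  # expect: not from all fwmark 0xca6c lookup 51820
ip route show table 51820     # expect: default via {WG_CLIENT_IP} dev eth0
```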
Now do I actually need the kill-switch in wireguard-server? Is the kill-switch in wireguard-client sufficient? I’m not even sure anymore.
Oh, I’m fully aware. I personally don’t care, but if the internet connection of the LAN in question isn’t sufficient, one could add a capable VPS and deploy the Wireguard host container plus two client containers, one for the LAN and one for the commercial VPN (like so).
Oh, neat! Never noticed that option in the Wireguard app before. That’s very helpful already. Regarding your OPNsense setup:
I’ve dabbled in some (simple) routing before, but I’m far from anything one could call competent in that regard, and even if I’d read up properly before writing my own routes/rules, I probably still wouldn’t trust that I hadn’t forgotten something to e.g. prevent IP/DNS leaks.
I’m mainly relying on Docker and was hoping for pointers on how to configure a Wireguard host container to route only internet traffic through another Wireguard client container.
I found this example, which is pretty close to my ideal setup. I’ll read up on that.
Great synopsis!
The cool thing about GrapheneOS: it provides basically all the comfort and usability of any stock Android ROM, minus some compatibility issues with a portion of Google apps and services (Google Pay doesn’t work and probably never will, for example), while providing state-of-the-art security and privacy if you choose to utilize those features. A modern Pixel with up-to-date GrapheneOS, configured the right way, is literally the most secure and private smartphone you can get today.
While this is certainly a cool concept, local voice assistants like this are currently a novelty. Cool to play around with, though!
You can expect around 5 seconds of processing time before it starts generating a response to a basic question on a very basic model like Llama 3 8B.
For context: using Moondream2 (as recommended) on a RasPi 5, it takes around 50 seconds to process an image taken by the camera and start generating a description.
The problem with Nix and its forks, IMHO, is that it takes a lot of work, patience, time, and the willingness to learn yet another complex workflow, with all of its shortcomings and quirks, to transition from something tried, tested, and stable to something very volatile with no guaranteed widespread adoption.
The whole leadership drama and the resulting forks, which may or may not aim for feature parity or spin off into their own thing, certainly don’t make the investment seem more attractive, either.
I, too, like the concept of Nix very, very much. But apart from some experimental VMs, I’m not touching it on anything resembling a production environment until it looks like it’s here to stay (predictable).
I simply can’t wrap my head around the thought process behind launching a clusterfuck like this. Y Combinator probably didn’t do their due diligence and simply rode the fading AI bubble, so I can at least understand how the funding might have been approved.
But actively leaving your $250,000+/year job to team up with some questionable characters, basically fork two open-source projects, change the Discord links, and generate an illegal license for that shit show, all while proudly stating, publicly, “dawg i chatgpt’d the license, anyone is free to use our app for free for whatever they want. if there’s a problem with the license just lmk i’ll change it. we busy building rn can’t be bothered with legal” when made aware of the fact?
This is absolutely insane. It sounds like someone was about to get fired and decided to use some personal relations and fresh graduates to somehow cash in one last time, with absolutely no regard for even the basics. Pretty wild that those guys even managed to figure out how to found a startup. They probably asked ChatGPT for instructions there, as well.