  • I started with rootless podman when I set up All My Things, and I have never had an issue with either maintaining or running it. Most Docker instructions are transposable, except that podman doesn’t assume everything lives on Docker Hub, so you always have to specify the registry host. I’ve run into a couple of edge cases where arguments aren’t 1:1 and I’ve had to dig to figure out what the podman equivalent is. I don’t know if I’m actually more secure, but I feel more secure, and I really like not having the docker service running as root in the background. All in all, I think my experience with rootless podman has been better than my experience with docker, but at this point, I’ve had far more experience with podman.
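
    For example (a sketch; nginx is just a stand-in image, and the config path assumes a stock install):

    # Docker assumes the docker.io registry; podman wants it spelled out
    podman pull docker.io/library/nginx:latest

    # short names like "nginx" only resolve if you configure search registries,
    # e.g. in /etc/containers/registries.conf:
    #   unqualified-search-registries = ["docker.io"]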

    Podman-compose gives me indigestion, but docker-compose didn’t exist or wasn’t yet common back when I used docker; and by the time I was setting up a homelab, I’d already settled on podman. So I just don’t use it most of the time, and wire things up by hand when necessary. Again, I don’t know whether that’s just me, or if podman-compose is more flaky than docker-compose. Podman-compose is certainly much younger and less battle-tested. So is podman but, as I said, I’ve been happy with it.
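
    By way of illustration, the by-hand equivalent of a small compose file might look like this (a sketch; the pod name and images are made up):

    # create a pod so the containers share a network namespace,
    # with the port mapping attached to the pod itself
    podman pod create --name myapp -p 8080:80

    # run the pieces inside it
    podman run -d --pod myapp --name db docker.io/library/postgres:16
    podman run -d --pod myapp --name web docker.io/library/nginx:latest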

    I really like running containers as separate users without that daemon - I can’t even remember what about the daemon was causing me grief; I think it may have been the fact that it was always running and consuming resources, even when I wasn’t running a container, which isn’t a consideration for a homelab. However, I’d rather deeply know one tool than kind of know two that do the same thing, and since I run containers in several different situations, using podman everywhere allows me to exploit the intimacy I wouldn’t have if I were using docker in some places and podman in others.



  • They can’t, tho. There are two reasons for this.

    Geolocating with cell towers requires trilateration, which needs special hardware on the cell towers. Companies used to install this hardware for emergency services, but stopped doing so as soon as they legally could, because it’s very expensive. Cell towers can’t do triangulation by themselves, as that requires even more expensive hardware to measure angles; and trilateration doesn’t work without special equipment because the wave propagation delays between the cellular antenna and the computers recording the signal are big enough to utterly throw off any estimate - radio covers roughly 300 meters per microsecond, so even a microsecond of unaccounted delay translates into hundreds of meters of error.

    An additional factor making trilateration harder (or even triangulation, in the rural cases where they did sometimes install triangulation antenna arrays on the towers) is that, since the UMTS standard, cell chips work really hard to minimize their radio signal strength. They find the closest antenna and then reduce their power until they can just barely talk to the tower, and except in certain cases they only talk to one tower at a time. This means that, at any given moment, only one tower is responsible for handling traffic for the phone - and for triangulation you need three. In addition to saving battery power, this saves the cell companies money, because of traffic congestion: a single tower can only handle so much traffic, and they have to put in more antennas and computers if the mobile density gets too high.

    The reason phones can use cellular signal to improve accuracy is that each phone can do its own triangulation, although that’s still not great and can be impossible because of the power attenuation (the phone being able to see only one tower - or maybe two - at a time). This is why Google and Apple use WiFi signals to improve accuracy, and why in-phone triangulation isn’t good enough: in any sufficiently dense urban or suburban environment, the combined information from all the WiFi routers the phone can see, plus the cell towers it can hear, can be enough to give a good, accurate position without having to turn on the GPS chip, obtain a satellite fix (which may be impossible indoors), and suck down power. But this is all done inside and by the phone - it isn’t something cell carriers can do themselves most of the time. Your phone has to send its location out somewhere.

    TL;DR: Cell carriers usually can’t locate you with any real accuracy without the help of your phone actively reporting its calculated location. This is largely because it’s very expensive for carriers to install the hardware needed for accuracy better than hundreds of meters; they are loath to spend that money, and the legislation that once required them to do so no longer exists, or is no longer enforced.

    Source: me. I worked for several years at a company that made all of the expensive equipment - hardware and software - and sold it to The Big Three carriers in the US. We also paid lobbyists to ensure that there were laws requiring cell providers to be able to locate phones for emergency services. We sent a bunch of our people and equipment to NYC on 9/11 and helped locate phones. I have no doubt law enforcement also used the capability, but that was between the cops and the cell providers. I know companies stopped doing this because we owned all of the patents on the technology and ruthlessly and successfully sued the only one or two competitors in the market, and yet we were still going out of business at the end as, one by one, cell companies found ways to argue their way out of buying, installing, and maintaining all of this equipment. In the end, the competitors we couldn’t beat were Google and Apple, and the cell phones themselves.





  • I’m 100% with you. I want a Light Phone with a replaceable battery and the ability to run the 4 non-standard apps I need to have mobile: OSMAnd, Home Assistant, Gadgetbridge, and Jami. Assuming it has phone, calculator, calendar, notes, and address book functions - the bare-bones phone features - everything else I use on my phone is something I can probably do more easily on my laptop, and is nothing I need to be able to do while out and about. If it did that, I would probably never upgrade; my upgrade cycle is already on the order of every 4 years, and if you took off all of the other crap, I’d use my phone less and upgrade even less often.

    The main issue with phones like the Light Phone is that there are those apps that need to be mobile, and they often aren’t available there.


  • since all apps are designed to run well on budget phones from 5 years ago, there’s no reason to upgrade.

    5 years, maybe, but any more is stretching it. And not getting system upgrades anymore is problematic. Unless you own a particular model of phone, de-Googled Android can be hard to come by.

    For example, I have a 7-year-old Pixel C. By the time Google stopped issuing system updates for it, I no longer wanted them, as every release made the device slower and more unstable. After some effort, I was finally able to install a version of Lineage, which has its own problems, including no updates in years. There’s a lot of software that is incompatible with my device, from both Aurora and F-Droid.

    Android isn’t Linux; Google doesn’t care about maintaining backward compatibility on old devices, much less performance, and there’s no army of engineers making sure things keep working because somewhere a server is still running in a walled-up closet no one can find.

    Google deprecates features and ABIs in Android; apps update, and suddenly they aren’t backward compatible.

    5 years, maybe. The entire industry is addicted to users upgrading their phones, and everyone gets a piece of that pie. There are no actors, except perhaps app developers, with any interest in keeping old phones running. Telecoms upgrade their wireless networks - the internet connection in my 8-year-old car, and half its navigation features, died the day AT&T decided to stop supporting 3G. Phone makers make no money if you don’t buy new phones. And maintaining backward compatibility costs Google money, which they’d rather siphon off to shareholders.




  • It’s listed as the “profile” in the screenshots you’re posting, but that’s the ruleset you’re altering.

    I’ve used nft and iptables, but my interaction with ufw has been sparse, and mostly through the GUI, because the rulesets ufw generates are incomprehensible. There should be a command in ufw to report which profile is active.
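
    That said, the ufw CLI is supposed to be able to dump what’s actually loaded right now (untested by me, for the reasons below):

    sudo ufw status verbose   # active rules, default policies, logging
    sudo ufw show added       # rules as they were added, in command form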

    I’m going to guess this is a dead end, since you’ve been using the CLI, and I have to believe it uses the active profile by default unless you tell it otherwise. However, in the GUI, editing rules in a profile doesn’t automatically apply them to your current ruleset, and altering your current ruleset doesn’t automatically persist it. So even if you change a rule on the Home profile, and the Home profile is active, the change doesn’t automatically reach the running ruleset; you have to take another action to apply it.

    Mind you, that’s all through the UI; I’ve never used the ufw command line, so this is (again) probably a red herring. I find ufw to be obtuse at best, because of the Byzantine rulesets it generates.




  • I use it for everything, but then, I wrote it. All of the desktop secret service tools have desktop dependencies (Gnome’s uses Gnome libraries, KDE’s pulls in some KDE libraries) and run over D-Bus; since I don’t use a DE, that’s a fair bit of unnecessary bloat. And I don’t like GUI apps that just hang around in the background consuming resources. I open KeePassXC when I need to make changes to the DB, and then I shut it down; otherwise, it hangs out in my task bar, distracting me.

    Rook is for people who want to run on headless systems, or want to minimize resource usage, or don’t use a desktop environment (such as Gnome or KDE), or don’t run D-Bus, or don’t run systemd. It’s for people who don’t want a bunch of applications running in the background in their task bar. KeePassXC providing a secret service is great, but it’s overkill if that’s most of what it’s doing for you, most of the time.
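
    For comparison, the desktop route looks like this from a terminal (secret-tool is libsecret’s CLI, and it needs a running D-Bus session with a service like gnome-keyring behind it):

    # store a secret (prompts for the value); "service" and "user" are
    # arbitrary attribute/value pairs used to look it up later
    secret-tool store --label='demo' service myservice user alice

    # fetch it back
    secret-tool lookup service myservice user alice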

    I don’t think rook is for everyone, or even for most people. It’s for people who like to live mostly in the command line, or even in VTs.


  • KeePassXC can’t be run in headless mode, and the GUI is tightly coupled to the app. You have to have all of X installed, and have a display running, to run it.

    Here are the runtime dependencies of KeePassXC:

    linux-vdso.so.1
    libQt5Svg.so.5
    libqrencode.so.4
    libQt5Concurrent.so.5
    libpcsclite.so.1
    libargon2.so.1
    libQt5Network.so.5
    libQt5Widgets.so.5
    libbotan-3.so.5
    libz.so.1
    libminizip.so.1
    libQt5DBus.so.5
    libusb-1.0.so.0
    libQt5X11Extras.so.5
    libQt5Gui.so.5
    libQt5Core.so.5
    libX11.so.6
    libstdc++.so.6
    libm.so.6
    libgcc_s.so.1
    libc.so.6
    /lib64/ld-linux-x86-64.so.2
    libgssapi_krb5.so.2
    libproxy.so.1
    libssl.so.3
    libcrypto.so.3
    libbz2.so.1.0
    liblzma.so.5
    libsqlite3.so.0
    libdbus-1.so.3
    libudev.so.1
    libGL.so.1
    libpng16.so.16
    libharfbuzz.so.0
    libmd4c.so.0
    libsystemd.so.0
    libdouble-conversion.so.3
    libicui18n.so.75
    libicuuc.so.75
    libpcre2-16.so.0
    libzstd.so.1
    libglib-2.0.so.0
    libxcb.so.1
    libkrb5.so.3
    libk5crypto.so.3
    libcom_err.so.2
    libkrb5support.so.0
    libkeyutils.so.1
    libresolv.so.2
    libpxbackend-1.0.so
    libgobject-2.0.so.0
    libcap.so.2
    libGLdispatch.so.0
    libGLX.so.0
    libfreetype.so.6
    libgraphite2.so.3
    libicudata.so.75
    libpcre2-8.so.0
    libXau.so.6
    libXdmcp.so.6
    libcurl.so.4
    libgio-2.0.so.0
    libduktape.so.207
    libffi.so.8
    libbrotlidec.so.1
    libnghttp3.so.9
    libnghttp2.so.14
    libidn2.so.0
    libssh2.so.1
    libpsl.so.5
    libgmodule-2.0.so.0
    libmount.so.1
    libbrotlicommon.so.1
    libunistring.so.5
    libblkid.so.1
    

    I don’t know why it links to a systemd library. Here are the runtime dependencies of rook:

    linux-vdso.so.1
    libresolv.so.2
    libc.so.6
    /lib64/ld-linux-x86-64.so.2
    

    Don’t get me wrong: KeePassXC is one of my favorite programs. But it isn’t meant to be left running all the time, and it can’t be run on headless systems.
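
    (Lists like the ones above can be reproduced with ldd; the awk bit just strips the paths and load addresses:)

    # print the sonames a binary links against
    ldd "$(command -v keepassxc)" | awk '{print $1}'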



  • I don’t know anything about the Zero Trust network you’re working with, but this is essentially the same as what I’m doing with Home Assistant. It runs on the LAN, because it’s controlling everything in my house. The server is on a battery backup, most of my devices are Z-Wave, and several are battery powered. I can lose internet and power to the house and still disarm the alarm and unlock the front door, at least until the UPS runs out, which takes several hours.

    Since HA is on my LAN, accessing it while traveling would require exposing my server to the internet, which terrifies me. I do have VPSes, though, and I have one locked down such that it’s only accessible via VPN; it exposes no ports to the WAN except the WireGuard port. To get to my HA, I connect to that VPS via the VPN, which puts me on a VPN subnet shared with my home server.

    The downside is that it is not possible to access my LAN (and, therefore, my HA server) without a pre-configured client. If I don’t have my laptop or phone, I can’t get to my LAN. If my VPS went down, I couldn’t get to my LAN. And, obviously, if my home internet goes down, I can’t get to my LAN. I’d rather be safe than sorry, though.
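
    If anyone wants to replicate the topology, here’s a minimal sketch of one client’s config, assuming a hub-and-spoke WireGuard setup where the VPS is the only public endpoint (keys, subnet, port, and hostname are all placeholders):

    # /etc/wireguard/wg0.conf on a roaming client
    [Interface]
    Address = 10.8.0.3/24
    PrivateKey = <client-private-key>

    [Peer]
    # the VPS hub - the only peer with a public endpoint;
    # AllowedIPs routes the whole VPN subnet through it
    PublicKey = <vps-public-key>
    Endpoint = vps.example.com:51820
    AllowedIPs = 10.8.0.0/24
    PersistentKeepalive = 25

    # bring it up with: wg-quick up wg0
    # the hub also needs net.ipv4.ip_forward=1 to relay traffic between peers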



  • I think largely we are aligned on what we are looking for in a platform. The private blog idea is interesting. I normally consider blogs as public, are there private blog platforms?

    Sure. If nothing else, you could proxy it through an authenticated endpoint, requiring people to log in to view it. But I don’t know the blogging software space very well - there are probably projects with built-in support for this. I’ve started looking around; I suspect the ideal platform isn’t so much a blogging platform as something designed around a blog-style layout.
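
    As a concrete sketch of the proxy idea (hostname, paths, and the upstream port are placeholders, and TLS is omitted for brevity): nginx can demand a login via HTTP basic auth before passing anything through to the blog. Create users with htpasswd, then:

    # /etc/nginx/conf.d/blog.conf - hypothetical authenticated front-end
    server {
        listen 80;
        server_name blog.example.com;

        location / {
            auth_basic "Private blog";
            auth_basic_user_file /etc/nginx/.htpasswd;   # from: htpasswd -c /etc/nginx/.htpasswd alice
            proxy_pass http://127.0.0.1:8080;            # wherever the blog engine listens
        }
    }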

    If you come across one, please let me know! I’ll keep updating that CryptPad document. I also started a spreadsheet, which is better suited to the data than a document table, but CryptPad doesn’t have the ability to embed assets from other documents (other than images), so I’m just maintaining the table manually.

    On the other hand, projects die when the maintainers lose interest.

    Absolutely. Good projects attract multiple maintainers; there’s a bit of Darwinism there. When one project I used was archived, I offered to take over maintainership; the author didn’t want to hand it over, so I hard-forked it and worked with distributions to replace the no-longer-maintained version with mine. It’s the OSS lifecycle, right? And that’s the best thing about OSS - if the maintainer loses interest, someone else can simply take over. And if no one does, maybe it wasn’t worth maintaining.

    I would like a platform that I know is going to stick around.

    This is so important! Especially for this purpose. Getting several people to join a platform and then put content on it introduces a lot of technical inertia. That’s why it’s important for me to reduce the odds of the project changing their terms of use; increasing costs; moving popular, free features to the “paid” column; and other shenanigans.

    On the other hand, something like Zusam, if the maintainer loses interest it will likely also die.

    See, I don’t believe this. It’s possible the project would die, but popular projects have so often lost their maintainers only for new people to step in. They fork it, or there’s a peaceful transition of ownership, and the project carries on. Yes, some just disappear into obscurity, but the popular ones tend to keep going, sometimes under other names: XFree86 to X.Org; OpenOffice to LibreOffice; ownCloud to Nextcloud; and so on. Increasingly, projects also add data migration paths from their popular competitors - many ActivityPub servers can import Mastodon account data, for instance.

    I do have reservations about HumHub, but it’s the first platform I’ve seen that even comes close to being a familiar feel for users.

    It does look pretty close to ideal for what we’ve been discussing; I need to install it and try it out, because so far all other options have failed in some way. There’s another forest of options in the blogging style, so I’m still optimistic, but I may try HumHub anyway.

    I’m considering the other idea of using Dokuwiki as well, which I guess comes in as being more similar to your blogging idea.

    Yeah, that was an interesting avenue; I suspect the user client experience will be where that fails for me. It can’t require any technical expertise.