• 2 Posts
  • 67 Comments
Cake day: January 23rd, 2022

  • When does Debian update a package? And how does it decide when to?

    Both of these are answered in depth on Debian’s releases page, but the short answer is:

    Debian developers work in a repo called “unstable” or “sid,” and you can get those packages if you so desire. They will be the most up to date, but also the most likely to introduce breaking changes.

    When the devs decide these packages are “stable enough” (i.e. breaking changes are highly unlikely), they get moved into “testing” (the release-candidate repo), where users can do QA for the community. Testing is the repo for the next version of Debian.

    When the release cycle hits the ~1.5-year mark, Debian maintainers introduce a series of incremental “freezes,” whereby new versions of packages slowly stop being accepted into the testing repo. You can see a table that explains each freeze milestone for Trixie (Debian 13) here.

    After all the freezes have gone into effect, Debian promotes the current testing release (currently Trixie, Debian 13) to become the new stable, and demotes the current stable to oldstable. Then the cycle begins again.

    As for upgrades to packages in the stable/oldstable repos: see the other comments here. The gist is that they will not accept any changes other than security patches and minor bug fixes, except for business-critical software that cannot simply be patched (e.g. Firefox).
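    If you want to follow one of these branches yourself, it comes down to which suite your APT sources point at. A minimal sketch of the relevant /etc/apt/sources.list lines (assuming the stock deb.debian.org mirror and just the main component; bookworm is stable at the time of writing):

    ```
    # Track ONE of these suites:

    # Stable, plus its security updates:
    deb http://deb.debian.org/debian bookworm main
    deb http://security.debian.org/debian-security bookworm-security main

    # Testing (what will become the next stable):
    deb http://deb.debian.org/debian testing main

    # Unstable/sid (where developers upload first):
    deb http://deb.debian.org/debian sid main
    ```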




  • The point of security isn’t just protecting yourself from the threats you’re aware of. Maybe there’s a compromise in your distro’s password hashing, maybe your password sucks, maybe there’s a kernel compromise. Maybe the torrent client isn’t a direct route to root, but one step in a convoluted chain of attack. Maybe there are “zero days” that are only called such because the clear web hasn’t been made aware yet, but they’re floating around on the dark web already. Maybe your passwords get leaked by a flaw in Lemmy’s security.

    You don’t know how much you don’t know, so you should implement as many good security practices as you can. It’s called the “Swiss cheese” model of security: you stack enough layers that the holes in one layer are blocked by another.

    Plus, keeping strong security measures in place on something that’s almost always connected to the internet is a good idea regardless of how cautious you think you’re being. It’s why modern web browsers are basically their own VM inside your PC these days, and it’s why torrent clients shouldn’t have access to anything besides the download/upload folders and whatever minimal set of network permissions they need.
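    To make that last point concrete, here’s a minimal sketch of confining a torrent client with systemd’s standard sandboxing directives (the unit name, binary, and /srv/torrents download path are hypothetical placeholders):

    ```
    # /etc/systemd/system/torrent.service (hypothetical unit)
    [Unit]
    Description=Sandboxed torrent client

    [Service]
    ExecStart=/usr/bin/transmission-daemon --foreground
    # No privilege escalation, no device access, no view of /home:
    NoNewPrivileges=yes
    PrivateDevices=yes
    ProtectSystem=strict
    ProtectHome=yes
    # The only writable path is the download directory
    # (a real unit would also need a writable state dir, e.g. StateDirectory=):
    ReadWritePaths=/srv/torrents
    # Plain IPv4/IPv6 sockets and nothing else:
    RestrictAddressFamilies=AF_INET AF_INET6

    [Install]
    WantedBy=multi-user.target
    ```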



  • BaumGeist@lemmy.ml to Linux@lemmy.ml · Linux middle ground? · 2 months ago

    Debian Testing has much more current packages and is generally fairly stable. Debian Unstable is rolling-release, and the name is mostly a misnomer (though it is subject to massive changes at a moment’s notice).

    Fedora is like Debian Testing: a good middle ground between current and stable.

    I hear lots of good things about Nix, but I still haven’t tried it. It seems to be a near-perfect blend of non-breaking and up-to-date.

    I’ll just add this: don’t believe everything you hear. Distro wars result in rhetoric that’s blown way out of proportion. Arch isn’t breaking down more often than a Cybertruck, and Debian isn’t so old that it yearns for the performance of Windows Vista.

    Arch breaks, but so does anything that pushes updates at the drop of a hat; it’s unlikely to brick your PC, and you’ll usually just need to reconfigure some settings.

    Debian has stability as its primary goal, which means the version numbers don’t look as big on paper; if you want to watch big numbers go up, play Cookie Clicker instead of micromanaging the world’s most powerful web browser.

    Try things out for yourself and see what fits; anyone who says otherwise is just trying to program you into joining their culture war.


  • You intentionally do not want people that you consider “below” you to use Linux or even be present in your communities.

    No, but I do want my communities to stay on-topic and not be derailed by Discourse™

    Who I consider beneath me is wholly unrelated to their ability to use a computer, and entirely related to their ability to engage with others in a mature fashion, especially those they disagree with.

    Most people use computers to get something done. Be it development, gaming, consuming multimedia, or just “web browsing”

    I realize most people use computers for more than web-browsing, but ask anybody who games, uses multimedia software, or develops how often they have issues with their workflow.

    (which you intentionally use to degrade people “just” doing that)

    No I don’t. Can you quote where I did so, or is it just a vibe you got when reading in the pretentious dickwad tone you seem to be projecting onto me?

    But stop trying to gatekeep people out of it

    I’m not; you’re projecting that onto me again. If you want to use Linux, use Linux. Come here and talk about how you use Linux, or ask whatever questions about Linux you want. If you don’t want to use Linux, or don’t want to talk about Linux, take it to the appropriate community.

    If keeping communities on-topic and troll-free is “gatekeeping,” then I don’t give a fuck how you feel about it.


  • I don’t think we do, but that’s a feature, not a bug. Here’s why:

    1. There was a great post a few days ago about how Linux is a digital 3rd Space. It’s about spending time cultivating the system and building a relationship with it, instead of expecting it to be transparent while you use it. This creates a positive relationship with your computer and OS, seeing it as more a labor of love than an impediment to being as productive as possible (the capitalist mindset).

    2. Nothing “just works.” That’s a marketing phrase. Windows and Mac only “just work” if the most you ever do is web browsing and note-taking in Notepad. Anything else and you invite cognitive dissonance: hold onto the delusion at the price of doing what you’re trying to do, or accept that these systems aren’t as good as their marketing? The same thread I mentioned earlier talked about how we give Linux more lenience because of the relationship we have with it, instead of seeing it as just a tool for productivity.

    3. Having a barrier of entry keeps general purpose communities like this from being flooded with off-topic discourse that achieves nothing. And no, I’m not just talking about the Yahoo Answers-level questions like “how to change volume Linux???” Think stuff like “What’s the most stargender-friendly Linux distro?” and “How do we make Linux profitable?” and “what Linux distro would Daddy Trump use?” and “where my other Linux simping /pol/t*rds at (socialist Stallman****rs BTFO)???” Even if there is absolutely perfect moderation and you never see these posts directly, these people would still be coming in and finding ways that skirt the rules to inject this discourse into these communities; and instead of being dismissed as trolls, there would be many, many people who think we should hear them out (or at least defend their right to Free Speech).

    4. Finally, it already “just works” for the aforementioned note-taking and web-browsing. The only thing that’s stopping more not so tech-savvy people is that it’s not the de facto pre-installed OS on the PC you pick up from Best Buy (and not Walmart, because you want people to think you’re tech-savvy, so you go to the place with a dedicated “geek squad”). The only way it starts combating Windows in this domain is by marketing agreements with mainstream hardware manufacturers (like Dell and HP); this means that the organization responsible for representing Linux would need the money to make such agreements… Which would mean turning it into a for-profit OS. Which would necessitate closing the source. Which would mean it just becomes another proprietary OS that stands for all that Linux is against.




    You’ve defined yourself into an impossible bind: you want something extremely portable and universal but with a small disk footprint, and you want it to be general-purpose and versatile.

    The problem is that to be universal and general-purpose, you need a lot of libraries: libraries to interact with whatever kind of system the script might land on (and the peculiarities of each), and libraries to perform whatever kinds of interactions with those systems you specify.

    E.g. under the hood, Python’s open("<filename>", 'r') ends in a system call to the kernel. But which kernel is that? Linux? BSD? Windows NT? Android? Mach?

    What if you want your script to run a CLI command in a subshell? Should it call “cmd”, “sh”, or “powershell”? Okay, okay, now all you need it to do is show the contents of a file… but is the command “cat” or “type” or “Get-Content”?
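    To make that concrete, here’s a sketch of what “portable” actually ends up looking like in Python (show_file is a hypothetical helper; the point is that the per-OS branching never goes away):

    ```
    import platform
    import subprocess

    def show_file(path: str) -> None:
        """Print a file's contents using the platform's native tool."""
        system = platform.system()
        if system in ("Linux", "Darwin"):
            subprocess.run(["cat", path], check=True)
        elif system == "Windows":
            # "type" is a cmd.exe builtin, so it needs cmd to host it
            subprocess.run(["cmd", "/c", "type", path], check=True)
        else:
            raise NotImplementedError(f"Don't know how to show files on {system}")
    ```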

    Or maybe you want to do more than simple read/write to files and string operations. Want to have graphics? That’s a library. Want serialization for data? That’s a library. Want to read from spreadsheets? That’s a library. Want to parse XML? That’s a library.

    So you’re looking at a single binary that’s several GBs in size, either as a standalone or a self-extracting installer.

    Okay, maybe you’ll only ever need a small subset of libraries (basic arithmetic, string manipulation, and file ops, all on standard glibc gnu systems ofc), so it’s not really “general purpose” anymore. So you find one that’s small, but it doesn’t completely fit your use case (for example, it can’t parse uci config files); you find another that does what you need it to, but also way too much and has a huge footprint; you find that perfect medium and it has a small, niche userbase… so the documentation is meager and it’s not easy to learn.

    At this point you realize that any language that’s both easy to learn and powerful enough to manage all instances of some vague notion of “computer” will necessarily evolve to being general purpose. And being general purpose requires dependencies. And dependencies reduce portability.

    At this point your options are: make your own language and interpreter that does exactly what you want and nothing more (so all the dependencies can be compiled in), or decide which criteria you are willing to compromise on.



  • I have a Libre LePotato, Pinebook and Pinephone. They’re fine for most of my use cases, but they don’t handle games too well. They are also not great for VMs or emulation, and no chance in hell would I use any for my home media server.

    That being said, I’m starting to see ARM desktop CPUs in my feeds, and I think one of those would be fine for everything but gaming (which is more an issue of the availability of native binaries than of outright performance). TBH, at that price point, with off-chip memory and a discrete GPU, I don’t see much reason to go with ARM; maybe the extra cores, but I can’t imagine much of the electrical efficiency that SoCs offer survives that design.


    I’ve been running Debian stable on my decade-old desktop for about 3 years, and on my equally old IdeaPad for about 5. In all that time I’ve had an update break something only once, and it was the Nvidia driver that did it. A patch was released within three days.

    Debian epitomizes OS transparency for me. Sure, I can still customize the hell out of it and turn it into a frankenix machine, but if I don’t want to, I can be blissfully unaware of how my OS works, and focus only on important computing tasks (like mindlessly scrolling lemmy at 2 am).


    I use virt-manager. It works better than VirtualBox did at the time (back when v6.1 was still the main release branch), it’s easier, and it doesn’t involve hitching yourself to Oracle.

    VMware may be “free,” but it ain’t free. And if you don’t care about software freedom, why choose Linux over Windows or macOS? Also, Workstation Player lacks a lot of functionality, which makes it a poor hypervisor: only one VM can be powered on at a time, and configuration is severely limited. Plus, the documentation is mediocre compared to the official virt-manager docs.
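    For anyone curious what the same libvirt/KVM stack looks like without the GUI, here’s a minimal sketch using virt-install (the VM name, sizes, and ISO path are placeholders):

    ```
    # Create and boot a new VM on the same stack virt-manager drives
    virt-install \
      --name testvm \
      --memory 4096 \
      --vcpus 2 \
      --disk size=20 \
      --cdrom /path/to/installer.iso \
      --os-variant debian12
    ```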



    I replaced Windows on my laptop with Ubuntu, then stopped using it after realizing how unimpressed I was with the difference. Years later I took the OSCP course, and they required using Kali.

    From there I fell in love. Things that would have taken hours and weird 3rd-party installers to do in Windows came with the OS or were in the official repos. The CLI showed me unimaginable power over every bit of the computer, whereas in Windows the Command Prompt is pretty mediocre; PowerShell is better, but is more about data processing than running software. Linux gets SSH and Python installed with one command, while Windows’ graphical installers are a bloated nightmare. And there wasn’t random shitty third-party software installed by an OEM who struck a deal with the OS maintainers.

    After that, it was a cascade of disillusionment. Those nasty 3rd-party apps I didn’t install showing up in my start menu? Actually ads; I was just using cognitive dissonance to avoid admitting that. And the proprietary programs aren’t better: they update more frequently just to introduce ads, harvest more data, and change their layout to make it seem like they did anything to help the end users.

    Why does changing any meaningful settings require tampering in the registry? Why is this low level stuff documented so poorly? Why can’t I turn off telemetry completely? Why can’t I check what code is running in the kernel that I purchased and am running ON MY COMPUTER??? IT’S MY COMPUTER, NOT MICROSOFT’S. Why the FUCK should I let them run code that I can’t legally review, much less change, on it?

    If someone offered you a meal but refused to tell you about any of the ingredients, you just wouldn’t eat it. Not “you’d be suspicious”; it goes beyond that: you’d be too suspicious to eat it. If someone offered you a home security system that would only “spy on you minimally,” you’d tell them where they could stick it. If it came with your house, you’d remove it immediately. If either of those people tried to charge you for it, you’d laugh in their face.

    Yet for some reason, when it’s our computers doing the spying and whatever else we can’t verify, we’ve learned to just put up with it? This is BULLSHIT.

    And I have too much pride to be treated like a mark, I won’t take being scammed lying down anymore. I’m not a hapless dipshit who just lets people have their way with her because it’s “too hard to learn new things.” I’ve always said I have some integrity to protect, so I better prove it or forever be a hypocrite.

    I already use only Linux at home, I’d have to get my company to switch to let me run it at work.


  • BaumGeist@lemmy.ml to Linux@lemmy.ml · Qustions · 3 months ago

    I suggest reading through multiple answers even though everyone has answered all your questions; that way you get the most complete picture. With that in mind, here’s my two cents:

    1. Yes. Search for “widgets” on GNOME’s official extensions site: https://extensions.gnome.org/

    2. Depends on what you mean by “problematic.” My laptop refused to go to sleep because of a setting in the wifi card, but since I changed it I haven’t had any issues. You may also find that some of your hardware is nonstandard, and therefore requires extra steps during installation.

    3. What do you mean by “minimum”? I installed Debian headless, starting with nothing but a command line and the base system utilities; but maybe in your mind “minimal” just means a graphical desktop and nothing more. If you did mean that, you could try something like MATE for your desktop environment, or XFCE if you want to learn by customizing. If you’re feeling really adventurous, use SwayWM.

    4. Depends on how it was installed, but it’s generally easy. Most of the time it will be as simple as running the uninstall command for whatever package manager installed it.

    5. “Rooting” a device refers to gaining administrator (“root”) access on a locked-down device, usually by flashing modified firmware; unless your laptop is a Chromebook, you probably don’t need to worry about that. Dual-booting Windows and Linux won’t stop Windows from updating, nor stop whatever application manages your firmware from working in Windows, if that’s what you’re worried about.

    6. It depends on your distro and its package manager(s). On Debian it’s as easy as sudo apt install <desktop environment>, then logging out, changing which DE you’re logging into, and logging back in. Most distros work that way (see the sketch after this list).

    7. Lazy answer: don’t worry about it, and don’t worry about it. If you’re the type who wants their PC to “just work,” it’s behind-the-scenes stuff that will never apply to you. If you’re prepared to get down in the weeds, occasionally break things, and customize every aspect of your OS, then you’ll learn when it’s relevant. If you’re saying “Lazy question” and not showing that you already did some research on the topics, you’re most likely in the former camp; this isn’t a value judgment, just an observation.
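    Expanding on #6, here’s what that looks like on Debian, using XFCE as the example (the task-* metapackages pull in a complete desktop; which ones exist depends on your release):

    ```
    # Install the XFCE desktop task, then pick the new session at the login screen
    sudo apt install task-xfce-desktop
    ```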

    But, since we’re all still nerds here regardless of what we’re nerdy about, and since learning almost never hurts, I’ll throw some vocab at you to get you started:

    Wayland is a specification of how software should display things on the screen; it’s the generic blueprint for how display servers and their clients should behave. Wayland seeks to replace the X Window System specification, and specifically the popular Xorg server implementation.

    Docker is a containerization platform (a software ecosystem). Containers are essentially a lightweight subset of virtual machines (VMs): guest operating systems that run in an environment walled off from your host operating system. On Linux, kernel features like namespaces, cgroups, and chroots are used to achieve this effect. Containers tend to use fewer hardware resources than hypervisor-hosted VMs, but they also tend to be single-purpose systems.
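    To make the container idea tangible, a minimal example (assuming Docker is installed and its daemon is running):

    ```
    # Start a throwaway Debian container: its own filesystem and process
    # namespace, but sharing the host's kernel (no hypervisor involved)
    docker run --rm -it debian:stable bash
    ```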