Is blue hair a fetish now?
Undertale brainrot
Pronouns: they/them is preferred but you can call me something else if you really want to
Are Linux kernel lifespans usually that short?
I really like Lunatask. It’s a task/habit management app, kind of like Todoist, but it works better for me personally. The premium version is quite expensive, but the free one is perfectly workable. It’s still in development, though, so a lot of features are missing (you can’t set a time for a task, for example, which I find ridiculous).
Also Ghostwriter, it’s a really nice minimalistic markdown editor. I wish it was a bit more customizable but I guess I could try emacs for that.
Setting up nvidia drivers wasn’t an issue? Well then I guess I was stupid or just extremely unlucky. I ran into so many driver issues on Mint it’s ridiculous.
I always thought a kernel panic ended the graphical session… Turns out I was wrong. Again.
Something that gives you a reminder after a certain amount of time using a specific program (a game, for example). I wanted to make it myself, but my coding skills are absolute garbage, so it probably wouldn’t work very well.
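For what it’s worth, the idea above could be sketched in fairly little code. This is just a rough outline, not a polished tool: it assumes a Linux desktop where `/proc` is available and `notify-send` exists, and the process name (`"steam"` here) is a placeholder I made up.

```python
# Sketch of a "you've been using this program too long" reminder.
# Assumes Linux (/proc) and notify-send; process name is a placeholder.
import subprocess
import time
from pathlib import Path

CHECK_INTERVAL = 60   # seconds between checks
LIMIT_MINUTES = 120   # remind after this much accumulated use

def process_running(name: str) -> bool:
    """Return True if any /proc/<pid>/comm matches the given name."""
    for comm in Path("/proc").glob("[0-9]*/comm"):
        try:
            if comm.read_text().strip() == name:
                return True
        except OSError:
            continue  # process exited between listing and reading
    return False

def notify(message: str) -> None:
    """Pop up a desktop notification via notify-send."""
    subprocess.run(["notify-send", "Time reminder", message])

def watch(name: str) -> None:
    """Poll for the program and remind once the time limit is reached."""
    used = 0  # seconds the program has been seen running
    while True:
        if process_running(name):
            used += CHECK_INTERVAL
            if used >= LIMIT_MINUTES * 60:
                notify(f"You've used {name} for {used // 60} minutes.")
                used = 0  # reset the counter after reminding
        time.sleep(CHECK_INTERVAL)

# Example (loops forever, so not called here): watch("steam")
```

It only counts time while the process is actually running, so pausing the game pauses the timer too. Polling `/proc` every minute is crude but good enough for a reminder.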
Okay so this might sound kinda crazy but
I think it’s broken
I still find them cool. I’m kind of into retro phones I guess
I probably just fell for the most obvious ragebait in existence
but in the unlikely event that you’re actually being serious, then owning everything would probably wreck your entire system at some point, whether directly or not. and looking through the GitHub page it doesn’t seem that hard to install to me, just copy-paste one command and you’re done with it… idk, never actually had the need to use it.
chatgpt only generates text. that’s how it was designed to work. it doesn’t care whether the text it’s generating is true, or even whether it makes sense. so sometimes it will generate untrue statements (with the same confidence as the ‘linux gatekeepers’ you mentioned, except with no comments underneath to correct the response), no matter how well you train it. and if there’s enough wrong information in the dataset, it will start repeating it in its responses, because again, its only real purpose is to pick the next word in a string based on the training data it got. sometimes it gets things right, sometimes it doesn’t, and we can’t just blindly trust it. pointing that out is not gatekeeping.
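the “pick the next word based on training data” point can be shown with a toy example. this is obviously nothing like a real language model, just a tiny bigram sampler over a made-up corpus, but it has the same failure mode in miniature: it produces fluent-looking output with zero regard for whether it’s true.

```python
# Toy bigram "language model": learns which word follows which in a tiny
# corpus, then generates text by repeatedly sampling a plausible next word.
# Plausible-looking output, no notion of truth.
import random
from collections import defaultdict

corpus = "the kernel panics the kernel boots the driver panics".split()

# Count which words follow each word in the corpus.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start: str, length: int = 6) -> str:
    """Generate text by picking a random observed successor each step."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break  # dead end: no word ever followed this one
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. something like "the kernel boots the driver panics ..."
```

every generated sentence is built only from word pairs the model has seen, so it always *sounds* like the corpus, whether or not the claim it ends up making is true. scale that up a few billion parameters and you get the behavior described above.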