I’ve not tried GPT4ALL but Ollama combined with Open WebUI is really great for selfhosted LLMs and can run with podman. I’m running Bazzite too and this is what I do.
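For reference, a minimal sketch of that stack under podman — the container names, published ports, and volume names here are my assumptions, not a definitive setup (Ollama serves its API on 11434 by default, and Open WebUI listens on 8080 inside the container):

```shell
# Ollama: pulls and serves models; the named volume keeps downloaded models.
podman run -d --name ollama \
  -p 11434:11434 \
  -v ollama:/root/.ollama \
  docker.io/ollama/ollama

# Open WebUI: points at the Ollama API via OLLAMA_BASE_URL.
# host.containers.internal resolves to the host on recent podman versions.
podman run -d --name open-webui \
  -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://host.containers.internal:11434 \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:main
```

Then the web UI is reachable on port 3000 of the host.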
I see there is an M.2 slot too, with what looks to be a Kingston SSD.
I’m still not sure what era this laptop is from. It might be a SATA M.2.
Wayland was subject to “first mover disadvantage” for a long time. Why be the first to switch and have to solve all the problems? Instead be last and everyone else will do the hard work for you.
But without big players moving to it those issues never get fixed. And users rightly should not be forced to migrate to a broken system that isn’t ready. People just want a system that works, right?
Eventually someone had to decide it was ‘good enough’ and try an industry wide push to move away from a hybrid approach that wastes developer time and confuses users.
I’ve had exactly this happen to me. It was my own fault, but it took a bit of work to figure out.
I don’t really engage with the online mechanics in Elden Ring… Maybe I should? I’ve put hundreds of hours into the game otherwise. I rate and leave messages but I’ve never summoned help for co-op or invaded people except for Varre’s quest where I always just get obliterated by people who are way better prepared than me.
Backups need to be reliable and I just can’t rely on a community of volunteers or the availability of family to help.
So yeah I pay for S3 and/or a VPS. I consider it one of the few things worth it to pay a larger hosting company for.
I’m from the Midwest US and I know there are words and sounds I pronounce with a Midwestern accent but I can still type and spell them correctly.
If’n I typ lik dis den o’course people gonna think I hev the big dumb or that I’m a mole from a Redwall book.
Unironically, PowerShell is great, and learning it has propelled me through the last 12 years of my career as a sysadmin. My biggest complaints with it are generally Windows complaints, or due to legacy PowerShell modules.
I intentionally do not host my own git repos mostly because I need them to be available when my environment is having problems.
I make use of local runners for CI/CD, though, which is nice, but git is one of the few things I need to not have to worry about.
No need to optimize when you can just push people to upgrade their hardware more frequently so you make fat stacks of cash from OEMs.
Well it may not be accurate or effective, but at least it’s expensive.
Do you have any links or guides that you found helpful? A friend wanted to try this out but basically gave up when he realized he’d need an Nvidia GPU.
I’ve been testing Ollama in Docker/WSL with the idea that if I like it I’ll eventually move my GPU into my home server and get an upgrade for my gaming PC. When you run a model it has to load the whole thing into VRAM. I use the 8 GB models, so it takes 20-40 seconds to load the model, and then each response is really fast after that and the GPU hit is pretty small. After five minutes (I think, by default) it will unload the model to free up VRAM.
Basically this means that you either need to wait a bit for the model to warm up or you need to extend that timeout so that it stays warm longer. That means that I cannot really use my GPU for anything else while the LLM is loaded.
I haven’t tracked power usage, but besides the VRAM requirements it doesn’t seem too intensive on resources, but maybe I just haven’t done anything complex enough yet.
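If you want the model to stay warm, Ollama exposes a keep-alive setting for exactly this; here’s a sketch — the model name and durations are just example values:

```shell
# Server-wide: keep loaded models in VRAM for an hour instead of
# the default five minutes before unloading.
OLLAMA_KEEP_ALIVE=1h ollama serve

# Or per request, via the keep_alive field on the API
# ("-1" would mean keep it loaded indefinitely):
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1:8b",
  "prompt": "hello",
  "keep_alive": "30m"
}'
```

The trade-off is the same one described above: a longer keep-alive means the VRAM stays occupied and unavailable for anything else.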
I’ve been using ZFS now for a few years for all my data drives/pools but I haven’t gotten brave enough to boot from it yet. Snapshotting a system drive would be really handy.
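For anyone curious, the snapshot workflow itself is pretty simple even without booting from ZFS — the pool and dataset names below are made up for illustration:

```shell
# Take an instant, copy-on-write snapshot of a dataset.
zfs snapshot tank/data@before-upgrade

# List existing snapshots.
zfs list -t snapshot

# Roll the dataset back to that snapshot if something goes wrong.
zfs rollback tank/data@before-upgrade
```

Root-on-ZFS gets you this same workflow for the system drive, which is where the boot environment tooling comes in.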
I thought it was just a meme.
I see way more complaints about ‘elitist Arch users’ than I ever do comments from actual elitist Arch users.
DuckDNS is great… but they have had some pretty major outages recently. No complaints, I know it’s an extremely valuable free service but it’s worth mentioning.
Cloudflare has an api for easy dynamic dns. I use oznu/docker-cloudflare-ddns to manage this, it’s super easy:
docker run \
-e API_KEY=xxxxxxx \
-e ZONE=example.com \
-e SUBDOMAIN=subdomain \
oznu/cloudflare-ddns
Then I just make a CNAME for each of my public facing services to point to ‘subdomain.example.com’ and use a reverse proxy to get incoming traffic to the right service.
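Under the hood, that container is just calling Cloudflare’s v4 DNS API. A hedged sketch of the equivalent by hand — the zone/record IDs, token variable, and the IP-lookup service are placeholders, not part of the container’s actual setup:

```shell
# Update an existing A record to the current public IP.
# ZONE_ID, RECORD_ID, and CF_API_TOKEN must be filled in from your
# Cloudflare dashboard; ifconfig.me is just one way to get your IP.
curl -X PATCH \
  "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data "{\"content\": \"$(curl -s https://ifconfig.me)\"}"
```

Running something like this on a timer is all dynamic DNS really is; the container just automates the lookup and the update loop.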
Google is stuck because they can’t actually improve user experience without threatening their revenue model.
They also specifically warn that it’s not optimized for running in a VM right now. It’s still not quite ready on bare metal, and even less so in a VM.
I have yet to have any success with Bottles but I assume it’s because I don’t know what I’m doing and I’m trying with software known to be difficult.
I remote into a Windows PC for Fusion 360 and Affinity suite but if I could get those working on Linux I’d be in really good shape.