Happy to report that the 555 beta still just gives me a black screen before rebooting the computer. Though after a few attempts it did display some garbled graphical shit.
Like. I can’t even run Wayland on my 4090. It’s a black screen. This happens with manjaro kde. With mint I can at least see my (frozen, unresponsive, unusable) desktop.
This all sounds cool and stuff but I kind of wish people would, like, shut the fuck up about Wayland? My understanding is that my experience is far from unique. People that own PCs have nvidia cards. Unless “the year of the Linux desktop” involves everyone conjuring amd cards that magically have cuda cores somehow out of their asses then nothing about Wayland really matters to us.
You can “get an amd card” at me all you want, but here’s the thing: I don’t fucking want one. I use my cuda cores. It’s why I spent as much as I did on a 4090.
I guess 555 is supposed to make Wayland work with nvidia?
I mean, look. Using an nvidia card with Linux, and getting the requisite drivers working, can be an experience akin to having your vas deferens ripped out by an aging badger. I get it. But until I can nvidia while I Wayland I just don’t care. And I’m not alone.
Oh yeah the keyboard is awful.
But it doesn’t spy on me so. Everyone else gets to suffer.
I never understood the appeal of discord.
Like. It’s IRC with voice chat. I’m sure making a voice chat client is not trivial but we’ve had IRC for like 30 fucking years.
I’ve been fiddling with home assistant’s voice thing a bit and like everything home assistant, the process has been frustrating and bordering on Kafkaesque. I bought these atom echo things they recommend, which don’t seem to make the best google home replacements, and I’m struggling to figure out how to get home assistant to pipe the sound out of another device, thereby making them useful.
Admittedly this may be simpler if all I was looking to do is say things and have stuff happen in a default voice model, but I fine-tuned my own TTS voice model(s) and am looking to be able to use them for controlling homeass as well as for general inference when I feel like it.
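For what it’s worth, the one lead I have on the “pipe sound out of another device” problem: newer home assistant versions have a `tts.speak` service that aims TTS output at whatever media player you point it at. A sketch, with entity IDs I made up (yours will be whatever your TTS engine and speakers actually register as):

```yaml
# Sketch of a service call (automation action or Developer Tools → Services).
# tts.piper and media_player.living_room are hypothetical entity IDs.
service: tts.speak
target:
  entity_id: tts.piper
data:
  media_player_entity_id: media_player.living_room
  message: "The badger has left the building."
```

Whether this plays nice with a fine-tuned model depends on how that model is exposed as a TTS entity, which is a whole other yak to shave.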
I’ve spent some time, not a lot but some, trying to find out what devices can be media players and under what conditions, and how (or whether) you can use esphome to pipe audio through the media player / use USB mics as microphones for the voice stuff.
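From what I’ve pieced together, esphome can expose both a microphone and a media player on the atom echo, and the `voice_assistant` component gets pointed at each. A rough sketch of the relevant chunk — the pins and IDs here are my recollection of the stock atom echo setup, not gospel, so check against an actual config before flashing anything:

```yaml
# esphome sketch for an atom echo; pins/IDs are assumptions
i2s_audio:
  i2s_lrclk_pin: GPIO33
  i2s_bclk_pin: GPIO19

microphone:
  - platform: i2s_audio
    id: echo_mic
    adc_type: external
    i2s_din_pin: GPIO23
    pdm: true  # the echo's mic is PDM, if memory serves

media_player:
  - platform: i2s_audio
    id: echo_out
    name: "Echo Media Player"
    dac_type: external
    i2s_dout_pin: GPIO22
    mode: mono

voice_assistant:
  microphone: echo_mic
  media_player: echo_out  # responses come out the media player entity
```

USB mics are a different story — esphome runs on the esp32 itself, so a USB mic would need some other box doing the capturing.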
I’m kind of at a loss as far as understanding what the actual intention was for home assistant’s year of the voice, so I’ve been thinking that maybe offloading some of my goals to a container or VM on the server running home assistant on proxmox may be a better path forward. I came across this post just in time it seems.
And if you are?