• 0 Posts
  • 18 Comments
Joined 1 year ago
Cake day: August 15th, 2023

  • I am curious as to why they would offload any AI tasks to another chip. I just did a super quick search for upscaling models on GitHub (https://github.com/marcan/cl-waifu2x/tree/master/models) and they are tiny as far as AI models go.

    It's the rendering bit that takes all the complex maths, and if that is reduced, that would leave plenty of room for running a baby AI. Granted, the method I linked to was only doing 29k pixels per second, but they said it wasn't GPU optimized. (FSR4 is going to be fully GPU optimized, I am sure of it.)

    If the rendered image is only 85% of a 4K image, the remaining ~15% is roughly 1.2 million pixels that need to be generated, and it still seems plausible to keep everything on the GPU.
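    Quick back-of-the-envelope check of that pixel math (a rough sketch in Python; treating "85%" as a share of the total pixel count is my assumption):

    ```python
    # If 85% of a 4K frame is rendered natively, how much is left
    # for the upscaler to fill in?
    WIDTH, HEIGHT = 3840, 2160            # 4K UHD
    total_pixels = WIDTH * HEIGHT         # 8,294,400 pixels per frame

    rendered_fraction = 0.85              # assumed natively rendered share
    remaining = total_pixels * (1 - rendered_fraction)

    print(f"Total 4K pixels:  {total_pixels:,}")
    print(f"Left to generate: {remaining:,.0f}")  # ~1,244,160 (~1.2M)
    ```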

    With all of that blurted out, is FSR4's AI going to be offloaded to something else? It seems like there would be significant technical challenges in creating another data bus that would also have to sync with memory and the GPU for offloading AI compute at speeds that didn't risk creating additional lag. (I am just hypothesizing, btw.)


  • remotelove@lemmy.ca to Selfhosted@lemmy.world · HDD data recovery · 6 points · 3 months ago

    It was on old 3.5" drives a long time ago, before anything fancy was built into them. It was a seriously rough working environment anyway, so we saw a lot of failed drives. If strange experiments (mainly for lulz) didn't get the things working again, the next option was to see if a sledgehammer would fix the problem. Funny thing… that never worked either.

  • Maybe? Bad cables are a thing, so it’s something to be aware of. USB latency, in rare cases, can cause problems but not so much in this application.

    I haven’t looked into the exact ways that bad sectors are detected, but it probably hasn’t changed too much over the years. Needless to say, info here is just approximate.

    However, marking a sector as bad generally happens at the firmware/controller level. I am guessing that a write is quickly followed by a verification, and if the controller sees an error, it will just remap that particular sector. If HDDs use any kind of parity checks per sector, a write test may not be needed.
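    If I had to guess at that flow, it would be something like the sketch below (pure illustration in Python; the flaky-sector simulation, spare pool, and every name here are made up, and real firmware does this with ECC in hardware):

    ```python
    # Hypothetical write-verify-remap loop, roughly how I imagine
    # a controller handling a bad sector. Not any real firmware.
    disk = {}                                # physical sector -> bytes
    flaky = {42}                             # sectors that corrupt writes
    spare_pool = [10_000, 10_001, 10_002]    # reserved spare sectors
    remap = {}                               # logical -> physical remaps

    def raw_write(sector, data):
        # A flaky sector drops the last byte, simulating a media defect.
        disk[sector] = data[:-1] if sector in flaky else data

    def raw_read(sector):
        return disk.get(sector)

    def write_sector(logical, data):
        physical = remap.get(logical, logical)
        raw_write(physical, data)
        if raw_read(physical) != data:       # verify right after the write
            physical = spare_pool.pop(0)     # reallocate to a spare sector
            remap[logical] = physical
            raw_write(physical, data)

    write_sector(42, b"hello")
    assert raw_read(remap[42]) == b"hello"   # remapped transparently
    ```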

    Tools like CHKDSK likely step through each sector manually and perform read tests, or just tell the controller to run whatever test it does on each sector.
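    A userspace surface scan can be as crude as stepping through the device and attempting reads, something like this sketch (my illustration only; the device path is a placeholder, 512-byte sectors are assumed, and real tools like CHKDSK or badblocks are far more careful):

    ```python
    import os

    SECTOR = 512                 # assumed sector size
    DEVICE = "/dev/sdX"          # placeholder block device path

    def scan(device, sector_count):
        """Naive read test: report every sector the kernel can't read."""
        bad = []
        fd = os.open(device, os.O_RDONLY)
        try:
            for n in range(sector_count):
                os.lseek(fd, n * SECTOR, os.SEEK_SET)
                try:
                    os.read(fd, SECTOR)
                except OSError:  # read failed -> candidate bad sector
                    bad.append(n)
        finally:
            os.close(fd)
        return bad
    ```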

    My point is that OS-level interference or bad cables are unlikely to cause the controller to mark a sector as bad. Now, if bad data gets written to disk because of a bad cable, the controller shouldn't care; it just sees data and writes data. (That would be rare as well, but possible.)

    What you will see is latency. USB can be orders of magnitude slower than SATA, because buffers and wait states bridge the speed difference. That latency isn't going to cause physical problems, though.

    My overall point is that several independent software and firmware layers would all have to be completely broken for a SATA drive to erroneously mark a sector as bad due to a slow conversion cable. Sure, it could happen, and that is why we have software that can attempt to repair bad sectors.

  • This bubble is quite bubbly. There is an AI company for anything and everything now. The market is almost fully saturated with “AI” everything.

    Just like the web bubble, all of the insta-AI shops need to fail so the real tech can grow. AI is never going to go away, but most of the scam companies will fail in due time.

    We might have one big consolidation, or several. The hype will die and the quick money will disappear. It's the same story every time. Once the magic AI box stops shitting out dollar bills, we should be good.