  • Me mentioning the Snapdragon X Elite is the point: it doesn't have good battery life in the use case this whole topic is about (gaming). Your comment sounds like you read the reviews but didn't understand which functions excelled in battery life and which ones didn't.

    The whole point is that just because something is ARM doesn't automatically make it more efficient in all use cases. What's the point of a gaming device that's less efficient while it's gaming?


  • The thing is, people are attributing it to ARM rather than to how Apple handles their OS. That's the sole reason the Snapdragon X Elite wasn't that great on Windows: ultimately the problem wasn't x86 vs ARM, it was how Windows handled low-powered operations. If Valve makes a piece of hardware that's ARM based, they clearly aren't going to be using OSX for any reason. You can tell by the discussion: people can easily name which generation of processor runs in their MBP, but fail to mention the CPU models for the AMD or Intel powered machines, which gives the aura of an equivalent playing field when it fundamentally isn't.

    Just because Apple, with their heavily controlled OS space, can make the transition to ARM work flawlessly for battery life doesn't mean it applies to all other ARM devices. ARM definitely does some aspects better, but it's not better by default in every situation, because of the software environment that surrounds the hardware. The power efficiency only exists if applications are recompiled to target that hardware. For a gaming device that's a problem, because very few games Valve would target have an ARM build, so you run into the same problem emulators have. Something like Proton is a translation layer and suffers much less overhead. It's why mobile phones can do Switch emulation (an ARM-to-ARM translation layer) but no phone will remotely do PS3 emulation (ARM to IBM's Cell processor), despite the two consoles being roughly the same in performance.

    It's the sole reason Apple's game porting toolkit doesn't run games the way Proton does (where Proton can legitimately run games better than the original if the game uses an older API). An architecture change isn't just a translation layer; there's a layer of emulation to it, which can be hardware accelerated if done right, but is never 1:1 the way a translation layer is. A toy sketch at the end of this comment illustrates the difference.

    Want to test how your MBP's battery life fares in an environment not entirely tailored by Apple? Run Asahi Linux, for example, and you will immediately notice the battery life isn't the same. (Asahi Linux is a Fedora based distro tailored for M series machines.)
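    A toy sketch of the two approaches (hypothetical names, grossly simplified, nothing like real Proton or a real emulator) to show why a translation layer is cheap while cross-architecture emulation pays a per-instruction cost:

    ```python
    import time

    # --- Translation layer (Proton-style): map the foreign API call to a
    # native one, then the actual work runs at full native speed. ---
    def native_blit(pixels):
        return sum(pixels)                 # stand-in for real native work

    API_MAP = {"D3D_Blit": native_blit}    # hypothetical call mapping

    def translated_call(name, arg):
        return API_MAP[name](arg)          # overhead: one dict lookup

    # --- Cross-architecture emulation: every guest instruction has to be
    # fetched, decoded, and executed in software. ---
    def emulate(program, pixels):
        acc = 0
        for op, operand in program:        # per-instruction dispatch loop
            if op == "ADD":
                acc += pixels[operand]
        return acc

    pixels = list(range(100_000))
    program = [("ADD", i) for i in range(len(pixels))]  # guest code: one ADD per value

    t0 = time.perf_counter(); translated_call("D3D_Blit", pixels)
    t1 = time.perf_counter(); emulate(program, pixels)
    t2 = time.perf_counter()
    print(f"translated: {t1 - t0:.4f}s, emulated: {t2 - t1:.4f}s")
    ```

    Same work either way, but the emulated path burns cycles on dispatch for every single instruction, which is the overhead a handheld's battery would be paying for.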


  • Keep in mind that for the longest time Intel's processors were still on Intel's fab. A huge chunk of the efficiency/performance gains was less x86 > ARM and more Intel fab > TSMC. To a lesser extent, compare the Snapdragon 8 Gen 1 to the Snapdragon 8+ Gen 1: Samsung wasn't as far behind TSMC as Intel was at the time, and the two designs are basically the same chip implemented at two different fabs.

    It also involves how manufacturers decide to handle price/performance. Most laptop manufacturers see any performance lost to clocking low as bad for sales, so they aggressively clock higher for performance, causing louder fans. Apple takes the opposite approach and tunes for noise, because they control what people see on their graphs (misleading ones, since they essentially never include anything faster than their own parts) and ask users to pay top dollar for the top-tier fab runs. Apple essentially has top cut priority at TSMC, so they always get the bleeding-edge efficiency nodes before anyone else, at a higher cost to them which is then passed on to the consumer. (See the sketch below for why that last bit of clock is so expensive.)
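    Rough sketch of why that clocking choice matters so much: a simplified dynamic-power model, P ~ C·V²·f, with made-up operating points, not vendor data:

    ```python
    # Dynamic power rises with voltage squared times frequency, and near the
    # top of the curve voltage has to climb along with clock, so the last
    # few hundred MHz cost disproportionate power (and fan noise).
    def dynamic_power(freq_ghz, volts, capacitance=1.0):
        return capacitance * volts**2 * freq_ghz

    quiet  = dynamic_power(freq_ghz=3.0, volts=0.90)  # noise/efficiency tuning
    pushed = dynamic_power(freq_ghz=3.6, volts=1.10)  # benchmark-chasing tuning

    print(f"clock gain: {3.6 / 3.0 - 1:.0%}")         # 20% more frequency...
    print(f"power cost: {pushed / quiet - 1:.0%}")    # ...for ~79% more power
    ```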


  • The lighter workloads aren't Stardew Valley level workloads; they're more like watching-a-video level loads. Just being ARM doesn't outright make it battery friendly; it's in non-application use (e.g. sleep, super basic apps) where the battery life is better. The Qualcomm laptop reviews kind of show that: the platform's battery life is mildly better than last-gen AMD/Intel chips and worse under gaming. Qualcomm rushed the release because they knew they needed to launch before AMD's Strix Point and Intel's Lunar Lake to make themselves look more efficient (the X Elite was on TSMC N4; Meteor Lake was split across Intel 4 and TSMC N5/N6; Phoenix and Strix were on N4-class nodes; and they knew AMD would have had the highest NPU performance had it released first).

    The BIGGEST flaw of the ARM based designs that aren't Tegra is that their graphics drivers are inferior to both Nvidia's and AMD's, and graphics drivers play a huge role in whether something works correctly or not.


  • I mean, better efficiency is one thing, but the "so much better power efficiency" isn't that large, especially under load. ARM's major advantage is efficiency in lighter workloads, which is kind of the antithesis of what a gaming device does.

    What ARM based designs excel at is workloads that use the purpose-built hardware in them, which is why the modems and camera image processors on Snapdragon chips are better than x86: x86 designs don't really have dedicated hardware for those functions fully integrated (Intel CPUs do to some extent).


  • The idea of it improving battery is that generating frames is less performance intensive than rendering at that framerate (e.g. a game capped at 60 fps with frame gen doubling it to 120 consumes less power than running the same game natively at 120 fps). It's slightly less practical because frame generation only makes sense when the base framerate is high enough (ideally above 60) to avoid a lot of screen artifacting. So in practical use, this only "saves battery" in the context where you have a 120 Hz+ screen and choose to cap the framerate to 60-75 fps.

    If you're serious about min-maxing battery against performance at a realistic level, cap the screen at 40 Hz: in frame time it sits exactly halfway between 30 and 60 fps, but it only requires 10 more fps than 30, which is a very realistic minimum performance target on a handheld (quick arithmetic below).
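    The frame-time arithmetic behind that 40 Hz suggestion:

    ```python
    # Per-frame latency is 1000 / fps in milliseconds; 40 fps sits exactly
    # halfway between 30 fps and 60 fps in frame time.
    def frame_time_ms(fps):
        return 1000.0 / fps

    for fps in (30, 40, 60, 120):
        print(f"{fps:>3} fps -> {frame_time_ms(fps):5.2f} ms/frame")

    midpoint = (frame_time_ms(30) + frame_time_ms(60)) / 2   # 25.0 ms
    print(f"halfway point: {midpoint} ms/frame = {1000 / midpoint:.0f} fps")
    ```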


  • Employment potential and learning are generally only concerns if you are young. If you are old, the time investment to learn a new language is generally not self-beneficial, as your window of employability starts to dwindle.

    Linux will ultimately run into this: if people want the newer language to become mainstream, they need to be more proactive in the development of the kernel itself instead of relying on the older generation, who do it the only way they know how, since relearning and rewriting everything is, to them, a waste of time at their point in life.

    Think of what Proton was for gaming. You don't (and will not) convince all devs to make Linux compatible games using a Vulkan branch; the solution on that front was to create a translation layer to offload most of that work, because it's nonsensical to expect every dev to learn Vulkan. Applied to the Linux kernel, the only realistic option (imo) is that the ones who work in Rust need to build the Rust based kernel themselves and hope it gains enough traction in a few years to actually take off.



  • Devs on PC have to decide which set of hardware to optimize for, a choice they base on hardware adoption trends. There is always a point where something is too hardware demanding and would greatly hinder sales. With a fixed hardware platform, devs have a concentrated point of hardware adoption to target.

    For instance, say you developed a game whose minimum hardware requirement is slightly higher than a Steam Deck. If enough Steam Deck sales exist, the dev has an incentive to optimize the game further just to get access to that market.


  • No, because the Tegra X1 was a processor originally designed for the Nvidia Shield TV and Jetson developer boards. Companies like Nintendo (for the Switch) and Google (for the Pixel C tablet) used the Tegra X1 as an off-the-shelf chip, which is why all of the listed devices are susceptible to the RCM exploit: they are the same chip.

    Semi-custom means key functionality is added to the chip beyond the OEM design that fundamentally makes it different. E.g. Valve has a Zen 2 + RDNA 2 iGPU instead of the off-the-shelf Zen 3 + RDNA 2 option. Sony, for example, has a memory accelerator on the PS5 to give it faster data streaming than standard designs, and supposedly a compute block on the PS5 Pro for better resolution scaling and ray tracing than standard AMD designs.

    Nvidia not doing semi-custom is the main reason Apple stopped using Nvidia after the GTX 670 era in their iMac lines in favor of AMD, and, for example, why Nvidia is very strict about the form factor their GPUs come in (there's a reason a smaller eGPU doesn't really exist for Nvidia GPUs while the AMD option is more common, despite AMD being less bought by consumers).