• 0 Posts
  • 31 Comments
Joined 1 year ago
Cake day: July 31st, 2023


  • That’s not the point, though. The point is to use a nominal type that asserts an invariant and make it impossible to create an instance of said type which violates the invariant.

    Both validation functions and refinement types put the onus on the caller to ensure they’re not passing invalid data around, but only refinement types can guarantee it. Humans are fallible, and it’s easy to accidentally forget to put a check_if_valid() function somewhere or assume that some function earlier in the call stack did it for you.

    With smart constructors and refinement types, the developer literally can’t pass an unvalidated type downstream by accident.
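
    A minimal Rust sketch of the idea (all names here are hypothetical): the field is private, so code outside the module can only obtain a Percent by going through the validating constructor.

    mod refined {
        /// Refined type: a value guaranteed to be in 0..=100.
        pub struct Percent(u8); // field is private outside this module

        impl Percent {
            /// Smart constructor: the only way to obtain a Percent.
            pub fn new(value: u8) -> Result<Percent, String> {
                if value <= 100 {
                    Ok(Percent(value))
                } else {
                    Err(format!("{value} is not a percentage"))
                }
            }

            pub fn get(&self) -> u8 {
                self.0
            }
        }
    }

    // Downstream code states the invariant in its signature
    // and never needs to re-check it.
    fn apply_discount(price: f64, discount: &refined::Percent) -> f64 {
        price * (1.0 - discount.get() as f64 / 100.0)
    }

    Percent::new(120) returns an Err, so an out-of-range value can never reach apply_discount by accident.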


  • You’re going to need to cite that.

    I’m not familiar with C23 or many of the compiler-specific extensions, but in all the previous versions I’ve worked with, there is no type visibility other than “fully exposed” or opaque and dangerous (void*).

    You could try wrapping your Foo in

    typedef struct {
        Foo validated;
    } ValidFoo;
    

    But nothing stops someone from being an idiot about it and constructing it by hand:

    Foo someFoo;                     /* never validated */
    ValidFoo trustMeBro;
    trustMeBro.validated = someFoo;  /* compiles; no invariant checked */
    otherFunction(trustMeBro);
    

    Or even just casting it.

    Foo* someFoo;
    otherFunction((ValidFoo*) someFoo); /* the cast defeats the type entirely */
    

  • If it were poorly designed and used exceptions, yes. The correct way to design smart constructors is to not actually use a constructor directly but instead use a static method that forces the caller to handle both cases (or explicitly ignore the failure case). The static method would have a return type that either indicates “success and here’s the refined type” or “error and this is why.”

    In Rust terminology, that would be a Result<T, Error>.

    For Go, it would be (*RefinedType, error) (where dereferencing the first value without checking it would be at your own peril).

    C++ would look similar to Rust, but a Result-like type didn’t come as part of the standard library last I checked.

    C doesn’t have the language-level features to be able to do this. You can’t make a refined type that’s accessible as a type while also making it impossible to construct arbitrarily.
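
    A hedged sketch of that shape in Rust (the NonEmpty type is invented for illustration): the constructor is a static method returning a Result, so the caller has to confront the failure case, or opt out of it explicitly, before touching the value.

    mod nonempty {
        pub struct NonEmpty(Vec<i32>); // field is private outside this module

        impl NonEmpty {
            /// Static-method smart constructor: success carries the
            /// refined type, failure carries the reason.
            pub fn new(items: Vec<i32>) -> Result<Self, &'static str> {
                if items.is_empty() {
                    Err("empty input")
                } else {
                    Ok(NonEmpty(items))
                }
            }

            /// Safe: the invariant guarantees at least one element.
            pub fn first(&self) -> i32 {
                self.0[0]
            }
        }
    }

    use nonempty::NonEmpty;

    fn main() {
        // The caller must handle both cases to get at the value...
        match NonEmpty::new(vec![]) {
            Ok(list) => println!("first = {}", list.first()),
            Err(why) => eprintln!("rejected: {why}"),
        }

        // ...or explicitly opt out of handling it (panics on Err).
        let list = NonEmpty::new(vec![1, 2, 3]).expect("validated input");
        println!("first = {}", list.first());
    }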


    Unless you’re a functional programming purist or coming from a systems programming background, it takes a lot longer than a few days to get used to the borrow checker. If you’re coming from garbage-collected languages, it’s even worse.

    The problem isn’t so much understanding what the compiler is bitching about, as it is understanding why the paradigm you used isn’t safe and learning how to structure your code differently. That part takes the longest and only really starts to become easier when you learn to stop fighting the language.
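
    As a small illustration of the restructuring (a sketch, with made-up variable names): a pattern that’s routine in garbage-collected languages, holding a reference into a collection while mutating it, gets rejected, and the fix is to reorganize the code rather than to silence the error.

    fn main() {
        let mut scores = vec![10, 20, 30];

        // Rejected by the borrow checker: `first` borrows `scores`,
        // and push() may reallocate and invalidate that reference.
        // let first = &scores[0];
        // scores.push(40);
        // println!("{first}");

        // Restructured: copy the value out, ending the borrow,
        // before mutating the vector.
        let first = scores[0];
        scores.push(40);
        println!("{first}");
    }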




    The first directory block is a hole. But type == DIRENT, so no error is reported. After that, we get a directory block without ‘.’ and ‘..’ but with a valid dentry. This may cause some code that relies on dot or dotdot (such as make_indexed_dir()) to crash.

    The problem isn’t that the block is a hole. It’s that the downstream function expects the directory block to contain . and .., and incorrect error handling hands it one without them.

    You can encode the invariant of “has dot and dot dot” using a refinement type and smart constructor. The refined type would be a directory block with a guarantee it meets that invariant, and an instance of it could only be created through a function that validates the invariant. If the invariant is met, you get the refined type. If it isn’t, you only get an error.

    This doesn’t work in C, but in languages with stricter type systems, refinement types are a huge advantage.
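
    A rough Rust sketch of that idea (DirBlock, LinkedDirBlock, and the entry representation are all invented for illustration):

    mod dir {
        /// Hypothetical raw directory block: just a list of entry names.
        pub struct DirBlock {
            pub entries: Vec<String>,
        }

        /// Refined type: a block guaranteed to contain "." and "..".
        pub struct LinkedDirBlock(DirBlock); // only obtainable via validate()

        impl LinkedDirBlock {
            /// Smart constructor: checks the invariant once, at the boundary.
            pub fn validate(block: DirBlock) -> Result<Self, &'static str> {
                let has = |name: &str| block.entries.iter().any(|e| e == name);
                if has(".") && has("..") {
                    Ok(LinkedDirBlock(block))
                } else {
                    Err("directory block missing '.' or '..'")
                }
            }

            pub fn entries(&self) -> &[String] {
                &self.0.entries
            }
        }
    }

    /// Standing in for make_indexed_dir(): by taking the refined type,
    /// it can rely on "." and ".." being present without re-checking.
    fn make_indexed(block: &dir::LinkedDirBlock) {
        println!("indexing {} entries", block.entries().len());
    }

    fn main() {
        let raw = dir::DirBlock {
            entries: vec![".".into(), "..".into(), "a".into()],
        };
        match dir::LinkedDirBlock::validate(raw) {
            Ok(block) => make_indexed(&block),
            Err(e) => eprintln!("corrupt block: {e}"),
        }
    }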






  • Part of the hostility was the other maintainer misunderstanding the presenter, going on a diatribe about how the kernel Rust maintainers are going to force the C code to become unrefactorable and stagnate, and rudely interrupting the presenter with another tangent whenever he (the presenter) tried to clarify anything.

    An unpleasant mix of DM railroading and Gish galloping, essentially.

    I wouldn’t quite call it a strawman, but the guy was clearly not engaging in good faith. He made up hypothetical scenarios that nobody asked about, and then denigrated Rust by attacking the scenarios he came up with.

    Edit: I was thinking of the wrong fallacy. It is a strawman, yes.



  • TL;DR: While Intel had their heads shoved up their ass making the Itanium architecture, AMD made a 64-bit variant of x86 that was backward compatible with the older x86 ISA. Technology moved on, and amd64 was adopted while Intel kept trying and failing to push their binary-incompatible architecture.

    Eventually, Intel had to give up and adopt AMD’s amd64 ISA. In exchange for the right to use it, Intel lets AMD keep using the older x86 ISA.



    Moore’s Law is Dead shared an interesting video yesterday about these chips. Supposedly, leaks from his sources at Intel say that high voltages being pushed through the ring bus cause degradation. The leaks claim the ring bus shares the same power rail as the P and E cores, meaning it’s influenced by the voltage requested by the cores.

    For context, the ring bus is responsible for communication between cores, peripherals, and the platform. This includes memory accesses, which means that if the ring bus fails and does something incorrectly, it could appear normal but result in errors far down the line.

    Going beyond the video specifically, and considering what others have suggested as workarounds, it seems like ring bus degradation might be a decent candidate for the actual root cause of these issues.

    Some observations about degrading chips were:

    • High memory pressure exacerbates the issue.
    • Chips with more cores deteriorate faster.

    Some of the suggestions to work around the issue were:

    • Lower the memory speed.
    • Lower the voltage and clock speeds.
    • Disable E cores.

    All of those can be related to stress being put on the ring bus:

    • Higher voltage being put through the bus -> higher likelihood of physical damage
    • More memory pressure -> more usage of the bus, more opportunity for damage to accumulate
    • More cores -> more memory pressure
    • Slower memory speeds -> lower maximum throughput -> less stress

    I’m not claiming anything definitive, but I think my money is on this one.