• 6 Posts
  • 47 Comments
Joined 1 year ago
Cake day: June 9th, 2023




  • Yes, .docx.

    It appears as though the encoding is broken in such a way that nothing on Linux recognizes the file. The underlying CLI tools don’t have a way of converting it; I tried with Python’s docx tooling and with iconv. It has to be encoding related, because some tools initially load the file as several runs of Asian characters instead of English, yet there are no sections of outright binary-looking data. Archiving tools will not open the file to reveal anything else, like a metafile or header. Neovim shows garbled nonsense throughout, bat warns that it is binary, Python won’t load the file, and neither will OnlyOffice. LibreOffice and AbiWord load it initially as Asian characters before crashing.
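
    For what it’s worth, since a healthy .docx is just a zip archive full of XML, a quick sanity check along the lines of this minimal sketch (the path is hypothetical) at least tells you whether the container is intact or the file is something else entirely:

    ```python
    #!/usr/bin/env python3
    # Minimal diagnostic sketch: a valid .docx is a zip archive of XML,
    # so the first two bytes should be the zip magic "PK". If that fails,
    # the file is corrupted or is not actually a .docx at all
    # (e.g. an old binary .doc or a renamed export).
    import sys
    import zipfile

    path = sys.argv[1]  # hypothetical, e.g. chapter1.docx

    with open(path, "rb") as f:
        magic = f.read(4)
    print("first bytes:", magic)

    if magic[:2] == b"PK":
        with zipfile.ZipFile(path) as z:
            # word/document.xml holds the actual text in a real .docx
            print(z.namelist()[:10])
    else:
        print("Not a zip container; python-docx and the office suites will choke on it.")
    ```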

    The only option is likely going to be setting up the W10 machine and converting a bunch of files within it.

    Ultimately, my old man thinks he can be an author all of a sudden and is trying to write. He’s not very capable of learning, and I’m not confident he can learn to use FOSS to do the same things he has been doing. This post was just to see if there are options I’m not already aware of that might actually work in practice. I can easily do everything I need in FOSS, and everything he needs to do. I’m more concerned about becoming his tech support when he forgets how to copy pasta. He already fails to separate the internet connection hardware from the web browser and the operating system within his mental model of technology.













  • So software like CAD is funny. Under the surface, 3D CAD like FreeCAD or Blender takes vertices and places them in a Cartesian space (X/Y/Z axes). Then it builds objects in that space by calculating the mathematical relationships in serial: each feature you add appends math problems to a tree, and each feature on the tree is built linearly and relies on the previously calculated math.

    Editing anything further up the tree is a massive issue called the topological naming problem. All CAD has this issue, and all fixes are hacks and patches that are incomplete solutions (it has to do with π and floating-point rounding at every stage of the math).
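
    To make the “tree of math” concrete, here is a toy sketch (my own illustration, not how FreeCAD is actually implemented) of features that each consume the previous result, which is why editing anything upstream forces everything after it to recompute:

    ```python
    # Toy model of a parametric feature tree (an illustration, not how
    # FreeCAD is implemented): each feature is a math step that consumes
    # the result of the previous one, so editing anything upstream forces
    # every downstream feature to be recomputed in order.
    import math

    def sketch(radius):
        return math.pi * radius ** 2        # area of the base sketch

    def pad(area, height):
        return area * height                # extrude the sketch into a volume

    def shell(volume, wall_fraction):
        return volume * wall_fraction       # hollow the solid out

    tree = [
        ("Sketch", lambda prev: sketch(radius=10.0)),
        ("Pad",    lambda prev: pad(prev, height=5.0)),
        ("Shell",  lambda prev: shell(prev, wall_fraction=0.2)),
    ]

    result = None
    for name, feature in tree:
        result = feature(result)            # strictly serial, in tree order
        print(f"{name}: {result:.6f}")
    # Change radius=10.0 and everything after "Sketch" is stale and must
    # be rebuilt; floating-point rounding at each stage is part of why
    # recomputes are so fragile.
    ```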

    Now, this is only the beginning. Assemblies are made of parts that each have their own Cartesian coordinate systems. Often, individual parts have features that reference other parts in a live relationship, where a change in part A also changes part B.

    Now imagine modeling a whole car, a game world, a movie set, or a skyscraper. The assemblies get quite large depending on what you’re working on. Just one entire 3D printer modeled in FreeCAD was more than my last computer could handle.

    Most advanced CAD needs a level of hardware integration where generalizations made for something like Wayland simply are not sufficient. Your default CPU scheduler (CFS on Linux) is set up to maximize throughput at all costs. For CAD, this is not optimal. Process niceness may be enough in most cases, but there may be times when true CPU-set isolation is needed to prevent anything from interrupting the math as it recomputes. How this is split and managed with a GPU may be important too.
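
    As a rough sketch of what the userspace side of that looks like on Linux (true cpuset isolation needs cgroups or kernel boot parameters and goes beyond this), pinning and re-nicing a heavy recompute from Python’s standard library is roughly:

    ```python
    # Rough sketch of the userspace side on Linux: pin a heavy recompute
    # to specific cores and raise its niceness so the desktop stays
    # responsive on the rest. Core numbers are arbitrary examples.
    import os

    os.sched_setaffinity(0, {2, 3})   # restrict this process to cores 2 and 3
    os.nice(10)                       # lower our priority; going below 0 needs privileges

    print("running on cores:", os.sched_getaffinity(0))
    # ...kick off the CAD recompute or solver from here...
    ```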

    I barely know enough to say this much. When I was pushing my last computer too far with FreeCAD, tuning the CPU scheduler stopped a crashing problem and extended my use slightly, but was not worth much; I really needed a better computer. However, looking into the issue deeply was interesting. It revealed how CAD is a solid outlier workload that is extremely demanding and very different from the rest of the desktop, where user experience is the focus.



  • Secure Boot requires all kernel modules to be signed. The system Fedora uses rebuilds the drivers from source with every new kernel update. It works, but it can’t be modified further.

    The primary issue you will likely come across is that the nvcc compiler is not open source, and it is part of the CUDA toolchain. You can’t build things like llama.cpp with CUDA support without nvcc, and most example-type projects have the same issue. Without nvcc fully open, you are still somewhat limited. The nvcc toolchain also interferes with the open-source-built stuff and will put you back at the train wreck of Secure Boot. If Nvidia had half a working brain, they would open source everything instead of the petty, conservative nonsense that drives proprietary fools. There is absolutely no room in AI for anyone that lacks full transparency.


  • No. You can use either a Fedora distro or regular default vanilla Ubuntu. Both of these distros ship a shim bootloader with keys signed through Microsoft’s third-party signing program.

    If you want to run anything else, you need to enroll and sign your own key for Secure Boot. Gentoo has killer documentation on how to do this, and it doesn’t matter what distro you use, because Secure Boot sits outside of the Linux kernel. With Fedora, it is handled by their Anaconda system (no relation to the Python distribution of the same name).


  • j4k3@lemmy.world to Linux@lemmy.ml · Niche Distro Users: Why?

    It’s like Linux From Scratch… with friends. Every distro has a purpose. I haven’t done anything super niche. One day I’ll probably try to run Gentoo much more seriously, and maybe an LFS just to see if I can.

    Linux is the realm of all computer science students when it comes time to learn about operating systems, processes, threading, interrupts, schedulers, memory, etc. All levels exist in this space. The major distros all have underlying reasons they exist too. It is not branding/marketing like much of the consumer world.


  • As far as I understand it, wouldn’t the cell library be more like the node’s equivalent of a KiCad library of 0402 passive footprints in PCB design? As in: here is how we must do gates, buses, etc., but that has nothing to do with how the ALU is set up or the LCR aspects of a final design?

    I’ve honestly only watched Asianometry, skimmed Intro to VLSI a few times, dabbled in FPGAs, built Ben Eater’s breadboard computer, and screwed around with a CPU scheduler to learn why my last computer sucked at complex CAD assemblies. When I was looking for AI hardware to run LLMs, I went deep enough to understand the specific CPU limitation, and upon learning about my phone’s matrix coprocessor I tried to learn enough to understand why the thing even exists. That led me to the understanding that a model can be designed for a specific architecture and run MUCH faster and smaller.

    I explain things as they sit on my roadmap of understanding, knowing I’m likely wrong on the edge cases. I am no expert. I’m trying to give anyone enough rope to pull on so that I can find out where I’m wrong and learn. I share because I want to learn, and I want to be wrong, but only in a way that lets me extend my mental roadmap incrementally.


  • I appreciate the perspective, and I acknowledge that Apple is on an edge node. ARM sells the IP blocks required to create masks. Apple is designing chips about as much as a toddler with wooden toy blocks is a master carpenter. They are paying a hefty royalty to ARM both for the design and for every single chip sold. RISC-V obsoletes this business model.

    While it is true that Intel struggles, the Intel/AMD duality and transparency have a history that goes back to the government/enterprise requirement for second sourcing. Perhaps we live in a world where figureheads are completely ignorant of single source extortion and monopolies, but this is the direct inevitability of any move to ARM.

    I believe the push for ARM is limited to the consumer space and is attempting to follow in the footsteps of smartphones and Apple, as a method of planned obsolescence through a proprietary POSIX kernel. This move is intended to undercut the market in the hope that people are ignorant enough to sell their right of ownership and get stuck with a worthless product in the long term.

    One could argue that all hardware is a worthless product long term. That has been the case for quite a while, but less so recently, and it won’t remain the case in the future. The odds are high that your present mobile device is not much more advanced than your last. If your present device were designed for repairability and durability, it could likely last 5–10 times longer than the orphaned kernel used to steal ownership allows. That kernel leverages a proprietary binary that supports the hardware and obsoletes the device. Every device model, and even some carrier sub-models, has a unique binary, which makes reverse engineering it to open source the support impractical. It has been attempted before, such as with the LG Hammerhead, but the challenge is too monumental and time consuming.

    All that said, I don’t think anyone has seen where the market is headed right now. If you step back to the big-picture abstraction, as I love to do, there is a very important shift happening right now. At present, all processor designs have failed: they use a math coprocessor for matrix multiplication. This is an absolute disaster for the CPU architecture. Intel tried this in the early days of x86 with a math coprocessor as an accessory, and it failed miserably in practice. Under the surface, this issue comes down to the bandwidth between the L2 and L1 cache.

    I have been thinking about this for a while, and while this is entirely speculative, I don’t think it is a solvable problem without a roughly ten-year redesign from scratch. This area of the die sits at the edge of the insanely fast clock speeds in the core. Increasing the bus width here will inevitably cause major issues. This is the gigahertz regime, where electrical properties turn magic, with signals that can jump gaps all over the place because everything is a capacitor, a resistor, and an inductor. The power differential of signals in this region is minuscule, and a large amount of parallelism is a monumental hurdle for the moments when the majority of that bus is in the high state. If that bus could be wider, I believe it already would be. The push for single-threaded performance and the marketability of CPU speed is what has driven the evolution of this CPU architecture for decades now.
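
    As a back-of-the-envelope illustration of why the bandwidth is the wall (every number below is an assumption for the sake of the arithmetic, not a measurement of any real chip):

    ```python
    # Back-of-the-envelope illustration; every number here is an assumed
    # value for the sake of the arithmetic, not a measurement of any chip.
    n = 1024                          # square matrix dimension
    flops = 2 * n ** 3                # multiply-accumulates in C = A @ B

    # Crude traffic model with no blocking/tiling: one 4-byte element of
    # A, B, and C touched on every inner-loop step.
    bytes_moved = 3 * 4 * n ** 3

    intensity = flops / bytes_moved   # FLOP per byte of cache traffic
    print(f"arithmetic intensity (no blocking): {intensity:.3f} FLOP/byte")

    peak_flops = 1e12                 # assume 1 TFLOP/s of raw matrix math
    cache_bw = 200e9                  # assume 200 GB/s out of the cache hierarchy
    achievable = min(peak_flops, cache_bw * intensity)
    print(f"bandwidth-limited throughput: {achievable / 1e9:.0f} GFLOP/s")
    # With these assumptions the cores sit mostly starved: ~33 GFLOP/s out
    # of a possible 1000, which is why everyone bolts on wide coprocessors.
    ```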

    The market is focused on the most viable alternative, the GPU as a math coprocessor, but this is a hack and is untenable long term. In the long term, data centers are not going to bear the overhead of a dual-compute solution. Anyone that designs a new scalable processor that can flexibly handle both traditional code and matrix multiplication will win the ensuing market across the board. Any business that can handle both types of execution and scale to accommodate demand using its entire available infrastructure will inevitably be much more efficient and more profitable.

    What does this really mean? It means that, as of a year and a half ago, the entire market changed at the foundational level. Hardware is very slow; it takes around ten years from initial idea to first consumer market availability. At best, that means the current systems paradigm has ~8 years before total obsolescence. I am willing to bet the farm that no one will then be using a CPU from the present, or a GPU as a math coprocessor for matrix math.

    How does this change the market? (Nobody asked.) It takes away the advantage of incumbency and establishment. It also takes away the security of iterative conservatism; all of a sudden it is a massive liability to be iteratively conservative. Simply having the capital to pursue this new shift is a viable path to a competitive market share. There is absolutely no reason to hire ARM to do anything from scratch like this. It is far smarter to poach their engineers, and engineers from academia, and use RISC-V without paying the royalty to an unnecessary middleman. ARM’s only selling point is the initial cost savings of a prepackaged design, but all of their IP blocks are still focused on single-threaded code execution. In fact, ARM is at a major disadvantage due to the nature of reduced instruction sets, as it will require a redesign to accommodate AVX-like instructions capable of loading matrix math quickly.

    The primary way Apple/ARM is handling AI workloads right now is through software optimisations. Sizing the tensors to the actual hardware massively improves the speed, but this is simply software stuff. AI is moving too fast for this to work as a long-term practice; every model architecture needs to be tweaked, and the future will be very different. A flexible solution at the hardware level is required.
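
    As a toy example of what “sizing the tensors to the hardware” tends to mean in practice (the tile width of 16 is an assumption, not any particular chip’s spec): pad the dimensions up to the unit’s native tile so the matrix hardware runs full tiles instead of ragged edges.

    ```python
    # Toy illustration of sizing tensors to the hardware: pad matrix
    # dimensions up to a multiple of an assumed native tile width so the
    # matrix unit processes full tiles instead of ragged edges.
    import numpy as np

    TILE = 16  # assumed tile width, not any particular chip's spec

    def pad_to_tile(m: np.ndarray) -> np.ndarray:
        rows = -(-m.shape[0] // TILE) * TILE  # ceiling to the next multiple of TILE
        cols = -(-m.shape[1] // TILE) * TILE
        out = np.zeros((rows, cols), dtype=m.dtype)
        out[: m.shape[0], : m.shape[1]] = m
        return out

    a = np.random.rand(1000, 300).astype(np.float32)
    b = pad_to_tile(a)
    print(a.shape, "->", b.shape)  # (1000, 300) -> (1008, 304)
    ```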

    American businesses have become extremely slow and conservative. There is no telling who is doing the next generation of dominant hardware right now. Judging by the clown show of how the Americans handled tooling up for EVs, I expect the industry will pivot to Asia entirely. They have the foresight and stability to compete in this situation. The only real question is whether ASML wants to stay relevant and sell to China, or whether China has already found a replacement for EUV. I don’t think anyone will reveal a single detail of what they are working on in the present, but when everyone shows their hand, it is a truly open statistical game unlike any time in the last 30 years. I see no reason why the establishment has a fortified entrenchment in the market.

    I believe this will kill both ARM and x86 in a different future world, bond.


  • ThinkPads were the enterprise standard. They were well documented and had full-spec software implementations. This was the reputation that built the icon.

    I don’t trust anything from Google, and especially anything with ARM. I’ll use GrapheneOS, but only because of the dedicated security chip that can better attest to what is actually happening in hardware, when absolutely every mobile device made is a heap of shit hardware.

    With a computer I have better options. When I was younger and dumber, I thought Android was great because it was Linux. Since then, I have learned that the entire scheme of Android is a way for Google to enable and manipulate an industry while stealing ownership of all consumer devices, using orphaned kernels to deprecate devices.

    I learned my lesson. Everything Google touches is a shitty scheme: from the “free” stalkerware internet model that has completely undermined the free press and freedom of information as a pillar of democracy, to the ownership over a part of me that is used to manipulate me, to the theft of my device itself; nothing Google does is ever in your best interest. The only time it is worth buying Google hardware is with an extremely well-reasoned group like GrapheneOS, which has nothing to do with Google and is not in any way funded by or associated with Google.

    ARM is dead in the water; the writing is on the wall. The same last hurrah of hardware happened when PowerPC and Motorola’s 68k were about to die. The most important thing to know is how Apple actually works, and has always worked, when it is successful. Apple leverages sinking-ship silicon with buying power, plus next-level software development to squeeze all the untapped potential out of the device. All of the bugs and issues are fairly well known and documented. This low-grade, trailing-edge hardware is placed in a pretty dress and marketed to people that are clueless about actual hardware, and those people pay a premium. Their stuff works great and performs quite well for what it is, but nothing about it is cutting edge. Apple profits from selling old tech as a premium product.

    The 6502 was the hack job that started the trend, and it only existed because it cost a fraction of what every other processor did. That was its only real selling point. MOS’s fab quality was so bad they couldn’t compete with the speeds of their competition, so they came up with an early dual instruction-loading pipeline to try to get anywhere near the speeds of Zilog, Intel, and Motorola. This is how Apple started: with the 6502. The only architecture Apple has ever used that has not already failed is x86. When Apple chooses ARM, that is the death knell.

    The true tell about ARM is how it was sold by the original Acorn group ownership right after RISC-V achieved full independence from UC Berkeley. The entire business model of ARM is to keep everything proprietary. This is a key player in the theft of ownership and the dystopianism of the present neo-feudal digital age. It is the polar opposite of the original legacy of the IBM ThinkPad. The present hardware with ThinkPad stickers doesn’t come close to that original legacy in any way. The world is more complicated now. But we have tools like the Linux hardware probe to find what works and what doesn’t, and distros like Fedora just work.