

Why pay for anything ever if it’s going to potentially get taken away?
Because it’s called “lifetime”? As in the entire point of the product is that it will not ever be taken away, unless you close your account yourself? “Why pay for anything if there’s nothing enforcing the core premise of the product?” The gardener advertised a “whole-yard mow” for $100, but I’ve already gotten the area around the driveway, and honestly would it really be that bad if they just stopped right now?
You can talk about odds all you want (although I think around $100 million in VC funding puts those odds squarely in favor of “lifetime” users getting the floor sawed out from under them Looney Tunes-style), but the fact it’s even possible is what’s deeply disturbing, because it’s deliberate. Lifetime’s meaning should be unambiguously stipulated in a contract, not inferred. Know why? Because companies out there advertising “lifetime” subscriptions right now have little disclaimers like “approximately five years or so but honestly we don’t really know or care lol this license disappears whenever we want it to”.
People are assuming it’s for the lifetime of your Plex account, but my response is: based on fucking what? Plex doesn’t seem to specify this anywhere on their website, even in their terms of service. People asking on their official forums receive responses saying things like “probably for the lifetime of your Plex account” with no sources to anything. I’m not trying to sealion here; I literally can’t find a single instance of Plex stating officially in writing or verbally what “lifetime” actually means to the end user. If Plex isn’t going to rugpull, why can’t they add a couple sentences to their TOS saying something like: “The purchase of a lifetime pass grants the user a non-transferable license for [blah blah] starting from the date of purchase. This license will not be revoked unless 1) the associated account is terminated by the account holder or 2) the associated account is terminated by Plex for one or more of the reasons outlined in section [blah]”?
They could, they should, they don’t, and you have no good explanation, otherwise you would’ve offered one by now. They have enough money to afford a legal team that wouldn’t overlook this. The answer is that they want to reserve the right to destroy the “lifetime” pass whenever they want. If you can find official documentation from Plex Inc. saying that if I buy a lifetime pass today for $250, the license ends only when the account is terminated, then I’ll have no idea why they made it so hard to find, but I’ll take back everything else I said in this comment and stop using “lifetime” in scare quotes. I genuinely want to know if they say anything about this anywhere.
This is entirely correct, and it’s deeply troubling to see the general public use LLMs for confirmation bias because they don’t understand anything about them. It’s not “accidentally confessing,” as the other reply to your comment suggests. An LLM is just designed to process language, and because it’s trained on the largest datasets in history, there’s practically no way to know where any individual output came from unless you can directly verify it yourself.
Information you prompt it with is tokenized, run through a transformer model whose hundreds of billions or even trillions of parameters were adjusted according to god only knows how many petabytes of text data (weighted and sanitized however the trainers decided), and then the resulting tokens are detokenized and printed to the screen. There’s no “thinking” involved here, but if we anthropomorphize it like that, then there could be any number of explanations: it “thinks” that’s what you want to hear; it “thinks” that based on the mountains of text data it’s been trained on calling Musk racist; etc. You’re talking to a faceless amalgam unslakably feeding on unfathomable quantities of information with minimal scrutiny and literally no possible way to enforce quality beyond bare-bones manual constraints.
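To put that pipeline in concrete terms, here’s a rough sketch of what a prompt actually goes through, using Hugging Face’s transformers library with GPT-2 as a small stand-in (the model name and generation settings here are just illustrative, not what any particular chatbot uses):

```python
# Sketch of the prompt -> tokens -> transformer -> tokens -> text pipeline.
# Assumes the Hugging Face "transformers" library; GPT-2 is just a small stand-in model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Is this claim about the company true?"
inputs = tokenizer(prompt, return_tensors="pt")  # text -> token IDs

# The model only predicts a plausible continuation of the token sequence,
# based on whatever its weights absorbed during training; nothing in this
# step checks facts or records where the underlying text came from.
output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # token IDs -> text
```

Every step in there is mechanical; there’s no point where a “confession” could be anything other than a statistically plausible string of tokens.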
There are ways to exploit LLMs into revealing sensitive information, yes, but you then have to confirm that the “sensitive information” is true, because all you’ve done is send data into a black box and gotten something out. You can get a GPT to solve a sudoku puzzle, but you can’t parade that around before you’ve checked that the solution is actually correct. You cannot ever, under literally any circumstance, trust anything a generative AI creates for factual accuracy; at best, you can use it as a shortcut to an answer which you then attempt to verify.
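And to be clear about what “checking” means: the check lives entirely on your side, outside the model. For the sudoku example, something like this little sketch (plain Python, no libraries, just a hypothetical helper I’m making up for illustration) is worth more than anything the model says about its own answer:

```python
# Sketch: verifying an LLM-produced sudoku solution yourself.
# The model's output is untrusted input; this check is the only part you can rely on.
def is_valid_sudoku(grid: list[list[int]]) -> bool:
    """Return True if grid is a 9x9 solution where every row, column, and 3x3 box
    contains the digits 1-9 exactly once."""
    def ok(cells):
        return sorted(cells) == list(range(1, 10))

    rows = all(ok(row) for row in grid)
    cols = all(ok([grid[r][c] for r in range(9)]) for c in range(9))
    boxes = all(
        ok([grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)])
        for br in range(0, 9, 3)
        for bc in range(0, 9, 3)
    )
    return rows and cols and boxes
```

(You’d also want to confirm the solution keeps the original puzzle’s given clues, but the point stands: the verification is yours, not the model’s.)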