Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit and then some time on kbin.social.
Not in every way. They’re cheaper and faster.
That’s not what they’re arguing, not even close.
And unfortunately, this article is itself just a response to media clickbait, not the discussion piece it tries to look like.
And becomes new clickbait in the process.
Looking forward to the “Waymo robotaxis become silent killers stalking the night” headlines once the fix is implemented.
I run tabletop roleplaying adventures, and LLMs have proven to be great “brainstorming buddies” when planning them out. I bounce ideas back and forth, flesh them out collaboratively, and have the LLM speak “in character” to give me ideas for what the NPCs would do.
They’re not quite up to running an adventure on their own yet, but they make an awesome support tool.
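If you’d rather script that kind of in-character brainstorming than use a chat UI, it only takes a few lines. A minimal sketch, assuming the OpenAI Python client; the model name, innkeeper persona, and prompt are purely illustrative:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Ask the model to answer in character, then mine its reply for NPC ideas.
# The persona and the question are made-up examples from my own prep.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; use whatever model you prefer
    messages=[
        {
            "role": "system",
            "content": "You are Maelis, a suspicious dwarven innkeeper in a "
                       "fantasy tabletop campaign. Stay in character and "
                       "keep your answers brief.",
        },
        {
            "role": "user",
            "content": "Some strangers are asking about the old mine. "
                       "What do you tell them?",
        },
    ],
)
print(response.choices[0].message.content)
```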
It’s impossible to run an AI company “ethically” because “ethics” is such a wibbly-wobbly and subjective thing, and because there are people who simply wish to use it as a weapon on one side of a debate or the other. I’ve seen the goalposts shift around quite a lot in arguments over “ethical” AI.
not some fucking investors and shareholders that probably kept pressuring CS for the last several years to reduce costs and increase revenue,
This is presumably part of what would be at issue in court. The shareholders are claiming they were lied to. We’ll see how that holds up.
CrowdStrike (CRWD.O) has been sued by shareholders who said the cybersecurity company defrauded them by concealing how its inadequate software testing could cause the July 19 global outage that crashed more than 8 million computers.
In a proposed class action filed on Tuesday night in the Austin, Texas federal court, shareholders said they learned that CrowdStrike’s assurances about its technology were materially false and misleading when a flawed software update disrupted airlines, banks, hospitals and emergency lines around the world.
Basically, the company presented itself to shareholders as being one way, they bought in on that basis, and then it turned out it had been misrepresenting itself. Presumably they’re suing the company and not the executives personally because that’s where the money is.
Note that simply owning the shares doesn’t mean it’s already “their money.” If I buy a share in a company, I can’t walk up and demand a portion of the cash from the register. It’s more complicated than that, and lawsuits like this are part of that complexity.
That would depend entirely on why OpenAI might go under. The linked article is very sparse on details, but it says:
These expenses alone stack miles ahead of its rivals’ expenditure predictions for 2024.
Which suggests this is likely an OpenAI problem and not an AI-in-general problem. If OpenAI goes under, the rest of the market may actually surge as competitors devour its abandoned market share.
AI engineers are not a unitary group with opinions all aligned. Some of them really like money too. Or just want to build something that changes the world.
I don’t know of a specific “when” where a bunch of engineers left OpenAI all at once. I’ve just seen a lot of articles over the past year with some variation of “<company> is a startup founded by former OpenAI engineers.” There might have been a surge when Altman was briefly ousted, but that episode was short enough that I wouldn’t expect a visible spike on the graph.
We are talking specifically about OpenAI, though.
Well, my point is that it’s already largely irrelevant what they do. Many of their talented engineers have moved on to other companies, some to new startups and some to already-established ones. The interesting new models and products aren’t really coming from OpenAI any more.
I wouldn’t be surprised if “safety alignment” is one of the reasons, too. There are a lot of folks in tech who really just want to build neat things and it feels oppressive to be in a company that’s likely to lock away the things they build if they turn out to be too neat.
OpenAI is no longer the cutting edge of AI these days, IMO. It’ll be fine if they close down. They blazed the trail and set the AI revolution in motion, but now lots of other companies have picked it up and are doing it better than they are.
They don’t use consumer GPUs; they use more specialized datacenter accelerators like the H100.
A surprising number of “file formats” these days are really just zip files with a standard for the filenames and folders contained within. There’s likely a ton of wonderful secrets like these to be found in the collective dataspace of humanity.
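You can see this for yourself with nothing but Python’s standard library. A minimal sketch; the filename is a placeholder for any .docx, .epub, .jar, or similar file you have lying around:

```python
import zipfile

# Many "file formats" (.docx, .xlsx, .epub, .jar, .apk, .odt) are just zip
# archives with a prescribed internal layout of filenames and folders.
# "document.docx" is a placeholder; point this at any such file.
with zipfile.ZipFile("document.docx") as archive:
    for name in archive.namelist():
        print(name)
```

Rename one of those files to .zip and your archive tool will open it just the same.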
Also Library Genesis.
The IA is appealing the decision, so they’re not out of the woods just yet.
But when you die and an AI company contacts all your grieving friends and family to offer them access to an AI based on you (for a low, low fee!)
You can stop right there; you’re just imagining a scenario that suits your prejudices. Of all the applications for AI I can imagine, this would be at the top of the list of those better served by a model entirely under my control.
With that out of the way, the rest of your rhetorical questions are moot.
It’s often not a choice between an AI-generated summary and a human-generated one, though. It’s a choice between an AI-generated summary and no summary.