

I mean yeah, it exists, but have you tried using calligra for anything productive? It is missing so many basic features and has lots of annoying bugs.
It’s entirely web-based (their desktop app uses Electron). It’s quite good, has no problems editing Microsoft’s shitty formats, and offers a feature set on the level of Office 365’s web version.
Downside: made by a Russian company which has since re-incorporated in Singapore I think.
Just use the newest driver and you’ll be completely fine. Even with very recent hardware, everything works as expected for me.
People like to shit on Nvidia, which is deserved for their business practices and their past relationship with Linux. However, most people who claim there are issues clearly haven’t used an Nvidia GPU under Linux in a long time.
It just works.
Photogrammetry is very computationally expensive; I don’t think current phones have what it takes to do it in a reasonable amount of time.
On PC, COLMAP is the OG suite; its data format is widely used even outside of COLMAP itself, for example in Gaussian splatting or NeRF pipelines.
It’s FOSS of course.
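To illustrate how reusable that format is, here’s a minimal sketch (my own helper, not part of COLMAP) for reading the text-format `images.txt` that COLMAP writes and that Gaussian splatting / NeRF pipelines commonly ingest:

```python
# Per COLMAP's documented layout, each image takes two lines:
#   "IMAGE_ID QW QX QY QZ TX TY TZ CAMERA_ID NAME"
# followed by a line of 2D point observations; '#' lines are comments.

def parse_colmap_images(text):
    """Return {image_name: (qvec, tvec)} from images.txt content."""
    lines = [l for l in text.splitlines() if l.strip() and not l.startswith("#")]
    poses = {}
    for pose_line in lines[0::2]:  # every second line is the 2D-points line
        fields = pose_line.split()
        qvec = tuple(float(v) for v in fields[1:5])  # rotation quaternion (w, x, y, z)
        tvec = tuple(float(v) for v in fields[5:8])  # camera translation
        poses[fields[9]] = (qvec, tvec)
    return poses
```

That’s the whole pose file; cameras.txt (intrinsics) and points3D.txt (the sparse cloud) follow similarly simple layouts.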
I answered in another comment:
There seem to be conflicting opinions on the matter:
https://netzpolitik.org/2024/pay-or-okay-privatsphaere-nur-gegen-gebuehr/
https://www.etes.de/blog/pay-or-okay-pur-abo-modell-zulaessig/
In any case, the requirements for “pay or okay” being legal are (translated with DeepL):
“In principle, the tracking of user behavior can be based on consent if a tracking-free model is offered as an alternative, even if this is subject to payment. However, the service that users receive in a paid model must firstly represent an equivalent alternative to the service that they obtain through consent. Secondly, the consent must meet all the conditions for effectiveness set out in the General Data Protection Regulation (GDPR), i.e. in particular the requirements listed in Art. 4 No. 11 and Art. 7 GDPR. Whether the payment option - e.g. a monthly subscription - is to be regarded as an equivalent alternative to consent to tracking depends in particular on whether users are given equivalent access to the same service in return for a standard market fee. Equivalent access generally exists if the offers include the same service, at least in principle.”
If a user opts for the subscription option, only storage and readout processes that are technically absolutely necessary may take place (Section 25 (1) TTDSG). Furthermore, the permissions under Art. 6 para. 1 GDPR must be complied with.
“If there are several processing purposes that differ significantly from one another, the requirements for voluntariness must be met to the effect that consent can be granted on a granular basis. This means, among other things, that users must be able to actively select the individual purposes for which consent is to be obtained (opt-in). Only if purposes are very closely related can a bundling of purposes be considered. A blanket overall consent for different purposes in this respect cannot be effectively granted.”
In addition, the consents must meet the other requirements of the GDPR. This applies in particular to the principle of transparency, comprehensibility and compliance with information obligations.
As I see it, at the very least the granularity requirement is not fulfilled in these cases.
No, it’s not. For some reason most of the larger German publications do this; so far they apparently haven’t been sued.
I guess it comes down to whether it’s legal to train image generation models on copyrighted material. Midjourney etc can’t produce a very accurate image of copyrighted characters if those characters aren’t in the training set.
Good one
With discord already being pretty shitty, I am interested in what ideas they are coming up with.
uBlock Origin and SponsorBlock make YouTube bearable.
If you’re on mobile, use Tubular - it has ad blocking as well as SponsorBlock integrated.
It’s not a network file system. It’s a regular file system for hard drives, SSDs and such, which has been used by default on Windows since Windows NT (that’s where the NT comes from - it doesn’t stand for network but “new technology”).
The implementation in Windows is closed source, meaning the file system had to be reverse engineered to work at all under Linux. Support nowadays is okay-ish, but as soon as you don’t properly shut down your computer, or use the file system under Windows in between, you will run into weird problems.
Also it just straight up doesn’t work for most games running under wine.
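For what it’s worth, a sketch of how I’d mount an NTFS partition these days (the UUID and mount point below are placeholders), using the in-kernel ntfs3 driver that landed in Linux 5.15 rather than the older FUSE-based ntfs-3g:

```
# /etc/fstab entry - UUID and mount point are placeholders
# "ntfs3" = in-kernel driver (Linux >= 5.15); use "ntfs-3g" on older kernels
# windows_names rejects file names that Windows itself can't handle
UUID=0123-ABCD  /mnt/windows  ntfs3  defaults,uid=1000,gid=1000,windows_names  0  0
```

Neither driver will mount a “dirty” volume left behind by Windows fast startup or an unclean shutdown, which is where most of the weird problems come from.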
Couch distance and especially screen size can vary a lot. I can clearly see the difference between full HD and DVD resolution (720×576 at best) at the 2-3 meter distance at my parents’ (43" full HD screen). Same goes for 4K vs full HD on my 60" screen.
In any case, my main point was that DVDs are not a viable alternative to streaming services, since all of them offer much better quality. If you really want to replace streaming services at similar or better quality, go for Blu-rays.
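For the curious, the back-of-the-envelope math (my own numbers and helper, using the common ~1 arcminute figure for 20/20 visual acuity):

```python
import math

# A pixel is individually resolvable when it subtends more than roughly
# 1 arcminute - the usual figure for 20/20 visual acuity.

def pixel_arcmin(diag_inch, h_pixels, distance_m, aspect=16 / 9):
    """Angular size of one pixel, in arcminutes, on a 16:9 screen."""
    # horizontal screen width derived from the diagonal
    width_m = diag_inch * 0.0254 * aspect / math.hypot(aspect, 1)
    pitch_m = width_m / h_pixels
    return math.degrees(math.atan2(pitch_m, distance_m)) * 60

# 43" screen at 2.5 m: DVD (720 px wide, ignoring anamorphic scaling)
# vs full HD (1920 px wide)
dvd = pixel_arcmin(43, 720, 2.5)
fhd = pixel_arcmin(43, 1920, 2.5)
```

With these assumptions a DVD pixel comes out around 1.8 arcmin (clearly resolvable) while a full-HD pixel is around 0.7 arcmin, which matches my experience.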
Consistent font, text readable, pixel perfect consistency on close / maximize / minimize buttons. Definitely not (completely) AI-generated.
DVDs have atrocious quality. Blu-rays are where it’s at.
I work in this field. In my company, we use smaller, specialized models all the time. Ignore the VC hype bubble.
Funnily enough, this is also my field, though I am not at uni anymore since I now work in this area. I agree that current literature rightfully makes no claims of AGI.
Calling transformer models (which are definitely not the only feasible type of LLM - Mamba, LLaDA, … exist!) “fancy autocomplete” is very disingenuous in my view. Also, the current AI boom includes far more than the flashy language models the general population directly interacts with, as you surely know. And whether a model is able to “generalize” depends on whether you mean within its objective boundaries or outside of them, I would say.
I agree that a training objective of predicting the next token in a sequence probably won’t be enough to achieve generalized intelligence. However, modelling language is the first and most important step on that path, since we humans use language to abstract and represent problems.
Looking at the current pace of development, I wouldn’t be so pessimistic, though I won’t make claims as to when we will reach AGI. While there may not be a complete theoretical framework for AGI, I believe it will be achieved in a similar way as current systems are, being developed first and explained after.
In the case of reasoning models, definitely. Reasoning datasets weren’t even a thing a year ago and from what we know about how the larger models are trained, most task-specific training data is artificial (oftentimes a small amount is human-generated and then synthetically augmented).
However, I think it’s safe to assume that this has been the case for regular chat models as well - the Self-Instruct and Orca papers are quite old already.
The goalpost has shifted a lot in the past few years, but in the broader and even narrower definition, current language models are precisely what was meant by AI and generally fall into that category of computer program. They aren’t broad / general AI, but definitely narrow / weak AI systems.
I get that it’s trendy to shit on LLMs, often for good reason, but that should not mean we just redefine terms because some system doesn’t fit our idealized, under-informed definition of a technical term.
Not possible - Widevine L1 needs hardware-level DRM, which depends on manufacturer support. So unless actual TVs / set-top boxes ship with Plasma Bigscreen, we’re SOL 🏴☠