There are some AI luddites, but many more people are upset with the "how" of current AI use than with using it at all. AI used to "generate" raises serious ethical problems: it tends to ingest copious amounts of human-generated source material without permission, which it can then replicate quite well with minor alterations, but which it does not and cannot use "for inspiration" in anything like the human sense. It's not a stretch to call this theft, and it also means these models tend to be poor at filling the gaps in what humans have already created without irritating or unpleasant consequences coming along for the ride.
What Jensen is talking about is one of the cases where this kind of slight extrapolation is very useful instead of one where it's borderline criminal. I think we will see many more cases where AI is used, quite effectively, to fill in tedious gaps in the execution of human or even traditional computationally driven creativity. That's AI as a tool to save effort, and not a CEO dream to circumvent hiring real people.
On the matter of "human generated content" the bulk of the legal cases launched are flimsy, based on undocumented assumptions, and totally ignore legal precedents on both sides of the pond.
The only one that looks to have a leg to stand on is the Getty lawsuit where the infringement of *paywalled* images is the point of contention and they claim the Stable Diffusion output includes at least portions of the Getty watermark.
Most everything else is heartburn over freely available content that is/was neither paywalled nor fenced off by robots.txt. If you provide content to all comers with no restrictions, you can't after the fact try to impose restrictions you can't even implement. There are no do-overs in the law; legal systems don't generally work retroactively. (Corporate publishers tried that 100+ years ago in the US, and it only resulted in the First Sale Doctrine.) And in societies where ethics are an afterthought, invoked only when convenient, it is a flimsy challenge to the self-interest of the masses.
Note that the NYT lawsuit admits they only fenced off content via robots.txt *after* the chatbots started making money, which suggests the primary interest isn't ethics or principle, but money grabbing. (Which Getty is at least honest about.)
There is a pervasive theory floating around, particularly in Europe, that just because you built a successful business out of a successful model/set of conditions, you are entitled to profitably exist in perpetuity. That the conditions that allowed you to succeed must be preserved at all costs.
Isn't that the very essence of Luddism?
It comes with consequences. Sooner or later the piper demands his due:
Treating AI as guilty until proven innocent is the eurozone’s next great error
www.telegraph.co.uk
If you look at the past few decades of "anything-but" media angst (Microsoft, Amazon, Google search, ebooks, SpaceX, etc.), they all boil down to new technologies and business models superseding dated assumptions about the behavior and interests of the masses. (No, people will not willingly pay more for a lesser product/service. Remember Windows-N?)
Time changes things, and the business world, for one, is a Darwinian Red Queen's race: you have to keep up or be left behind, and just because something was successful in the past does not entitle anybody to success moving forward.
Without going too far afield: look at the fading business of cable TV distributors, who for decades refused to offer consumers à la carte options, only to watch consumers abandon them altogether for the ultimate à la carte distribution system in the form of content-silo paid and ad-supported streaming services. 75% losses in a decade and counting.
Whining is not a successful business model.