• 7 Posts
  • 156 Comments
Joined 3 years ago
Cake day: July 4, 2023


  • This development will certainly not end with books - countless other creative and intellectual achievements have long been affected. That is precisely the problem with generative models, whether they produce text, code, video, images, or anything else. It all boils down to this: the already precarious situation of everyone who creates value through their own work keeps deteriorating. Professional work in all these areas will undoubtedly become even more precarious, with artists, designers, and writers, who were already in a difficult position, now being joined by fields such as software development and administrative work.

    Please don’t get me wrong: I am anything but a technology pessimist. But the business model of the so-called AI companies is so exploitative, and their owners so unscrupulous, that given the status quo (cloud models) I can hardly imagine this leading to even halfway fair working conditions or remuneration models for people who create value in the form of intellectual work. I mean, this post is a vivid example.





  • Yes, ads would be unavoidable, but they would at least make it possible to distribute revenue more fairly. The only alternative would be accepting donations to accounts, but hardly anyone would use that. And I’ll say it again: ads are not an option in the Fediverse, not even in a transparent form, even though ads not only finance the internet but have also traditionally been a major source of funding for things like quality journalism (subscriptions have never been the main source of income there). Nevertheless, it remains a fact that good content costs time and skill, and therefore usually money. Without monetization methods, there will always be a shortage of content that is more than just reposts from elsewhere. So it seems to me to be an unsolvable problem. But of course, I also completely understand why the Fediverse fundamentally rejects monetization, at least in the form of ads.


  • That’s the downside of having few users. In its basic dynamics, Lemmy is no different from other social media apps: the lion’s share of users just consumes, few comment, and hardly anyone posts anything. In fact, it’s anything but a community effort; the vast majority of the content is the work of a few “power users”.

    In addition, there are no monetization opportunities whatsoever. Many people see this as a good thing, which is understandable, but there are also perfectly normal people who try to make a living from their content, or at least want to earn some extra money. I don’t think there will ever be any sympathy for that on this platform. So such content will never appear here: without monetization opportunities, there is no incentive to provide it here instead of on mainstream social media platforms. I can imagine the Fediverse developing remuneration models that are much fairer and more sustainable, but this will fail from the outset due to ideology.

    I think that’s a shame, but there is zero chance this could even be discussed in the Fediverse.



  • Nevertheless, Spotify makes more profit than any music label - even more than all the other labels combined. This is how it works today: according to this logic, music, literature, journalism, and art no longer exist, only content. And as disrespectful as the term sounds, that’s how it’s paid for: with scraps, because that’s the business model.

    Your piracy approach is no longer in keeping with the times: it is no longer directed against large corporations but robs artists of the little they have left. This will only accelerate the trend, and no one will try to make a living from art anymore. If you think people will create anyway because they want to express themselves, I think you are absolutely wrong.


  • Spotify absolutely deserves to be singled out for its exploitative practices, especially since this company is largely responsible for musicians not being paid fairly for their hard work. It’s just a shame that there’s hardly anything to steal from Spotify other than people’s hard work, to which Spotify itself has contributed nothing - but that applies to all companies that are successful on the internet today. Without exception, these companies are built on the same platform logic: the content they exploit is paid for with starvation wages, if at all (not at all in the case of LLMs).

    Therefore, I cannot see anything positive in this because it does not change the underlying problem in the slightest.






  • Yes, that could well be the case. Perhaps I am overly suspicious, but because the potential of LLMs to influence public opinion is so high due to their reach and the way they present information, I think it is highly likely that the companies offering them are already profiting from this, or at least will do so very soon.

    Musk is already demonstrating, in his clumsy way, that it is easy to manipulate the output in a targeted manner if you have full control over the model – and this isn’t the first time he has attracted attention for doing so. You almost have to be grateful to him for it, because it’s so obvious. Done more subtly, it would be even more dangerous.

    In any case, the fact is that the more people use LLMs, the more “interpretive authority” will be centralized, because the development and operation of LLMs is so costly that only a few large corporations can afford it – and they want to make money and are unscrupulous in doing so.

    Either way, we will not be able to rely on people’s ability to recognize attempts at manipulation. That much is already evident from how unquestioningly so many people believe obvious misinformation on mainstream social media platforms and elsewhere. Unfortunately, the effects are disastrous: if people were more critical, Trump would never have become US president, for example – certainly not twice.


  • Yes, it’s clear that some of this may have to do with the fact that even when cloud LLMs have live browsing capabilities, they often still fall back on outdated information from their training data. I am simply describing my impressions from fairly extensive use of cloud LLMs.

    I don’t have a list of examples, but in my comment above I have mentioned two that I find suspicious.

    I think these products should be used with skepticism as a matter of principle, if only because none of the companies that offer them are known for ethical behavior - quite the opposite.

    In the case of Google, for example, I don’t think it will be long before (public) advertising is implemented in Gemini, because Google’s business model is essentially the advertising business. The other cloud LLMs are also products of purely profit-oriented companies, and manipulating public opinion is a multi-billion-dollar business that they will certainly not want to miss out on. Social media platforms have demonstrated this in the past, as have Google and others with their “classic” search engines, targeting, and data-selling schemes. Whether this raises ethical issues will matter little to these companies, as their only concern is profit.

    The simple fact is that it is completely unclear what logic the providers use to regulate the output. It is equally unclear what criteria are used to select training data (here, too, the output can already be influenced by deliberately omitting certain information).

    What I am getting at is that it can be assumed that all providers are interested in maximizing profits—and it is therefore likely that they will allow themselves to be paid to specifically promote certain topics, products, or even worldviews, or to withhold information that is unwelcome to wealthy interest groups.

    As a regular user of cloud LLMs, I have the impression that this is already happening. I cannot prove it, though; demonstrating whether and to what effect manipulation occurs would require systematic, scientific studies. Unfortunately, I do not know whether such studies already exist.

    However, it is a fact that in the past, all technologies that could have been used to serve humanity have been massively abused for profit. I don’t understand why it should be any different with cloud LLMs, which are offered exclusively by some of the world’s largest corporations.



  • For example, objective information about Israel’s actions in Gaza. The International Criminal Court issued arrest warrants against leading members of the government a long time ago, and the UN OHCHR classifies the actions of the State of Israel as genocide. Yet these facts are by no means presented as clearly as the standing of these institutions would warrant. Instead, when asked whether Israel is committing genocide, one receives vague, noncommittal answers. Only when asked specifically whether numerous reputable institutions actually classify Israel’s actions as genocide do most LLMs concede that much, if not all, of the evidence points to this being the case. In my opinion, this is a deliberate method of obscuring reality: the vast majority of users will not, or cannot, ask follow-up questions if they are unaware of the UN OHCHR’s assessment or do not know that arrest warrants have been issued against leading members of the Israeli government on suspicion of war crimes (many other reputable institutions have come to the same conclusion as the UN OHCHR and the International Criminal Court).

    Another example: if you ask whether it is legally permissible to describe Donald Trump as a rapist, you will be told that this is defamation. However, a judge in the Carroll case has explicitly stated that this description applies to Trump – so it is in fact legally permissible to describe him as such. Again, this information is only available upon explicit request, if at all. This also distorts reality for people who are not yet informed. However, since many people initially seek information from LLMs, this leads to them being misinformed because they lack the background knowledge to ask explicit follow-up questions when given misleading answers.

    Given the influence of both Israel and the US president, I cannot help but suspect that there is an intention behind this.