• 1 Post
  • 52 Comments
Joined 1 year ago
Cake day: May 8th, 2023


  • The argument I see most commonly from people on the fediverse (and which I happen to agree with) is really not about what current copyright laws and treaties say or how they should be interpreted, but about how people think things should be (even if getting there requires changing the laws).

    And it fundamentally comes down to economics - the study of how resources should be distributed. Apart from oligarchs and the wannabe oligarchs who serve as useful idiots for the real oligarchs, pretty much everyone wants a relatively fair and equal distribution of wealth amongst the people (differing between left and right in opinion on exactly how equal things should be, but there is still some common ground). Hardly anyone really wants serfdom or similar where all the wealth and power is concentrated in the hands of a few (obviously it’s a spectrum of how concentrated, but very few people want the extreme position to the right).

    Depending on how things go, AI technologies have the power to serve humanity and lift everyone up equally if they are widely distributed, removing barriers and breaking existing ‘moats’ that let a few oligarchs hoard a lot of resources. Or it could go the other way - oligarchs are the only ones that have access to the state of the art model weights, and use this to undercut whatever they want in the economy until they own everything and everyone else rents everything from them on their terms.

    The first scenario is a utopia, the second a dystopia, and the way AI is regulated is the fork in the road between the two. So of course people are going to cheer for regulation that steers towards the utopia.

    That means things like:

    • Fighting back when the oligarchs try to talk about ‘AI Safety’ meaning that there should be no Open Source models, and that they should tightly control how and for what the models can be used. The biggest AI Safety issue is that we end up in a dystopian AI-fueled serfdom, and FLOSS models and freedom for the common people to use them actually helps to reduce the chances of this outcome.
    • Not allowing ‘AI washing’, where oligarchs take humanity’s collective work, put it through an algorithm, and produce a competing thing that they control - unless everyone has equal access to it. One policy that would achieve this: if you create a model based on other people’s work and want to use that model for a commercial purpose, you must publicly release the model and its weights. That would be a fair trade-off for letting them use that information for training purposes.

    Fundamentally, all of this is just exacerbating cracks in the copyright system as a policy. I personally think that a better system would look like this:

    • Everyone is paid a Universal Basic Income, funded by taxes on every organisation and individual making a profit, in proportion to their profits.
    • All forms of intellectual property rights (except trademarks) are abolished - copyright, patents, and trade secrets are no longer enforced by the law. The UBI replaces it as compensation to creators.
    • It is illegal to discriminate against someone for publicly disclosing a work they have access to, as long as they didn’t accept valuable consideration to make that disclosure. So for example, if an OpenAI employee publicly released the model weights for one of OpenAI’s models without permission from anyone, it would be illegal for OpenAI to demote / fire / refuse to promote / pay them differently on that basis, and for any other company to factor that into their hiring decision. There would be exceptions for personally identifiable information (e.g. you can’t release the client list or photos of real people without consequences), and disclosure would have to be public (i.e. not just to a competitor, it has to be to everyone) and uncompensated (i.e. you can’t take money from a competitor to release particular information).

    If we had that policy, I’d be okay with AI companies slurping up everything and training model weights on it.

    Under current policies, however, we are being pushed down the dystopian path, where AI companies take what they want and never give anything back.


  • There’s a lot one side of the market can do with collusion to drive up prices.

    For example, the housing supply available on the market fluctuates over time - once someone has a long-term tenancy, their price is locked in until the next legal opportunity to change it. Consider a low season - fewer people living in an area, higher vacancy rate. In a competitive market, tenants who plan on staying want a long-term contract, and the landlord who can offer one will win the tenant. That landlord gets income during the low period, but forgoes the higher rent they might get during the high period. A landlord who instead takes a hardline policy of making tenants renew during the high period would lose out - they would simply miss out on rent entirely during the low period, likely earning less overall.

    Now if the landlords form a cartel during the low period, so that it is not possible to lease a rental property long-term, then tenants have no choice but to renegotiate the price during the high period. Landlords no longer have to choose between a consistent lower rent and no revenue during the low period followed by higher revenue during the high period - instead they collect the lower rent during the low period (at a slightly lower occupancy rate, shared across all the landlords) AND the higher rent during the high period.
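    As a rough sketch of that dynamic - with entirely made-up rents, season lengths, and occupancy, not real market data - a landlord's annual revenue under each strategy might look like:

```python
# Hypothetical figures for illustration only.
LOW_RENT = 800     # monthly market rent in the low season
HIGH_RENT = 1200   # monthly market rent in the high season
FLAT_RENT = 950    # long-term contract rate spanning both seasons
SEASON_MONTHS = 6  # each season lasts six months

def competitive_landlord():
    # Offers a long-term lease: steady income across both seasons.
    return FLAT_RENT * SEASON_MONTHS * 2

def holdout_landlord():
    # Insists on renegotiating in the high season; the unit sits empty
    # in the low season because tenants prefer long-term leases.
    return HIGH_RENT * SEASON_MONTHS

def cartel_landlord(occupancy_low=0.9):
    # Cartel bans long-term leases: tenants must renegotiate, so every
    # landlord collects the low rent (at slightly reduced occupancy,
    # shared across the cartel) AND the high rent.
    return LOW_RENT * SEASON_MONTHS * occupancy_low + HIGH_RENT * SEASON_MONTHS

print(competitive_landlord())  # 11400
print(holdout_landlord())      # 7200
print(cartel_landlord())       # 11520.0
```

    With these assumed numbers, colluding beats both honest strategies even after losing some low-season occupancy - which is exactly the incentive to form the cartel.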




  • It could also go the other way: someone could sue Google or other companies. Web browsers and ad blockers run on the client, not the server, generally with the authorisation of the owner of that client system. An ad blocker is a technical measure to prevent unauthorised code (i.e. unwanted ads) from running on the system, imposed by the system’s owner. Anti-ad-blocker tech is really an attempt to run software on someone’s computer by circumventing measures the owner has deployed to prevent that software from running, without authorisation to run it. That sounds very similar to the definition of computer fraud / abuse / unauthorised access to a computer system / illegal hacking in many jurisdictions.



  • Maybe technically in Florida and Texas, given that they passed a law to try to stop sites deplatforming Trump.

    https://www.scstatehouse.gov/sess125_2023-2024/bills/3102.htm

    “The owner or operator of a social media website who contracts with a social media website user in this State is subject to a private right of action by a user if the social media website purposely: … (2) uses an algorithm to disfavor, shadowban, or censure the user’s religious speech or political speech”.

    In May 2022, the US Court of Appeals for the 11th Circuit ruled to strike down the law (and there was a similar 5th Circuit judgement), but just this month the US Supreme Court vacated the Court of Appeals judgement (i.e. reinstated the law) and remanded the case back to the respective Courts of Appeals. That said, the grounds for doing so were that the courts had not done the proper analysis, and once they do, the laws might be struck down again. But for now, they are technically not struck down.

    It would be ironic if, after conservatives passed this law, stacked the Supreme Court, and got the challenge to it vacated, its first major use was against Xitter for censoring Harris!



  • Would you say it’s unfair to base pricing on any attribute of your customer / customer base?

    A business being in a position to be able to implement differential pricing (at least beyond how they divide up their fixed costs) is a sign that something is unfair. The unfairness is not how they implement differential pricing, but that they can do it at all and still have customers.

    YouTube can implement differential pricing because there is a power imbalance between them and consumers - if the consumers want access to a lot of content provided by people other than YouTube through YouTube, YouTube is in a position to say ‘take it or leave it’ about their prices, and consumers do not have another reasonable choice.

    The reason they have this imbalance of market power and can implement differential pricing is because there are significant barriers to entry to compete with YouTube, preventing the emergence of a field of competitors. If anyone on the Internet could easily spin up a clone of YouTube, and charge lower prices for the equivalent service, competitors would pop up and undercut YouTube on pricing.

    The biggest barrier is network effects - YouTube has the most users because they have the most content. They have the most content because people only upload it to them because they have the most users. So this becomes a cycle that helps YouTube and hinders competitors.

    This is a classic case where regulators should step in. Imagine if large video providers were required to federate uploaded content over ActivityPub, so that anyone could set up their own YouTube competitor with all the same content. The price of the cheapest YouTube clones would quickly drop, and no one would have a reason to use YouTube.


  • would not be surprised if regional pricing is pretty much just above the break even mark

    In an efficient market, that’s how much the service would cost for everyone, because otherwise I could just go to a competitor of YouTube for less, and YouTube would have to lower their pricing to win customers back, and so on until no one could lower their prices further without losing money.

    Unfortunately, efficient markets are just a neoliberal fantasy. In real life, there are network effects - YouTube has people uploading videos to it because it has the most viewers, and it has the most viewers because it has the most videos. It’s practically impossible for anyone to compete with them effectively because of this, and this is why they can put up their prices in some regions to get more profit. The proper solution is for regulators to step in and require things like data portability (e.g. requiring monopolists to publish videos they receive over open standards like ActivityPub), but regulatory capture makes that unlikely. In a just world, this would happen and their pricing would be close to the costs of running the platform.

    So the people paying higher regional prices are paying money in a just world they shouldn’t have to pay, while those using VPNs to pay less are paying an amount closer to what it should be in a just world. That makes the VPN users people mitigating Google’s abuse, not abusers.


  • Yes, but for companies like Google, the vast majority of systems administration and SRE work is done over the Internet from wherever staff are, not by someone local (excluding things like physical rack installation or pulling fibre, which is a minority of total effort). And generally the costs of bandwidth and installing hardware are higher in places with a smaller tech industry. For example, when Google on-sells their compute services through GCP (at prices likely proportional to costs), they charge about 20% more for an n1-highcpu-2 instance in Mumbai than in Oregon, US.


  • that’s abuse of regional pricing

    More like regional pricing is an attempt to maximise value extraction from consumers to best exploit their near monopoly. The abuse is by Google, and savvy consumers are working around the abuse, and then getting hit by more abuse from Google.

    Regional pricing is done as a way to create differential pricing - all businesses dream of extracting more money from wealthy customers while still being able to make a profit on less wealthy ones, rather than driving them away with high prices. They find various ways to distinguish wealthy from less wealthy - for example, whether you come from a country with a higher average income, or whether your User-Agent or browser fingerprint suggests an expensive phone - and charge the wealthy more.

    However, you can be assured that they are charging the people they’ve identified as less wealthy (e.g. in a low average income region) more than their marginal cost. Since YouTube is primarily going to be driven by marginal rather than fixed costs (it is very bandwidth and server heavy), and there is no reason to expect users in high-income locations cost YouTube more, it is a safe assumption that the gap between the regional prices is all extra profit.

    High profits are a result of lack of competition - in a competitive market, they wouldn’t exist.

    So all this comes full circle to Google exploiting a non-competitive market.
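    To make the arithmetic concrete - with made-up prices and an assumed uniform marginal cost, not Google's real figures - if serving a user costs roughly the same everywhere, the entire regional price gap is margin:

```python
# Hypothetical subscription prices (not YouTube's actual figures).
price_high_income_region = 14.0  # monthly price charged in a wealthy region
price_low_income_region = 3.0    # monthly price charged in a poorer region
marginal_cost_per_user = 2.0     # assumed bandwidth/server cost, same everywhere

margin_high = price_high_income_region - marginal_cost_per_user
margin_low = price_low_income_region - marginal_cost_per_user

# If costs don't differ by region, the price gap is pure extra profit.
print(margin_high - margin_low)  # 11.0
```

    The point of the sketch: as long as the low price still exceeds marginal cost, the high regional price is not recovering costs - it is capturing surplus that competition would otherwise erode.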


  • they have ran out of VC money

    You know YouTube is owned by Google, not VC firms right?

    Big companies sometimes keep a division / subsidiary less profitable for a time for a strategic reason, and then tighten the screws.

    They generally only do this if they believe it will eventually be profitable over the long term (or support another part of the strategy so that the whole is profitable). Otherwise they would have sold or shut it down earlier - the plan is always, eventually, to be profitable.

    However, while an unprofitable business always means either a plan to tighten screws, or to sell it / shut it down, tightening screws doesn’t mean it is unprofitable. They always want to be more profitable, even if they already are.


  • A1kmm@lemmy.amxl.com to Asklemmy@lemmy.ml · “Are you a ‘tankie’” · 3 months ago

    No

    On economic policy I am quite far left - I support a low Gini coefficient, achieved through a mixed economy, but with state provided options (with no ‘think of the businesses’ pricing strategy) for the essentials and state owned options for natural monopolies / utilities / media.

    But on social policy, I support social liberties and democracy. I believe the government should intervene, with force if needed, to protect people’s rights from interference by others (including rights to bodily safety and autonomy, the right not to be discriminated against, the right to a clean and healthy environment, and the right not to be exploited or misled by profiteers), and to redistribute wealth from those with a surplus to those in need / to fund the legitimate functions of the state. Outside of that, people should have social and political liberties.

    I consider being a ‘tankie’ to require both the leftist aspect (✅) and the authoritarian aspect (❌), so I don’t meet the definition.



  • I think any prediction based on a ‘singularity’ neglects to consider the physical limitations, and just how long the journey towards significant amounts of AGI would be.

    The human brain has an estimated 100 trillion neuronal connections - probably a good order-of-magnitude estimate for the parameter count of an AGI model.

    If we consider a current GPU, e.g. the 12 GB RTX 3060, it can hold about 24 billion parameters at 4-bit quantisation (in practice somewhat fewer), and uses 180 W of power. So an AGI might need around 4,167 such GPUs and use about 750 kW of power to operate - a super-intelligent machine might use more. That is a farm of 2,500 300 W solar panels, while the sun is shining, just for the equivalent of one person.

    Now, to pose a real threat to billions of humans, you’d need more than one person’s worth of intelligence - maybe an army equivalent to 1,000 people, powered by about 4.2 million GPUs and 2.5 million solar panels.
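    The back-of-the-envelope arithmetic, using the same assumed figures (100 trillion parameters, 24 billion parameters per 12 GB GPU at 4-bit quantisation, 180 W per GPU, 300 W per panel), works out as:

```python
HUMAN_SYNAPSES = 100e12  # ~100 trillion connections ~ AGI parameter count
PARAMS_PER_GPU = 24e9    # what a 12 GB card holds at 4-bit quantisation
GPU_WATTS = 180
PANEL_WATTS = 300

gpus_per_agi = HUMAN_SYNAPSES / PARAMS_PER_GPU  # GPUs per person-equivalent
power_per_agi = gpus_per_agi * GPU_WATTS        # watts per person-equivalent
panels_per_agi = power_per_agi / PANEL_WATTS    # panels per person-equivalent

army = 1000  # an 'army' of 1,000 person-equivalents
print(round(gpus_per_agi))           # 4167 GPUs
print(round(power_per_agi / 1000))   # 750 kW
print(round(panels_per_agi))         # 2500 panels
print(round(gpus_per_agi * army))    # 4166667 GPUs for the army
print(round(panels_per_agi * army))  # 2500000 panels for the army
```

    All of these are order-of-magnitude estimates at best; the point is only that the hardware and power footprint is enormous, not invisible.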

    That is not going to materialise out of the air too quickly.

    In practice, as we get closer to an AGI or ASI, there will be multiple separate deployments of similar sizes (within an order of magnitude), and they won’t be aligned to each other - some systems will be adversaries of any system executing a plan to destroy humanity, and will be aligned to protect against harm (AI technologies are already widely used for threat analysis). So you’d have a bunch of malicious systems, and a bunch of defender systems, going head to head.

    The real AI risks, which I think many of the people ranting about singularities want to obscure, are:

    • An oligopoly of companies gains dominance over the AI space and perpetuates a ‘rich get richer’ cycle, accumulating wealth and power to the detriment of society. OpenAI, Microsoft, Google and AWS are probably all battling for that. Open models are the way to fight it.
    • People can no longer trust their eyes when it comes to media; existing problems of fake news, deepfakes, and so on become so severe that they undermine any sense of truth. That might fundamentally shift society, but I think we’ll adjust.
    • Doing bad stuff becomes easier. That might be scamming, but at the more extreme end it might be designing weapons of mass destruction. On the positive side, AI can help defenders too.
    • Poor quality AI might be relied on to make decisions that affect people’s lives. Best handled through the same regulatory approaches that prevent companies and governments doing the same with simple flow charts / scripts.

  • I think the most striking thing is that for outsiders (i.e. non-repo members), the acceptance rates for gendered profiles are lower by a large and statistically significant amount compared to non-gendered ones, regardless of the gender shown on Google+.

    The definition of ‘gendered’ basically means the profile includes a name or photo. In other words, putting your name and/or photo in your GitHub profile is significantly correlated with decreased chances of a PR being merged as an outsider.

    I suspect this definition of gendered also correlates heavily with other forms of discrimination. For example, name or photo likely also reveals ethnicity or skin colour in many cases. So an alternative hypothesis is that there is racism at play in deciding which PRs people, on average, accept. This would be a significant confounding factor with gender if the gender split of Open Source contributors is different by skin colour or ethnicity (which is plausible if there are different gender roles in different nations, and obviously different percentages of skin colour / ethnicity in different nations).

    To really prove a gender effect, they could run an experiment: randomly assign participants to submit PRs from either a gendered or a non-gendered profile, and measure the results. If that is too hard, future research could at least try harder to control for confounding effects.
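    As a sketch of how such an experiment's results could be checked for significance - the acceptance counts below are entirely made up - a two-proportion z-test needs only the standard library:

```python
import math

def two_proportion_z(accepted_a, total_a, accepted_b, total_b):
    """Z statistic for the difference between two acceptance rates."""
    p_a, p_b = accepted_a / total_a, accepted_b / total_b
    pooled = (accepted_a + accepted_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical counts: outsider PRs from gendered vs non-gendered profiles.
z = two_proportion_z(580, 1000, 640, 1000)
print(round(z, 2))  # -2.75
```

    |z| > 1.96 corresponds to p < 0.05 for a two-sided test, so with these assumed counts the gendered profiles' lower acceptance rate would register as significant - though significance alone still wouldn't untangle gender from ethnicity without the randomised design described above.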




  • The government just has to print the money, and use it for that

    Printing money is a tax on those who hold cash, or assets denominated directly in the currency being printed. Those who mostly hold other assets (say, the means of production, land and buildings, or indirect equivalents of those such as stock) are unaffected. This makes printing money a tax that disproportionately affects the poor.
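    A toy illustration of that dilution effect, with a made-up money supply and holdings:

```python
money_supply = 1_000_000  # total currency in circulation (arbitrary units)
newly_printed = 100_000   # a 10% expansion to fund government spending

# Each existing currency unit now buys a smaller share of the economy.
dilution = money_supply / (money_supply + newly_printed)

cash_savings = 10_000    # savings held as cash lose purchasing power
property_value = 10_000  # property is repriced upward, so its owner is unaffected

print(round(cash_savings * dilution))  # 9091
print(property_value)                  # 10000
```

    The cash holder has effectively paid ~909 units of tax without any legislation; the asset holder paid nothing - which is the regressive effect described above.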

    What the government really needs to do is tax the rich. Many in the top one percent of income fight that, and unfortunately, despite the democratic principle of one person, one vote, in practice they find ways to capture the government in many countries - through lobbying access, control of the media, and exploitation of weaknesses in the electoral system such as non-proportional voting and gerrymandering.

    instead of bailing out the capitalists over and over.

    Bailing out large enterprises that are valuable to the public is fine, as long as the shareholders aren’t rewarded for investing in a mismanaged but ‘too big to fail’ business (i.e. they lose most of their investment), and the end result is that the public owns it and puts in competent management who act in the public interest. Over time, the public could pay forward previous generations’ investments, and eventually would own a huge suite of public services.