Current AI models are simply too unwieldy, brittle and malleable, academic and corporate research shows. Security was an afterthought in their training as data scientists amassed breathtakingly complex collections of images and text. They are prone to racial and cultural biases, and easily manipulated.

  • ConsciousCode@beehaw.org · 1 year ago

    It sounds simple, but data conditioning like that is how you get Scunthorpe blacklisted, and even if it’s executed perfectly, the effects on the model are unpredictable. It could run into a kind of “race blindness”, where the model has no idea these words are bad and as a result is incapable of accommodating humans when the topic comes up. Suppose in 5 years there’s a therapist AI (not ideal, but mental health is horribly understaffed and most people can’t afford a PhD therapist) that gets a client who is upset because they were called a f**got at school; it would have none of the cultural context required to help.
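
    To illustrate, a minimal sketch of that failure mode in Python (the blacklist here is a toy stand-in, not anyone’s real word list):

    ```python
    # Naive substring blacklisting: the classic "Scunthorpe problem".
    BLACKLIST = ["cunt"]  # toy stand-in for a real moderation list

    def naive_filter(text: str) -> bool:
        """Flag text that contains any blacklisted substring."""
        lowered = text.lower()
        return any(bad in lowered for bad in BLACKLIST)

    # False positive: the town name happens to contain the substring.
    print(naive_filter("Scunthorpe is a town in Lincolnshire"))  # True
    ```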

    Techniques like “Constitutional AI” and RLHF, applied after the foundation model is trained, really are the best approach here: they let you first get an unfiltered view of a very biased culture, then shape the model’s attitudes toward it afterwards.
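
    For a feel of how that post-hoc shaping works, here’s a toy sketch. Real RLHF fine-tunes the model’s weights against a learned reward model; the best-of-n sampling below is a much simpler cousin of the same idea, and every function and string in it is an invented stand-in:

    ```python
    # Toy sketch of shaping behavior *after* pretraining: sample several
    # candidates from a fixed "base model" and keep the one a preference
    # (reward) function scores highest. Real RLHF instead updates the
    # model's weights against such a reward model.

    def base_model(prompt: str) -> list[str]:
        # Stand-in for sampling n completions from a pretrained model.
        return [
            "That word is offensive and I refuse to discuss it.",
            "I'm sorry that happened; being targeted by that slur is "
            "hurtful. Do you want to talk about it?",
            "Words can't hurt you. Ignore it.",
        ]

    def reward_model(completion: str) -> float:
        # Stand-in for a preference model trained on human feedback;
        # here just a crude heuristic that rewards empathetic language.
        cues = ("sorry", "hurtful", "talk about it")
        return float(sum(cue in completion.lower() for cue in cues))

    def best_of_n(prompt: str) -> str:
        # Keep the candidate the reward model prefers.
        return max(base_model(prompt), key=reward_model)

    print(best_of_n("A student says they were called a slur at school."))
    ```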

    • sciawp@lemm.ee · 1 year ago (edited)

      I agree with you, but I’m just gonna say: with basic regex (hell, even without regex) you can easily find bad words without running into the problem you mentioned above.
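
      For instance, matching on word boundaries (the word list here is a toy stand-in, not a real filter):

      ```python
      import re

      # Whole-word matching via \b flags the word on its own but not as
      # a substring of an innocent word like "Scunthorpe".
      BLACKLIST = ["cunt"]  # toy stand-in
      pattern = re.compile(
          r"\b(?:" + "|".join(map(re.escape, BLACKLIST)) + r")\b",
          re.IGNORECASE,
      )

      print(bool(pattern.search("Scunthorpe United won")))  # False
      print(bool(pattern.search("you absolute cunt")))      # True
      ```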

      Word filters tend to suck in online games and stuff because they have to contend with players actively trying to evade them, but I think even that could be improved with a little effort.
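
      One cheap improvement, sketched below: normalize common evasions (letter substitutions, stretched letters, separator characters) before running the word-boundary filter. The substitution table and the placeholder “badword” are made-up stand-ins:

      ```python
      import re

      # Map common letter substitutions back to plain letters.
      SUBS = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                            "5": "s", "7": "t", "@": "a", "$": "s"})
      FILTER = re.compile(r"\bbadword\b", re.IGNORECASE)  # toy word list

      def normalize(text: str) -> str:
          text = text.translate(SUBS)                 # b4dw0rd -> badword
          text = re.sub(r"(\w)\1{2,}", r"\1", text)   # baaadword -> badword
          text = re.sub(r"[^\w\s]", "", text)         # b.a.d.w.o.r.d -> badword
          return text

      for attempt in ("b4dw0rd", "b.a.d.w.o.r.d", "baaaadword"):
          print(attempt, bool(FILTER.search(normalize(attempt))))  # all True
      ```

      The trade-off: the more aggressively you normalize, the more Scunthorpe-style false positives creep back in.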