• 0 Posts
  • 8 Comments
Joined 1 year ago
Cake day: July 3rd, 2023

  • I don’t like the idea of restricting the model’s corpus further. Rather, I think it would be better to use a bigger corpus but add the date of origin of each element as further context.

    Separately, I think it could be worthwhile to train a second LLM to recognize biases in content, and then use its output as further context for the main LLM when it ingests that content. I’m not sure how to avoid bias in that second LLM, though. Maybe complete freedom from bias is an ideal you can only approach, never reach.
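    A minimal sketch of what "date plus bias-score as context" could look like in a data-preparation step. Everything here is hypothetical: the names, and especially detect_bias, which stands in for the second model with a toy keyword heuristic.

```python
from dataclasses import dataclass

@dataclass
class CorpusElement:
    text: str
    origin_date: str  # ISO date, e.g. "2019-06-01"

def detect_bias(text: str) -> float:
    """Placeholder for the hypothetical second LLM: returns a bias
    score in [0, 1]. A real system would call a trained classifier;
    this toy version just counts loaded phrases."""
    loaded_terms = {"obviously", "everyone knows", "clearly"}
    hits = sum(term in text.lower() for term in loaded_terms)
    return min(1.0, hits / 3)

def annotate(element: CorpusElement) -> str:
    """Prepend date-of-origin and bias context so the main model
    ingests each element alongside its metadata."""
    score = detect_bias(element.text)
    return (f"[origin: {element.origin_date}] "
            f"[bias-score: {score:.2f}]\n{element.text}")

sample = CorpusElement("Obviously this cure works.", "2019-06-01")
print(annotate(sample))
```

    The point is only that the metadata travels with the text, so the main model can weigh a 2019 claim differently from a 2023 one.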





  • You’re trying to apply objectivity to a very subjective area. I’m not saying it’s impossible, and you should by all means try it, but it might be a good idea to first try something with a better chance of success, such as this:

    How about an open platform for scientific review and tracking? Like, whenever a new discovery or advance is announced, the site would cut through the hype and report on peer review, feasibility, flaws in methodology, the ways in which it’s practical or impractical, and how close we are to actual usage (state of clinical trials, demonstrated practical applications, etc.).

    And it would keep being updated, somewhat like Wikipedia, as more research comes in. It would need a more robust review system to avoid the problems Wikipedia has. I don’t have a solution for that, but I believe there’s got to be a way to do it that’s resistant to manipulation.
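    The "announce, then keep updating" idea boils down to a record per claim plus an append-only history of review events. A rough sketch, with all names and fields being assumptions about how such a platform might model its data:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReviewEvent:
    date: str     # ISO date of the update
    stage: str    # e.g. "preprint", "peer review", "replication", "phase 2 trial"
    summary: str  # what changed: flaws found, feasibility notes, etc.

@dataclass
class TrackedClaim:
    title: str
    announced: str  # date the discovery was announced
    history: List[ReviewEvent] = field(default_factory=list)

    def update(self, event: ReviewEvent) -> None:
        """Append a new review event; history is never rewritten,
        which is one cheap defense against manipulation."""
        self.history.append(event)

    def current_status(self) -> str:
        """Latest stage, or 'announced' if nothing has happened yet."""
        return self.history[-1].stage if self.history else "announced"

claim = TrackedClaim("Room-temperature superconductor X", "2023-07-01")
claim.update(ReviewEvent("2023-08-15", "replication attempts",
                         "Independent labs fail to reproduce results"))
print(claim.current_status())  # replication attempts
```

    An append-only history also gives readers the timeline itself, not just the current verdict, which is most of what cutting through hype requires.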