So, here’s a fuck-up from earlier this year:
A family friend came over one day. I was out by my car, having just got back from the shops. I hadn’t seen him in years, and he asked how I was. I responded with “Surviving”, then said something about my degree progress and stuff.
He went a bit quiet and awkward, eventually making his way inside while I finished what I was doing.
I went inside and walked past my parents, who were talking to him. Then I remembered something. His partner had been diagnosed with a brain tumour that had metastasised from breast cancer. I also remembered that a few days earlier, my parents had visited her in the palliative care unit, because she had lost the fight. I realised then that he had clearly come around to tell my parents that she had passed away. She fucking died and I responded with “Surviving”.
The thing that annoys me most is that there have been studies on LLMs showing that when models are trained on their own generated output, each successive generation produces increasingly noisy output.
Sources (unordered):
Whatever nonsense Muskrat is spewing, it is factually incorrect. He won’t be able to successfully retrain any model on generated content. At least not an LLM, if he wants a successful product. If anything, he’ll end up producing a model that is heavily trained on censored datasets.
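To make the mechanism behind those studies concrete, here’s a toy sketch I put together myself (my own illustration, not code from any of the papers, and it uses a simple token-frequency model rather than an actual LLM, so it exaggerates how abruptly things die off). Each “generation” is trained only on the previous generation’s output: rare tokens drop out of the sampled corpus, the retrained model assigns them zero probability, and they never come back.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "language": 50 tokens with a long-tailed (Zipf-like) frequency distribution.
vocab = 50
probs = 1.0 / np.arange(1, vocab + 1)
probs /= probs.sum()

for generation in range(15):
    # "Generate" a training corpus by sampling from the current model...
    corpus = rng.choice(vocab, size=300, p=probs)
    # ...then "retrain" by re-estimating token frequencies from that corpus alone.
    counts = np.bincount(corpus, minlength=vocab)
    probs = counts / counts.sum()
    surviving = int((probs > 0).sum())
    print(f"generation {generation}: {surviving}/{vocab} tokens still produced")
```

The count of surviving tokens only ever goes down, because once a token’s estimated probability hits zero it can never be sampled again. Real models degrade far more gradually than this toy does, but the drift is in the same direction, which is why retraining on generated content is a losing game.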