• foggy@lemmy.world · 21 points · 2 months ago

    I tell people who work under me to scrutinize it like it’s a Google search result chosen for them using the old “I’m Feeling Lucky” button.

    Just yesterday I was having trouble enrolling a new agent in my ELK stack. It wanted me to obliterate a config and replace it with something else. That would literally have broken everything.
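
    A minimal sketch of the kind of scrutiny I mean, with hypothetical paths rather than my actual setup: diff the AI-proposed file against the live config before anything gets overwritten.

    ```python
    # Sketch: review an AI-suggested config change before applying it.
    # Both paths are placeholders, not a real ELK deployment.
    import difflib
    from pathlib import Path

    current = Path("/etc/elastic-agent/elastic-agent.yml").read_text().splitlines(keepends=True)
    proposed = Path("/tmp/ai_suggested_agent.yml").read_text().splitlines(keepends=True)

    # Print a unified diff so every line the AI wants to change is visible.
    for line in difflib.unified_diff(current, proposed, fromfile="current", tofile="proposed"):
        print(line, end="")
    ```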

    It’s like copying and pasting Stack Overflow into prod.

    AI is useful. It is not trustworthy.

      • criss_cross@lemmy.world · 6 points · 2 months ago

        When it works it can save time automating annoying tasks.

        The problem is the “when it works” part. It’s like having to do a code review mid-task every time the dumb machine does something.

        • finitebanjo@lemmy.world · 1 point · 2 months ago

          So it causes more harm or loss than benefit. So it’s not useful.

          “When it works” it creates the need for oversight because “when it doesn’t work” it creates massive liabilities.

      • foggy@lemmy.world · 1 point · 2 months ago

        If you would say the same about Stack Overflow and Google, then sure.

        Otherwise, absolutely not.

    • Empricorn@feddit.nl · 7 points · 2 months ago

      I know nothing about stacking elk, though I’m sure it’s easier if you sedate them first. But yeah, common sense and a healthy dose of skepticism seems like the way to go!

    • The Picard Maneuver@piefed.world (OP) · 3 points · 2 months ago

      Yeah, you just have to practice a little skepticism.

      I don’t know what its actual error rate is, but if we say hypothetically that it gives bad info 5% of the time: you wouldn’t want a calculator or an encyclopedia that was wrong that often, but you would really value an advisor who pointed you toward the right info 95% of the time.
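
      To put that hypothetical 5% in perspective, here’s some toy arithmetic (made-up rate, assuming independent errors): the misses stack up fast over a day of questions, which is exactly why the skepticism matters.

      ```python
      # Toy arithmetic: chance of at least one bad answer, assuming an
      # entirely hypothetical 5% per-answer error rate and independence.
      error_rate = 0.05
      for n_queries in (1, 10, 20, 50):
          p_any_error = 1 - (1 - error_rate) ** n_queries
          print(f"{n_queries:>3} queries -> {p_any_error:.0%} chance of at least one bad answer")
      ```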

      • deranger@sh.itjust.works · 7 points · 2 months ago

        5% error rate is being very generous, and unlike a human, it won’t ever say “I’m not sure if that’s correct.”

        Considering the insane amount of resources AI takes, and the fact it’s probably ruining the research and writing skills of an entire generation, I’m not so sure it’s a good thing, not to mention the implications it also has for mass surveillance and deepfakes.

    • ccunning@lemmy.world · 1 point · 2 months ago

      I think of it like talking to some random know-it-all who sidles up next to you at the bar. Yeah, they may have interesting stories, but are you really going to take legal advice from them?

    • redsand@lemmy.dbzer0.com (banned from community) · 7 points · 2 months ago

      Skynet takes this as an insult. Next you’ll imply that it’s an Oracle product.

  • ORbituary@lemmy.dbzer0.com · 12 points · 2 months ago

    Amanitas won’t kill you. You’d be terribly sick if you didn’t prepare them properly, though.

    Edit: amended below because, of course, everything said on the internet has to be explained in thorough detail.

    • luciferofastora@feddit.org · 4 points · 2 months ago

      Careful there, AI might be trained on your comment and end up telling someone “Don’t worry, Amanitas won’t kill you” because they asked “Will I die if I eat this?” instead of “Is this safe to eat?”

      (I’m joking. At least, I hope I am.)

          • anomnom@sh.itjust.works · 3 points · 2 months ago

            Yeah, thinking that these things have actual knowledge is wrong. I’m pretty sure that even if an LLM had only ever ingested (heh) data saying these were deadly, once it has ingested (still funny) other information about controversially deadly things, it might apply that pattern to unrelated data, especially if you ask whether it’s controversial.

            • luciferofastora@feddit.org · 2 points · 2 months ago

              They have knowledge of a sort: the probability of words and phrases appearing in a larger context of other phrases. They probably have a more extensive knowledge of language patterns than most humans do. That’s why they’re so good at coming up with texts for a wide range of prompts. They know how to sound human.

              That in itself is a huge achievement.

              But they don’t know the semantics, the world-context outside of the text, or why it’s critical that a certain section of the text must refer to an actually extant source.

              The pitfall here is that users might not be aware of this distinction. Even if they are, they might not have the knowledge needed to verify the output themselves. The machine seems smart enough to understand you and respond appropriately, but we must be aware of just which kind of smart we’re talking about.

  • Bennyboybumberchums@lemmy.world · 8 points · 2 months ago

    I once asked AI if there were any documented cases of women murdering their husbands. It said NO. I challenged it multiple times. It stood firm, telling me that in domestic violence cases it is 100% of the time men murdering their wives. I asked “What about Katherine Knight?” and it said, I shit you not, “You’re right, a woman was found guilty of killing her husband in Australia in 2001 by stabbing him, then skinning him and attempting to feed parts of his body to their children.”…

    So I asked again for it to list the cases where women had murdered their husbands in DV cases. And it said… wait for it… “I can’t find any cases of women murdering their husbands in domestic violence cases…” and then told me about all the horrible shit that happens to women at the hands of assholes.

    I’ve had this happen loads of times, over various subjects. Usually followed by “Good catch!” or “You’re right!” or “I made an error”. This was the worst one though, by a lot.

    • The Picard Maneuver@piefed.world (OP) · 4 points · 2 months ago

      It’s such weird behavior. I was troubleshooting something yesterday and asked an AI about it, and it gave me a solution that it claimed it had used for the same issue for 15 years. I corrected it (“You’re not real and certainly weren’t around 15 years ago”), and it did the whole “you’re right!” thing, but then immediately went back to speaking the same way.

  • Aceticon@lemmy.dbzer0.com · 7 points · 2 months ago

    The single most important thing (IMHO), but one that isn’t really widely talked about, is that the error distribution of LLMs in terms of severity is uniform: in other words, an LLM is equally likely to make a minor mistake of little consequence as it is to make a deadly one.

    This is not so with humans: even the most ill-informed person avoids certain mistakes because they’re obviously wrong (say, suggesting glue as a pizza ingredient, or telling people voicing suicidal thoughts to “kill yourself”), and beyond that, they pay far more attention to avoiding mistakes in important things than in smaller ones, so the distribution of human mistakes in terms of consequence is not uniform.

    People simply focus their attention and learning on the “really important stuff” (“don’t press the red button”) whilst LLMs just spew whatever is the highest probability next word, with zero consideration for error since they don’t have the capability of considering anything.

    This by itself means that LLMs are only suitable for things where a high probability of outputting the worst kind of mistake is not a problem, for example when the LLM’s output is reviewed by a domain specialist before being used, or when it’s simply mindless entertainment.
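
    A toy model of that difference (the numbers are made up and purely illustrative): give both the same overall chance of erring, but let the human concentrate attention on the high-stakes cases.

    ```python
    import random

    random.seed(0)
    SEVERITIES = ["trivial", "annoying", "costly", "deadly"]

    def llm_error_severity():
        # Toy assumption: when an LLM errs, severity is uniform across outcomes.
        return random.choice(SEVERITIES)

    def human_error_severity():
        # Toy assumption: humans pay more attention the higher the stakes,
        # so their errors skew heavily towards the trivial end.
        return random.choices(SEVERITIES, weights=[70, 25, 4, 1])[0]

    trials = 100_000
    llm_deadly = sum(llm_error_severity() == "deadly" for _ in range(trials))
    human_deadly = sum(human_error_severity() == "deadly" for _ in range(trials))

    print(f"deadly share of LLM errors:   {llm_deadly / trials:.1%}")   # ~25%
    print(f"deadly share of human errors: {human_deadly / trials:.1%}") # ~1%
    ```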

    • The Picard Maneuver@piefed.world (OP) · 3 points · 2 months ago

      I once saw a list of instructions being passed around that were intended to be tacked on to any prompt: e.g. “don’t speculate, don’t estimate, don’t fill in knowledge gaps”

      But you’d think it would make more sense to bake that into the weights rather than putting it in your prompt and hoping it works. As it stands, it sometimes feels like making a wish on a monkey’s paw and trying to close a bunch of unfortunate cursed loopholes in advance.
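
      For what it’s worth, a minimal sketch of that prompt-side workaround; call_llm is a stand-in for whatever client you actually use, not a real API.

      ```python
      # Sketch of the "tack guard instructions onto every prompt" workaround.
      GUARDRAILS = (
          "Don't speculate. Don't estimate. Don't fill in knowledge gaps. "
          "If you are not sure, say you are not sure."
      )

      def call_llm(prompt: str) -> str:
          # Placeholder so the sketch runs; swap in a real model client here.
          return "(model response would go here)"

      def ask(question: str) -> str:
          # Prepend the guard text and hope the model actually honors it.
          return call_llm(f"{GUARDRAILS}\n\nQuestion: {question}")

      print(ask("What config do I need to enroll a new agent?"))
      ```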

      • Tessellecta@feddit.nl · 3 points · 2 months ago

        Adding it into the weights would be quite hard, as you would need many examples of text where someone admits they’re not sure about something. Humans don’t often publish work with much of that in it, so the training data contains few such examples.

      • Chais@sh.itjust.works · 2 points · 2 months ago

        Simple solution: don’t use the stupid things. They’re a waste of energy, water and time in the best case.

  • DarkCloud@lemmy.world · 5 points · 2 months ago

    You have to slice and fry it on low heat (so that the psychedelics survive)… Of course you should check that the gills don’t go all the way to the stem, and make sure the spore print (leave the cap on some black paper overnight) comes out white.

    Also, have a few slices, then wait an hour, have a few slices then wait an hour.

  • potoooooooo ✅️@lemmy.world · 4 points · 2 months ago

    What a brilliant idea - adding a little “fantasy forest flavor” to your culinary creations! 🍄

    Would you like me to “whip up” a few common varieties to avoid, or an organized list of mushroom recipes?

    Just let me know. I’m here to help you make the most of this magical mushroom moment! 😆

  • luciferofastora@feddit.org · 3 points · 2 months ago

    The difficulty with general models designed to sound human and trained on a wide range of human input is that it’ll also imitate human error. In non-critical contexts, that’s fine and perhaps even desirable. But when it comes to finding facts, you’d be better served with expert models trained on curated, subject-specific input for a given topic.

    I can see an argument for general models to act as “first level” in classifying the topic before passing the request on to specialised models, but that is more expensive and more energy-consuming because you have to spend more time and effort preparing and training multiple different expert models, on top of also training the initial classifier.
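
    A minimal sketch of that two-stage idea; classify_topic and the expert functions here are hypothetical stand-ins, not real models.

    ```python
    # Sketch: a general "first level" classifier routing questions to expert models.
    def classify_topic(question: str) -> str:
        # In reality this would be the cheap general model; here, a dumb keyword check.
        return "mycology" if "mushroom" in question.lower() else "general"

    def mycology_expert(question: str) -> str:
        return "Consult a trained mycologist before eating anything you foraged."

    def general_expert(question: str) -> str:
        return "(answer from a general-purpose model)"

    EXPERTS = {"mycology": mycology_expert, "general": general_expert}

    def answer(question: str) -> str:
        # The first level only picks the topic; the matching expert model answers.
        return EXPERTS[classify_topic(question)](question)

    print(answer("Is this mushroom safe to eat?"))
    ```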

    Even then, that’s still no guarantee that the expert models will be able to infer context and nuance the same way that an actual human expert would. They might be more useful than a set of static articles in terms of tailoring the response to specific questions, but the layperson will have no way of telling whether its output is accurate.

    All in all, I think we’d be better served investing in teaching critical reading than spending piles of money on planet-boilers with hard-to-measure utility.

    (A shame, really, since I like the concept. Alas, reality gets in the way of a good time.)

  • thespcicifcocean@lemmy.world · 3 points · 2 months ago

    Apparently, that’s a fly agaric, which some sources on the internet say can be used to get you high. I still wouldn’t do it unless an actual mycologist told me it was okay.

    • sploosh@lemmy.world · 3 points · 2 months ago

      You can, but people rarely do more than once, which should be an indication of how much fun it is.

    • untorquer@lemmy.world · 3 points · 2 months ago

      Not a mycologist but…

      Fry it in a bit of butter. The taste is really good; I guess the muscimol is also a flavor enhancer. Cooking flashes off the other toxins. If eaten raw, it will be a night on the toilet.

      It can make you nauseous even when cooked; that depends on your biology in general or on a given day. The high is similar to alcohol, but it’s also a sleep aid, similar to Ambien.

      Red cap with white specks, otherwise white. Veil with annulus, gilled, white spore print. Fruits in late summer through fall.

    • MrsDoyle@sh.itjust.works · 2 points · 2 months ago

      Back in the 1970s some friends and I ate some fly agaric we found in the botanic gardens, because we’d heard it got you high. The other three went on to have a fantastic time, high as kites, in a nightclub. Me? I spent hours on my knees in front of the toilet, vomiting and vomiting and vomiting. Do not recommend.

    • JadenSmith@sh.itjust.works · 2 points · 2 months ago

      I’ve done it a few times. Colours pop, some mood change, but overall it’s weak and not worth it. I didn’t get negative effects; it’s just a crap mushroom experience if you can get hold of psilocybin mushrooms instead.

      • Dasus@lemmy.world · 2 points · 2 months ago

        Seconded.

        Should prolly try the shaman’s-piss version of amanitas. You know, where a proper geezer who’s been eating these for decades dries them properly, eats a whole bunch, then pisses in a dish and you drink the piss.

        That would probably get closer to the roots of what amanitas are about. I had a similarly very mild, but in no way negative, experience to yours.

        I lay on a sofa and it felt slightly as if I were on a magic carpet drifting through space. But that took imagination; I wasn’t actually experiencing it, that’s just how I’d describe the sort of mild feeling it was.

  • TrackinDaKraken@lemmy.world · 3 points · 2 months ago

    Which is why it should only be used for art.

    I don’t believe the billionaire dream of robot slaves is going to work nearly as soon as they’re promising each other. They want us to buy into the dream without telling us that we’d never be able to afford a personal robot; they aren’t for us. They don’t want us to have them. The poors are slaves; they don’t get slaves. It’s all lies; we’re not part of the future we’re building for them.

    • jedibob5@lemmy.world · 6 points · 2 months ago

      “should only be used for art”

      No, churning out uncanny valley slop built on mass IP theft ain’t it, either. Personally I think AI is best used for simulations and statistical models of engineering problems, where it can iteratively find optimized solutions faster and sometimes more accurately than humans. The focus on “generative AI” and LLMs trying to get computers to act like humans is incredibly pointless, IMO. Let computers do what computers are good at, and humans do what humans are good at.

      • groet@feddit.org · 1 point · 2 months ago

        LLMs are an incredible interface to other systems. Why learn a new system language to get information when you can ask in natural language, have the AI translate that into the system language, do the lookup for you, and then translate the result back into natural language? The important part is that the AI never answers your question itself; it just translates between human language and system language.

        Use it as a language machine and never as a knowledge machine!
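
        A minimal sketch of that split, with translate_to_sql standing in for the model; the actual answer comes from the database, not from the LLM.

        ```python
        import sqlite3

        def translate_to_sql(question: str) -> str:
            # Placeholder for the LLM acting purely as a language machine.
            # A real model would generate this; the lookup stays in the database.
            return "SELECT name FROM mushrooms WHERE edible = 0"

        # Tiny in-memory database standing in for "the system".
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE mushrooms (name TEXT, edible INTEGER)")
        db.executemany("INSERT INTO mushrooms VALUES (?, ?)",
                       [("fly agaric", 0), ("chanterelle", 1)])

        query = translate_to_sql("Which mushrooms should I not eat?")
        rows = db.execute(query).fetchall()

        # The model only translated the question; the facts come from the data.
        print("Not edible:", [name for (name,) in rows])
        ```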

        • jedibob5@lemmy.world · 1 point · 2 months ago

          AI’s tendency to hallucinate means that for it to be actually reliable, a human needs to double-check all of its output. If it is being used to acquire and convey information of any kind to the prompter, you might as well just skip the AI and find the information manually, as you’d have to do that anyway to validate what it told you.

          And AI hallucinations are a side effect of the fundamental way in which generative AI works - they will never be 100% accounted for. When an AI generates text, it is simply predicting what word is likely to come next based on its prompt in relation to its training data. While this predictive ability has become remarkably sophisticated within the last few years (more than I thought it ever would, tbh), it is still only a predictive text generator. It’s not “translating,” “understanding,” or “comprehending” anything about whatever subject it has been asked about - it is merely predicting the likelihood of the next word in its response based on its training data.
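
          A toy illustration of what “predicting the next word” means (a tiny bigram counter, nowhere near the scale or sophistication of a real model):

          ```python
          import random
          from collections import Counter, defaultdict

          # Toy training data; real models use vastly more text and context.
          corpus = "the mushroom is red the mushroom is deadly the cap is red".split()

          # Count which word follows which (the crudest possible "language model").
          following = defaultdict(Counter)
          for word, nxt in zip(corpus, corpus[1:]):
              following[word][nxt] += 1

          def predict_next(word: str) -> str:
              words, weights = zip(*following[word].items())
              # Sample the next word in proportion to how often it followed before.
              return random.choices(words, weights=weights)[0]

          print(predict_next("mushroom"))  # "is" -- a likely continuation, not a checked fact
          print(predict_next("is"))        # "red" or "deadly", picked by frequency alone
          ```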

  • Credibly_Human@lemmy.world · 3 points · 2 months ago

    People using AI tools for things they’re not good at, and then calling the tool bad in general as opposed to bad for that task, do a disservice to the real issues currently surrounding the topic, such as environmental impact, bias, feedback loops, the collapse of internet monetization, and more.