• Lushed_Lungfish@lemmy.ca · 19 points · 17 hours ago

    Um, human history has repeatedly demonstrated that when a new technology emerges, the two highest priorities are:

    1. How can we kill things with this?
    2. How can we bone with this?
  • NutWrench@lemmy.world · 11 points · 16 hours ago

    If you’ve ever wondered why porn sites use pictures of cars, buses, stop signs, traffic lights, bicycles and sidewalks in their captchas, it’s because they’re using the data to train car-driving AIs to recognize those patterns.

    This is not what an imminent breakthrough in cancer research looks like.

      • NotANumber@lemmy.dbzer0.com · 7 points · edited · 15 hours ago

        Google reCAPTCHA? They literally talk about this publicly. It’s in their mission statement or whatever. It’s used to train other kinds of models too.

        • nialv7@lemmy.world · 5 points · 5 hours ago

          They were. They haven’t been using reCAPTCHAs to collect training data for years now.

        • RememberTheApollo_@lemmy.world · 2 points · 15 hours ago

          Y’know, it’s bullshit that a) you seem to expect this to be common knowledge, as if everyone is supposed to have an archive of internet minutiae saved in their heads or have read and remembered any such info at all…

          And b) you chose to downvote and pretty much just said LMGTFY without even the sarcastically provided results instead of backing up your claim. It’s basic courtesy to provide a source for claims instead of downvoting like it’s some kind of affront to your ego that someone wants info on your claim.

          • NotANumber@lemmy.dbzer0.com · 3 points · edited · 14 hours ago

            It’s not even my claim you’re talking about, jackass. Read the usernames. If you have fallen into the rabbit hole that is Lemmy, you should have been around long enough to know about reCAPTCHA. If not, it’s one DuckDuckGo search away. In fact, you could just click the link on the reCAPTCHA itself that explains how they use the data for training. Hardly arcane knowledge.

            Your comment read to me like sealioning.

            • RememberTheApollo_@lemmy.world · 1 point · edited · 14 hours ago

              Ah, that makes it so much better. My bad for you jumping into an argument randomly? You’re not improving my view of the shitty attitude here when you double down on “you should have known.”

  • humanspiral@lemmy.ca · 6 points · 16 hours ago

    FYI, using OpenAI/ChatGPT is expensive. Programming it to program users into dependency on its “friendship” gets them to pay for more tokens, and then why not blackmail them or coerce/honeypot them into espionage for the empire. If you don’t yet understand that OpenAI is an arm of the Trump/US military, among its pie-in-the-sky promises is $35B for datacenters in Argentina.

  • BilSabab@lemmy.world · 3 points · 17 hours ago

    Startups hyping shit up to get investors drooling is one of the most despicable things a man can observe.

    • Soup@lemmy.world · 1 point · 2 hours ago

      The thing that makes it actually bad is that they’re taking advantage of the mentally handicapped (investors). That, and that said investors have millions to toss at nonsense while so many people are lucky to have pennies to toss at such luxuries as “food”.

      Honestly though I don’t think taking advantage of evil people, who swear they deserve their millions because they’re definitely super smart, is really anything I care that much about.

  • TankovayaDiviziya@lemmy.world · 9 points · 24 hours ago

    We are closer to making horny chatbots than a superintelligence figuring out a cure for cancer.

    Actually, if the latter happens first, would that super AI win a Nobel prize?

  • frustrated@lemmy.world · 13 points · 1 day ago

    No money in curing cancer with an LLM. Heaps of money in taking advantage of increasingly alienated and repressed people.

    • Echo Dot@feddit.uk · 7 points · 22 hours ago

      There’s loads of money in curing cancer. For one you can sell the cure for cancer to people with cancer.

    • Saledovil@sh.itjust.works · 10 points · 1 day ago

      You could sell the cure for a fortune. Imagine something that can reliably cure late stage cancers. You could charge a million for the treatment, easily.

      • frustrated@lemmy.world · 5 points · 1 day ago

        Yes, selling the actual cure would be profitable… but an LLM would only ever provide the text for synthesizing it, none of the extensive testing, licensing, manufacturing, etc. An existing pharmaceutical company would have to believe the LLM and then front the costs for development, testing, and manufacture, which constitute a large proportion of the cost of bringing a treatment to market. Burning compute time on that is a waste of resources, especially when fleecing horny losers is available right now. It is just business.

        • BeeegScaaawyCripple@lemmy.world · 3 points · 1 day ago

          and LLMs hallucinate a lot of shit they “know” nothing about. a big pharma company spending millions of dollars on an LLM hallucination would crack me the fuck up were it not such a serious disease.

          • frustrated@lemmy.world · 2 points · 23 hours ago

            Right, that is why I originally said there is no money in a cancer cure invented by LLM. It’s just not a serious possibility.

    • Valmond@lemmy.world · 1 point · 22 hours ago

      What a weird take. Researchers use AI already? Some researchers even research things that, gasp, are not monetizable right away!

      • frustrated@lemmy.world · 1 point · 7 hours ago

        I used to work in academic physics, and I currently work in data science. I am deeply familiar with both ends of the subject in question. LLMs are useful research tools because they speed up the reference finding and literature review process, not because they synthesize new information that does not need to be independently verified.

        In the context of medical research, they could absolutely use LLMs to facilitate a literature search. What LLMs cannot do is hand researchers a proposed cure that they could sell to people. You still need to do the leg work of synthesizing the molecules, standardizing the process, industrializing it, patenting it, multiple rounds of testing on increasingly complex animals and eventually people, and then going through the drug approval process with the FDA and others. LLMs speed up the CHEAPEST and EASIEST part of the research process. That is why LLMs will not be handing us the cure for cancer.

  • zxqwas@lemmy.world · 67 points · 2 days ago

    Either you (genuinely believe you) are 18 (24, 36, doesn’t matter) months away from curing cancer, or you’re not.

    What would we as outsiders observe if they told their investors that they were 18 months away two years ago and now the cash is running out in 3 months?

    Now I think the current iteration of AI is trying to get to the moon by building a better ladder, but what do I know.

    • agamemnonymous@sh.itjust.works · 1 point · 23 hours ago

      The thing about AI is that it is very likely to improve roughly exponentially¹. Yeah, it’s building ladders right now, but once it starts turning rungs into propellers, the rockets won’t be far behind.

      Not saying it’s there yet, or even 18/24/36 months out, just saying that the transition from “not there yet” to “top of the class” is going to whiz by when the time comes.

      ¹ Logistically, actually, but the upper limit is high enough that for practical purposes “exponential” is close enough for the near future.

      • dreugeworst@lemmy.ml · 7 points · 21 hours ago

        why is it very likely to do that? we have no evidence to believe this is true at all, and several decades of slow, plodding AI research suggest real improvement comes incrementally, like in other research areas.

        to me, your suggestion sounds like the result of the logical leaps made by Yudkowsky and the people on his forums

        • agamemnonymous@sh.itjust.works · 1 point · 7 hours ago

          Because AI can write programs? As it gets better at doing that, it can make AIs that are even better, etc. Positive feedback loops grow exponentially.

      • Echo Dot@feddit.uk · 2 points · 16 hours ago

        The problem with that is they can’t actually point to a metric where when the number goes beyond that point we’ll have ASI. I’ve seen graphs where they have a dotted line that says ape intelligence, and then a bit higher up it has a dotted line that says human intelligence. But there’s no meaningful way they can possibly have actually placed human intelligence on a graph of AI complexity, because brains are not AI so they shouldn’t even be on the graph.

        So even if things increase exponentially there’s no way they can possibly know how long until we get AGI.

      • SuperNerd@programming.dev · 4 points · 21 hours ago

        Then it doesn’t make sense to include LLMs in “AI.” We aren’t even close to turning rungs into propellers or rockets; LLMs will not get there.

  • FauxLiving@lemmy.world · 25 points · 2 days ago

    False dichotomy.

    People using AI to cure cancer are not the people implementing weird chatbots. Doing one has zero effect on the other.

    • Echo Dot@feddit.uk · 1 point · 16 hours ago

      That’s not what it’s saying though. It’s making the very reasonable point that if you (the leader of an AI company) think you’re about to have an AGI that can do sci-fi AGI things, then why the hell would you be developing chatbots, which are significantly less advanced technology, only for your chatbot to be immediately superseded by your AGI?

      If you’re about to get access to the world’s largest diamond mine you’re not going to spend the next few months messing around with get rich quick schemes.

    • glimse@lemmy.world · 90 points · 2 days ago

      Pretty sure this is a direct dig against Sam Altman specifically who is making huge claims despite no evidence that they’re making progress on AGI.

      The people actually using AI to cure cancer were probably doing it before OpenAI (remember when we called it Machine Learning?) and haven’t been going to the media and Musking the completion date

    • Alloi@lemmy.world · 15 points · 2 days ago

      sam altman and openai just announced they are allowing erotica for “verified users”

      its only a matter of time before they allow full blown pornographic content, the only thing is that you have to verify your ID. so, openai and the “gubment” will know all the depraved shit you will ask for, and i guarantee it will be used against you or others at some point.

      itll either become extremely addictive for the isolated who want the HER experience, or it will be used to undermine political dissidents and anti-fascist users.

      despite what people think, openai does in fact hand over data to the authorities (immediate government representatives) and that information is saved and flagged if they deem it necessary.

      basically if you say anything to chatgpt, you can assume at some point it will be shared with law enforcement/government/government-adjacent surveillance corporations like palantir.

      they used to say they would refuse to make this type of content, knowing full well the implications of what might happen if they did. now due to “public demand” they are folding.

      my advice, get a dumb phone, a digital camera, and a laptop to still have access to the internet and tools. reduce your physical ability to access the internet so readily. its saturated with AI, deep fakes, agents, and astroturfing bots that want you plugged in 100% of the time so they can further your addictions, manipulate you, and extract as much data from you as possible.

      • Knock_Knock_Lemmy_In@lemmy.world · 3 points · 1 day ago

        basically if you say anything to chatgpt, you can assume at some point it will be shared with law enforcement/government/government-adjacent surveillance corporations like palantir.

        That’s why I have all my private chats with deepseek.

      • Grimy@lemmy.world · 2 points · 2 days ago

        I’m an adult, there’s no reason I can’t have the bot talk dirty to me. That’s a lot of text for essentially saying you wish the censorship stayed.

        Surveillance state and data extraction are real issues that need to be tackled at the root (which isn’t AI).

        • Alloi@lemmy.world · 1 point · 15 hours ago

          i feel like you really need to take the time to research the implications of AI and surveillance more, specifically how the US and virtually every government and tech corporation on the planet is intending to use/already using them together.

          AI exists because of illegal/nonconsensual data extraction, its almost entirely built on it, and theres no way that will likely ever stop. you can attempt to regulate it, but the US government deregulated it on purpose for a bribe and the promise of more control.

          it wont happen while money and power still exist at the end of the rainbow.

          • Grimy@lemmy.world · 2 points · 15 hours ago

            This is precisely why I’m saying AI and our surveillance state are completely different issues.

            Yes the surveillance state is really bad and we need comprehensive laws that protect our personal data.

            That being said, what the copyright industry is desperately clinging to has nothing to do with that. Your second paragraph has nothing to do with your first. So I don’t agree with both; I only agree with the first, which is my main point: they are separate issues.

            • Alloi@lemmy.world · 1 point · edited · 3 hours ago

              my main point is that there is no possible way to separate the two. you can dream about it if you like, or work on it in a limited local version, but even that has its risks. if you ever want to use the big models, there will be no separation between AI and the state. you are being surveilled. and there is zero shot they will ever relinquish that power once they have it. even “altruistic” governments wouldnt do that; for them its a matter of national security at this point.

              i am serious when i say this, you shouldnt listen to me and you should fight tooth and nail to stop this if you think its possible, i am doing my part by warning as many people as i can, and spreading the word whenever possible. both AI and surveillance states, even when completely separated, are going to be a massive detriment to the average persons freedom in the long run. added together, we are pretty much screwed already. its already being implemented.

              cats out the bag, pandoras box, all that.

    • Valmond@lemmy.world · 19 points · 2 days ago

      Yes, AI already helps in oncology research and has for years and years, probably decades.

    • Digitalprimate@lemmy.world · 8 points · 2 days ago

      You’re getting downvoted because of how you put it. Most people do not understand the difference between AI used for research (like protein sequencing) and LLMs.

      Also, the people making LLMs are not making protein sequencers.

      • ryedaft@sh.itjust.works · 5 points · 2 days ago

        No, OP is about how OpenAI said they were releasing a chatbot with PhD level intelligence about half a year ago (or was it a year ago?) and now they are saying that they’ll make horny chats for verified adults (i.e. paying customers only).

        What happened to the PhD level intelligence Sam?! Where is it?

      • FauxLiving@lemmy.world · 4 points · 2 days ago

        I agree, for most people ‘AI’ is ChatGPT and their perception of the success of AI is based on social media vibes and karma farming hot takes, not a critical academic examination of the state of the field.

        I’m not remotely worried about the opinions of random Internet people, many of which are literally children just dogpiling on a negative comment.

        Reasonable people understand my point and I don’t care enough about the opinions of idiots to couch my language for their benefit.

    • Jankatarch@lemmy.world · 2 points · edited · 2 days ago

      Ah I see the misunderstanding. Government pivoting is the problem.

      NIH blood cancer research was defunded a few months ago, while around the same time the government announced it will be building $500 billion worth of datacenters for LLMs.

      “If LLM becomes AGI we won’t need the image-recognition linear algebra knowledge anymore, obviously.”

      Researchers are still good and appreciated no matter how annoying the company deploying their work is.

  • CovfefeKills@lemmy.world · 1 point · 23 hours ago

    Well that is just basicness implied as if it was intelligence. If you cannot work with anyone, do not fucking cry when you are the common problem. Quote me cus I will be quoting myself.

  • AeonFelis@lemmy.world · 4 points · 1 day ago

    You can’t solve cancer because cancer is not a problem. It’s a solution. Humans are the problem.