• perishthethought@piefed.social · 143 points · 5 days ago

    mainstream

    I’ll believe that when my sisters start saying this. Till then, it’s just us privacy fans screaming in a dark cave, enjoying the echo.

    • Xorg_Broke_Again@sh.itjust.works · 79 points · 5 days ago

      It’s always like this. We get a ton of articles on how everyone is suddenly boycotting/deleting [insert thing] but when you ask someone in real life, they usually have no idea what you’re talking about.

      • The Quuuuuill@slrpnk.net · 22 points · 5 days ago

        so explain it to them gently. you won’t reach everyone, but you’ll reach more people than by accepting the status quo

      • EldritchFemininity@lemmy.blahaj.zone · 9 points · 4 days ago

        The one thing I will say is that there does seem to be a generalized dislike for AI that has all the investors and upper-management types nervous. Even their own studies show that people generally either don’t care about AI in their products or actively dislike it and find it intrusive. There was a study by a phone company this past summer or fall that concluded that 80% of their users had no interest in AI or found that it actively made their experience worse, and there have been plenty of pretty damning reports about how useful it’s actually been in various industries (just look at Microslop). None of that is conducive to convincing investors to fund your product, and it doesn’t show a viable path to making a profit in the future.

        We’ve seen similar things happening recently with car manufacturers walking back their big touchscreens (with some help from regulation in civilized places that care about things like “pedestrian fatalities” - like Europe) due to consumer sentiment. They spent nearly a decade pushing bigger and bigger screens into cars and removing physical buttons, and now they’re moving in the other direction. This is completely anecdotal, but the last time I went to buy a car I told the salesman at the dealership that I wasn’t interested in anything newer than a certain year, because that was when they increased the size of the screen and put it in a more obnoxious spot on the dashboard. He said he heard similar sentiments from practically everybody who came in looking to buy a car - everybody hated the bigger screens.

    • criscodisco@lemmy.world · 6 points · 4 days ago

      I had a coworker tell me how cool Copilot was because he asked it a question and it found the answer in an email in his outlook mailbox. I thought, “you needed AI to search your email?”

      We are probably cooked.

    • scarabic@lemmy.world · 1 point · 3 days ago

      I know what you mean. It’s a pretty vague term though. You could argue that as soon as it enters the midsection of the bell curve at all, it’s “in the mainstream.” It doesn’t have to have captured a full 90% of the bell curve.

  • scarabic@lemmy.world · 10 points · 3 days ago

    Yeah instead of arguing over whether Anthropic is actually good, let’s unite around “fuck OpenAI.”

  • raskal@sh.itjust.works · 77 points · 5 days ago

    Canada recently had its second-worst school shooting ever. The killer had many interactions with ChatGPT that warranted banning her account. A whistleblower has claimed that staff wanted to inform Canada’s police of those conversations but were denied by OpenAI’s management.

    They had a chance to stop the deaths of 8 people, most of whom were young children, and failed to do anything.

    FUCK CHATGPT AND THOSE BASTARDS THAT RUN IT

    • Chaotic Entropy@feddit.uk · 31 points · 5 days ago

      Sam Altman is just some fail-upward money guy; he’s eventually been removed from basically every prior position he’s held.

      • PolarKraken@lemmy.dbzer0.com · 19 points · 5 days ago

        Seems like his career has largely been lying and making impossible promises, so. The folks who do that well always manage to exit the stage before the magic tincture is revealed to just be piss 🤷‍♂️

      • scarabic@lemmy.world · 2 points · 3 days ago

        The more I learn about this guy, the more amazed I am that his staffers stood up for him when he got fired. I guess they just hated the board more.

  • /home/pineapplelover@lemmy.dbzer0.com · 19 points · 4 days ago

    Dude, the only guardrails are

    1. No fully automated killings

    2. No mass surveillance

    You could literally do anything else - you could automate killing people as long as a person approves each one.

    Trump booted Anthropic because they wouldn’t lift these two guardrails. Fuck me

  • Paddy66@lemmy.ml · 6 points · 3 days ago

    I would assume that Anthropic’s stance is mostly performative. But while people are in a boycotting mood, they could solve the surveillance problem by quitting ALL big tech products. Here’s our site that lists all the ethical, non-spyware alternatives:

    https://www.rebeltechalliance.org/stopusingbigtech.html

    (Please share with your friends and family - we have zero marketing budget - thank you!)

  • JigglypuffSeenFromAbove@lemmy.world · 15 points · 4 days ago

    From OpenAI’s statement:

    We have three main red lines that guide our work with the DoW, which are generally shared by several other frontier labs:

    • No use of OpenAI technology for mass domestic surveillance.

    • No use of OpenAI technology to direct autonomous weapons systems.

    • No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as “social credit”).

    It specifically states their AI can’t/won’t be used for surveillance and autonomous weapons. Of course I’m not saying I trust them, but isn’t this the same thing Anthropic says they’re against? What’s the difference here or what did I miss?

    • muusemuuse@sh.itjust.works · 16 points · 4 days ago

      Anthropic put clauses in that were legally enforceable by future administrations. OpenAI says “yea we totally trust you bro”

    • flamingleg@lemmy.ml · 8 points · 3 days ago

      The ‘no domestic surveillance’ line is just language that mirrors some limitations (from their point of view) in the Patriot Act. They’re still willing to surveil people outside the USA, and in fact all they have to do is route domestic traffic through an international part of the network and they can legally spy on domestic Americans - which is what already happens.

    • WhatsHerBucket@lemmy.world · 6 points · 4 days ago

      Same! Was planning on doing this today.

      What do you plan to switch to? I’m currently thinking a combination of Claude and something else for images if it turns out I really need to pay for it.

    • XLE@piefed.social · 13 points · edited · 5 days ago

      Anthropic is scum, accepting money from foreign dictators, forcing their software on minorities while insisting it was conscious and had emotions just like them, praising the Trump administration, making up scary stories to get more funding…

      …In many ways, they’re worse than OpenAI. They’re just running with the same playbook that Sam Altman used to use to pretend he was a good guy.

  • Fmstrat@lemmy.world · 9 points · 4 days ago

    Since this article, Anthropic’s Claude AI app has claimed the #1 top spot over ChatGPT on both Android and iOS.

    • sveltecider@lemmy.ca · 4 points · 3 days ago

      I bought Claude premium. I’m not rich enough for $28 CAD a month though, so I’m only doing one month lol

  • pnelego@lemmy.world · 12 points · 4 days ago

    I’m wondering if this is a play for a future bailout. OpenAI knows they’re fucked, and instead of just going away like most companies do when they fail, they’re embedding themselves in the government to secure a bailout under the guise of being a critical defence vendor.

    Furthermore, I’m not convinced the researchers and critical personnel will keep working for a company that does this. I think we’re about to see the biggest ship-jumping the industry has seen so far.

    • grrgyle@slrpnk.net · 3 points · 4 days ago

      🫡 I’m going to try to pressure my employer to do the same. Like, is this thing saving anybody money??

  • humanspiral@lemmy.ca · 45 points · 5 days ago

    Use for “all lawful means” is quite the grey area, considering no one was arrested or fired, and no law was updated, over what Snowden leaked. If the NSA does it, no one will arrest the NSA.

    • frog_brawler@lemmy.world · 4 points · 5 days ago

      I laughed when I read “all lawful means.”

      Those are almost the exact words you’re supposed to use on an NFA Form 1 or Form 4 when registering certain types of firearms and firearm parts that require a tax stamp and additional scrutiny.

      When I did my SBR registration, it was “all lawful purposes…” but fuck, close enough…