• corroded@lemmy.world · 66 points · 1 day ago

    If you don’t want your conversations to be public, how about you don’t tick the checkbox that says “make this public.” This isn’t OpenAI’s problem; it’s an idiot-user problem.

    • zerozaku@lemmy.world · 16 points · 22 hours ago

      This is a case of a corporation taking advantage of a technically idiotic userbase, which is most of the general public. OpenAI is using a dark pattern: users can’t easily uncheck that box, and the text that says “this can be indexed by search engines” isn’t prominently visible.

    • FauxLiving@lemmy.world · 31 points · 1 day ago

      If you don’t want corporations to use your chats as data, don’t use corporate-hosted language models.

      Even non-public chats are archived by OpenAI, and the terms of service of ChatGPT essentially give OpenAI the right to use your conversations in any way that they choose.

      You can bet they’ll eventually find ways to monetize your data. If you think Google Ads is powerful, wait until people’s assistants are trained with every manipulative technique we’ve ever invented and are trying to sell you breakfast cereals or boner pills…

      You can’t uncheck that box except by not using the service in the first place. But people will sell their soul to a company rather than learn a little bit about self-hosting.

      • puck@lemmy.world · 4 points · 1 day ago

        Hi there, I’m thinking about getting into self-hosting. I already have a Jellyfin server set up at home, but nothing beyond that really. If you have a few minutes: how can self-hosting help in the context of OP’s post? Do you mean hosting LLMs with Ollama?

        • BreadstickNinja@lemmy.world · 6 points · 1 day ago

          Yes, Ollama or a range of other backends (Ooba, Kobold, etc.) can run LLMs locally. Huggingface has a huge number of models suited to different tasks like coding, storywriting, general purpose, and so on. If you run both the backend and frontend locally, then no one monetizes your data.
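
          To make that concrete, here’s a minimal sketch of querying a locally running Ollama server over its HTTP API. It assumes Ollama is running on its default port and that you’ve already pulled a model; the model name and prompt are just placeholders.

          # Minimal sketch: send one prompt to a local Ollama server.
          # Assumes Ollama is running on its default port (11434) and a
          # model has already been pulled, e.g. with `ollama pull llama3`.
          import requests

          response = requests.post(
              "http://localhost:11434/api/generate",
              json={
                  "model": "llama3",  # placeholder; use whichever model you pulled
                  "prompt": "Summarize the plot of Hamlet in two sentences.",
                  "stream": False,    # return one JSON object instead of a stream
              },
              timeout=120,
          )
          response.raise_for_status()
          # The generated text never leaves your machine.
          print(response.json()["response"])

          The frontends mentioned above essentially wrap calls like this in a chat UI.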

          The part I’d argue the previous poster is glossing over a little is performance. Unless you have an enterprise-grade GPU cluster sitting in your basement, you’re going to compromise on speed and/or quality relative to the giant models behind the commercial services.
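
          As a rough back-of-envelope, memory needs scale with parameter count times bytes per parameter; the ~20% overhead factor below is an assumption (KV cache, runtime buffers), and actual usage varies with context length and runtime.

          # Back-of-envelope VRAM estimate for a quantized local model.
          # The 1.2x overhead factor is an assumption, not an exact figure.
          def approx_vram_gb(params_billions: float, bits_per_param: int,
                             overhead: float = 1.2) -> float:
              bytes_total = params_billions * 1e9 * (bits_per_param / 8)
              return bytes_total * overhead / 1e9

          print(f"{approx_vram_gb(8, 4):.1f} GB")   # ~4.8 GB: fits a consumer GPU
          print(f"{approx_vram_gb(70, 4):.1f} GB")  # ~42 GB: multiple GPUs or heavy offloading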

          • puck@lemmy.world · 1 point · 1 day ago

            Thanks for the info. Yeah, I was wondering what kind of hardware you’d need to host LLMs locally with decent performance, and your post clarifies that. I doubt many people would have the kind of hardware required.

  • DeceasedPassenger@lemmy.world · 25 points · 1 day ago

    I assumed this was a given. Anything offered to the tech overlords will be monetized and packaged for profit from every possible angle. Nice to know it’s official now, I guess.

  • atticus88th@lemmy.world · 8 points · 1 day ago

    I’ll probably have a target on my back because I kept asking it how to replace CEOs and other executives who do literally nothing but collect a paycheck and break shit.