• ExLisperA · 7 hours ago

    Where are you going to deploy it? On your laptop?

    • iAvicenna@lemmy.world · edited · 3 hours ago

      On a $200M laptop? You can run Llama 4 on a 64 GB RAM machine, albeit slowly, and that is already an upper-scale model. TBF, I didn’t do the math to see how much that would add up to along with salaries, server costs, etc., mainly because this is the Pentagon we are talking about, so it should already have access to some pretty decent computational capacity. So yeah, $200M feels like too much when you already have most of the resources needed (compute, and open LLM models for specific tasks).
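
      The 64 GB figure is plausible as a back-of-the-envelope estimate: Llama 4 Scout has roughly 109B total parameters, so at 4-bit quantization the weights alone take about 55 GB. A minimal sketch of that arithmetic (weight-only; it ignores KV cache and activation overhead, and the 109B figure is Meta's published total parameter count for Scout):

```python
def model_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Rough weight-only memory footprint in GB (ignores KV cache and activations)."""
    return n_params * bits_per_weight / 8 / 1e9

# Llama 4 Scout: ~109B total parameters
print(round(model_memory_gb(109e9, 4), 1))   # 4-bit quantized -> 54.5 GB, fits in 64 GB RAM
print(round(model_memory_gb(109e9, 16), 1))  # fp16 -> 218.0 GB, does not fit
```

      So the "slowly, but it runs" claim checks out for a quantized build; an unquantized fp16 copy would not fit.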

      The really huge upside is that you don’t have to share confidential information with a company whose CEO is a lunatic and will likely have no qualms about sharing that data with other parties when money and power are involved. Hell, you shouldn’t share any confidential/sensitive information with any of the large tech companies, to be honest. They have become what they are not by sticking to ethical principles, and they are likely to grossly overcharge (which defeats the purpose of outsourcing and makes it more reasonable to invest in permanent infrastructure instead). They will surely use it as some sort of leverage, 100% guaranteed.

      • ExLisperA · 3 hours ago

        I agree that $200M is way too much to spend on LLMs, but talking about downloading open-source models completely misses the point. They are not paying for some sort of Grok license so that they can access this amazing model. They are paying for the computational capacity needed to run the model and provide access to thousands of people over some period of time. The alternative here is to simply buy everyone a subscription to OpenAI or something.

        • iAvicenna@lemmy.world · 2 hours ago

          With open source you have the advantage of being able to use different LLMs for different tasks, which can be more efficient. Surely the Pentagon has access to enough compute power to set this up for a thousand people? The rest is UI, IT, and fine-tuning by a couple of data scientists/programmers trained in LLMs. Surely it is better than Elon, who changes his mind on politics every five days and thinks that twenty-year-olds can run critical government infrastructure because they worship him. Not an expert, just don’t like big tech companies, particularly Melon.
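
          The "different LLMs for different tasks" idea is essentially a model router. A minimal sketch, assuming locally hosted open models (the task labels and model names here are purely illustrative, not real deployment names):

```python
# Hypothetical task -> model routing table; names are illustrative only.
ROUTES = {
    "summarize": "local-llama-small",   # cheap, fast model for simple tasks
    "code":      "local-codellama",     # code-tuned model
    "analysis":  "local-llama-large",   # bigger general-purpose model
}

def route(task: str) -> str:
    """Pick a local model for a task, falling back to the large general model."""
    return ROUTES.get(task, "local-llama-large")

print(route("code"))     # -> local-codellama
print(route("unknown"))  # -> local-llama-large (fallback)
```

          The point is that a small, cheap model can serve most requests, with the expensive model reserved for the tasks that actually need it.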

          • ExLisperA · 2 hours ago

            compute power to set this up for a thousand people […], UI, IT and fine tuning by a couple data scientists/programmers trained in LLMs.

            Yes, that’s what’s needed. It’s not just about downloading an open-source LLM. That was my point. I see we agree now.