• A Wild Mimic appears!@lemmy.dbzer0.com
    96 points · 2 months ago

    Holy shit, the amount of surveillance the teens are under is ungodly, and people blame the chatbot? And there wasn’t even a human kind enough to speak with the girl before calling the fucking cops? I see a lot of blame to place here, but it’s not the chatbot that’s to blame.

    • The kids for bullying her for her tan
    • The school boards implementing the surveillance
    • The parents who allowed such surveillance in the first place
    • The person screening what was flagged for not sending the school counselor to talk with the kid
    • The person calling the cops
    • The cops for arresting an 8th-grader and DOING A STRIP SEARCH AND KEEPING HER OVERNIGHT WTF instead of handing her over to her parents

    Every one of them failed a 13-year-old girl. All of them should be ashamed.

    • DeathByBigSad@sh.itjust.works
      20 points · 2 months ago

      The cops for arresting an 8th-grader

      This is America, that’s what they do. They love overreacting to small problems.

      I was arrested for self-defense in a high school fight; the actual bully who attacked me didn’t get in any sort of trouble. If I didn’t have citizenship, there was a chance that incident could’ve led to my deportation, even though I was a minor. (USCIS can see all your arrests, including ones that didn’t lead to a conviction, or even expunged or pardoned offenses, and they can retroactively revoke your legal status if they find out you lied.) But luckily the charges were dropped, because of course they didn’t have the evidence to prove it, and I had a clean record, so they didn’t bother prosecuting.

      There is probably an alternate timeline somewhere out there in the multiverse where I got deported and had to relearn a language I hadn’t spoken in over a decade. Depressing to think about.

      (Well, that’s still technically a possibility; all they have to do is make up some bullshit about “being a spy” and put me in Gitmo.)

      • CancerMancer@sh.itjust.works
        8 points · 2 months ago

        I’m in Canada, and it’s only marginally better with respect to police under/overreaction. A friend and I once got the “don’t go to school on X day” message and went immediately to the local, provincial, and federal police. No one took us seriously. We had a friend working at CSIS (the American analogue would be the CIA) look into it, and later that week we saw the article in a local paper.

        Police investigated the home and found:

        • 5000 rounds of ammunition
        • body armor
        • explosives
        • the only thing he couldn’t get was legal firearms, because of his history of mental illness, but he had been working on connections to acquire illegal ones

        Point being, we couldn’t get the police to lift a finger to check out what we believed was a credible threat (this guy never even joked about that stuff), but boy were they willing to burn rubber racing to my school when I committed the crime of defending myself in a “normal” school fight and one of my bullies claimed they felt threatened by me. That set off a whole chain of events, like requiring me to get a full evaluation at a psychiatric facility before being allowed back in school. Our system is broken.

      • Ensign_Crab@lemmy.world
        2 points · 2 months ago

        They love overreacting to small problems.

        It’s what they do instead of reacting to major problems in any way.

      • JennyLaFae@lemmy.blahaj.zone
        1 point · 2 months ago

        One of my theories is that alternate timelines diverge for each of us at moments we could have died. The timeline splits, one branch continuing on with us and one without us; sometimes, while “dying,” timelines merge back together, resulting in stories like Reddit’s r/GlitchInTheMatrix.

        So if it’s any consolation, your bully probably died in your deportation timeline.

        • bthest@lemmy.world
          1 point · 2 months ago · edited

          At any moment a tiny clump of clotted blood cells could suddenly lodge somewhere inconvenient and kill you, so this timeline shit would be happening every second, 24/7. Kind of renders these timeline thought experiments pointless.

          • JennyLaFae@lemmy.blahaj.zone
            1 point · 2 months ago

            That would just be a possibility until it actually happens, until the actual crisis point.

            For example, we’re not diverging with every step on a flight of stairs. However, have you ever experienced that moment of vertigo where you thought you missed a step and then felt your foot land solid on the next? That would be the moment.

    • Infernal_pizza@lemmy.dbzer0.com
      4 points · 2 months ago

      The kids for bullying her for her tan

      To me it didn’t sound like she was being bullied; it seemed like her friends made a stupid joke and she responded with another stupid joke. Which makes it even stupider that she got arrested. Literally just kids being kids.

  • TrackinDaKraken@lemmy.world
    54 points · 2 months ago

    Arrested and strip-searched for a first offense? That’s fucking ridiculous. I hope the lawsuit succeeds. It’s the only peaceful tool we have to curb over-zealous law enforcement.

    Before the morning was even over, the Tennessee eighth grader was under arrest. She was interrogated, strip-searched and spent the night in a jail cell, her mother says.

    Earlier in the day, her friends had teased the teen about her tanned complexion and called her “Mexican,” even though she’s not. When a friend asked what she was planning for Thursday, she wrote: “on Thursday we kill all the Mexico’s.”

    • Zak@lemmy.world
      32 points · 2 months ago

      This is an ass-covering response to school shootings, because some shooters have expressed their intent beforehand.

      A strip search obviously isn’t necessary even for a credible threat; a metal-detector wand and a basic pat-down are more than enough to ensure someone doesn’t have a gun. This wasn’t a credible threat though, and a chat with the school counselor would have been the right way to handle it.

      • Passerby6497@lemmy.world
        6 points · 2 months ago

        Yeah, that was my first thought too. I can see the need to take anything that resembles an actionable threat seriously, but that poor kid did not deserve to be abused by law enforcement like that.

  • 2910000@lemmy.world
    52 points · 2 months ago

    Students who think they are chatting privately among friends often do not realize they are under constant surveillance

    This is the problem

  • kalkulat@lemmy.world
    38 points · 2 months ago · edited

    Lots of wannabe authoritarians out there in educationland.

    All those decades that the schools just -couldn’t afford- more (well-educated) teachers and smaller class sizes. Lots of low-end look-good.

    And then along came tech, and lo-and-behold, IT was going to be the savior. Let’s buy into that! We may not be able to teach them to read, write or think, but they can learn to kneel!

    • bthest@lemmy.world
      4 points · 2 months ago · edited

      Sorry, I could believe this if it weren’t for your right-wing chud reveal at the end. Also, chuds tend to have pedo tendencies, so I think you’re misrepresenting what you were banned for, as much as I hate to give the benefit of the doubt to Reddit’s moderation team.

  • Grandwolf319@sh.itjust.works
    24 points · 2 months ago

    What is sad is that an environment like this ruins someone’s mental health, ironically increasing the overall risk of violence.

    • belit_deg@lemmy.world
      6 points · 2 months ago

      I don’t remember whose quote it was, maybe Hannah Arendt’s: the real tragedy of tyranny is not when people self-censor what they say out loud, but when that leads them to stop those thoughts from arising at all.

  • 𞋴𝛂𝛋𝛆@lemmy.world
    23 points · 2 months ago · edited

    It is not the tool; it is the lazy, stupid person who created the implementation. The same stupidity is true of people who run word filtering in conventional code. AI is just an extra set of eyes. It is not absolute. Giving it any kind of unchecked authority is insane. The administrators who implemented this should be the ones everyone is upset at.

    The insane rhetoric around AI is a political and commercial campaign effort by Altman and proprietary AI looking to become a monopoly. It is a Kremlin-scale misinformation campaign that has been extremely successful at roping in the dopes. Don’t be a dope.

    This situation with AI tools is exactly 100% the same as with every past scapegoated tool. I can create undetectable deepfakes in GIMP or Photoshop. If I do so with the intent to harm, or out of grossly irresponsible stupidity, that is my fault and not the tool’s. Accessibility of the tool is irrelevant. Those dumb enough to blame the tool are the convenient idiot pawns of the worst humans alive right now. Blame the people in leadership positions who use the tools with no morals or ethics, and don’t listen to those same people’s spurious dichotomy designed to create a monopoly. They prey on conservative ignorance rooted in tribalism and dogma, which naturally rejects everything unfamiliar and new in life. That is evolutionary behavior and a required mechanism for survival in the natural world. Some will always scatter around the spectrum of possibilities, but the center majority is stupid and easily influenced in ways that enable tyrannical hegemony.

    AI is not some panacea. It is a new, useful tool. Absent-minded stupidity is leading to the same kind of dystopian indifference that led to the “free internet,” which destroyed democracy and is the direct cause of most political and social issues in the present world, because it normalized digital slavery: ownership over a part of your person for sale, exploitation, and manipulation without your knowledge or consent.

    I only say this because I care about you, digital neighbor. I know it is useless to argue against dogma, but this is the fulcrum of a dark dystopian future that populist dogma is welcoming with open arms of ignorance, just like those who said the digital world was a meaningless novelty 30 years ago.

    • SugarCatDestroyer@lemmy.world
      4 points · 2 months ago

      In such a world, hoping for a different outcome would be just a dream. You know, people always look for the easy way out, and in the end, yes, we will live under digital surveillance, like animals in a zoo. The question is how to endure this and not break down, especially in the event of collapse and poverty. It’s better to expect the worst and be prepared than to look for a way out, try to rebel, and then get trapped.

        • SugarCatDestroyer@lemmy.world
          2 points · 2 months ago

          Well, it may happen that these clowns will use poverty to make people crave stability and voluntarily put on a collar, and if that doesn’t work, they will use force. I mean future concentration camps. Well, that and digital currencies and all this dystopian crap.

    • verdigris@lemmy.ml
      3 points · 2 months ago · edited

      You seem to be handwaving away all concerns about the actual tech, but I think the fact that “training” is literally just plagiarism, plus the absolutely bonkers energy costs of doing it, squarely positions LLMs as doing more harm than good in most cases.

      The innocent tech here is the concept of the neural net itself, but unless they’re being trained on a constrained corpus of data and then used to analyze that or analogous data in a responsible and limited fashion, I think it’s somewhere on a spectrum between “irresponsible” and “actually evil.”

      • SugarCatDestroyer@lemmy.world
        4 points · 2 months ago

        If the world is ruled by psychopaths who seek absolute power for the sake of even more power, then the very existence of such technologies will lead to very sad consequences, most likely even to slavery. Have you heard of technofeudalism?

        • verdigris@lemmy.ml
          3 points · 2 months ago · edited

          Okay, sure, but in many cases the tech in question is actually useful for lots of things besides repression. I don’t think that’s the case with LLMs. They have a tiny bit of actual usefulness that’s completely overshadowed by the insane skyscrapers of hype and lies that have been built up around their “capabilities.”

          With “AI” I don’t see any reason to go through such gymnastics separating bad actors from neutral tech. The value in the tech is non-existent for anyone who isn’t either a researcher dealing with impractically large and unwieldy datasets or, of course, a grifter looking to profit off bigger idiots than themselves. It has never been and will never be a useful tool for the average person, so why defend it?

          • SugarCatDestroyer@lemmy.world
            2 points · 2 months ago

            There’s nothing to defend. Tell me, would you defend someone who is a threat to you and deprives you of the ability to create, making art unnecessary? No, you would go and kill him before the bastard grows up. Well, what’s the point of defending a bullet that will kill you? Are you crazy?

          • A Wild Mimic appears!@lemmy.dbzer0.com
            1 point · 2 months ago

            I am an average person, and my GPU is running a chatbot that is currently giving me a course in regular expressions. My GPU also generates images for me from time to time when I need one, because I am crappy at drawing. There are a lot of uses for the technology.

            • verdigris@lemmy.ml
              1 point · 2 months ago

              Okay, so you could have just looked up one of dozens of resources on regex. The images you “need” are likely bad copies of images that already exist, or weird collages of copied subject matter.

              My point isn’t that there’s nothing they can do at all; it’s that nothing they can do is worth the energy cost. You’re spending tons of energy to effectively chew up information already on the web and have it vomited back to you in a slightly different form, when you could have just looked up the information directly. It doesn’t save time, because you have to double-check everything. The images are also plagiarized, and you could be paying an artist if they’re something important, or improving your own artistic abilities if they aren’t. I struggle to think of many cases where one of those options is unfeasible; it’s just the “easy” way out (because the energy costs are obfuscated) to have a machine crunch up some existing art to get an approximation of what you want.

      • A Wild Mimic appears!@lemmy.dbzer0.com
        1 point · 2 months ago

        Scraping the web to create a dataset isn’t plagiarism; neither is training a model on said scraped data, and calculating which words should come in what order isn’t plagiarism either. I agree that datasets should be ethically sourced, but scraping the web is what allowed things like the search engine to be created, which made the web a lot more useful. Was creating Google irresponsible?

        • verdigris@lemmy.ml
          1 point · 2 months ago · edited

          This is a wild take. You can get chatbots to vomit out entire paragraphs of published works verbatim. There is functionally no mechanism to a chatbot other than looking at a bunch of existing texts, picking one randomly, and copying the next word from it. There’s no internal processing or logic that you could call creative; it’s just sticking one Lego at a time onto a tower, and every Lego is someone’s unpaid intellectual property.

          There is no definition of plagiarism or copyright that LLMs don’t bite extremely hard. They’re just getting away with it because of the billions of dollars of capital pushing the tech. I am hypothetically very much for the complete abolition of copyright and free usage of information, but a) that means everyone can copy stuff freely, instead of just AI companies, and b) it first requires an actually functional society that provides for the needs of its citizens so they can have the time to do stuff like create art without needing to make a livable profit at it. And even if that were the case, I would still think the current implementation of AI is pretty shitty if it’s burning the same ludicrous amounts of energy to do its parlor tricks.

          • A Wild Mimic appears!@lemmy.dbzer0.com
            1 point · 2 months ago

            The energy costs are overblown. A response costs about 3 Wh, which is about 1 minute of runtime for a 200 W PC, or 10 seconds of a 1000 W microwave. See the calculations made here and below for the energy costs. If you want to save energy, go vegan and ditch your car; completely disbanding ChatGPT amounts to 0.0017% of the CO2 reduction during Covid in 2020 (this guy gave the numbers but had an error in magnitude, which I fixed in my reply; the calculator output is attached). It would help climate activists if they concentrated on something that is worthwhile to criticize.
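
            A quick back-of-the-envelope check of those figures (just a sketch: the 3 Wh per response is the estimate quoted above, not an authoritative measurement, and the wattages are the ones named in the comment):

            ```python
            # Sanity check: how long other devices run on the ~3 Wh a single
            # chatbot response is estimated to cost (assumed figure, see above).
            WH_PER_RESPONSE = 3.0  # watt-hours per response (assumed)

            def runtime_seconds(device_watts: float, wh: float = WH_PER_RESPONSE) -> float:
                """Seconds a device drawing `device_watts` watts runs on `wh` watt-hours."""
                return wh / device_watts * 3600.0

            print(runtime_seconds(200))   # 200 W PC         -> 54.0 s (~1 minute)
            print(runtime_seconds(1000))  # 1000 W microwave -> 10.8 s (~10 seconds)
            ```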

            If I read a book and use phrases from that book in my communication, it is covered under fair use; the same should be applicable to scraping the web, or else we can close the Internet Archive next. Since LLM output isn’t copyrightable, I see no issue with that. And copyright law in the US is an abomination that is only useful for big companies to use as a weapon; small artists don’t really profit from it.

    • petrol_sniff_king@lemmy.blahaj.zone
      1 point · 2 months ago · edited

      I can create undetectable deepfakes in gimp or Photoshop.

      That is crazy, dude. You gotta teach me. There are soo many impoverished countries I wanna fuck over with this skill.

  • Aggravationstation@feddit.uk
    22 points · 2 months ago · edited

    I can’t say for certain because I wasn’t given one, but I can’t imagine my friends and I would have been willing to communicate with each other on devices provided by our school. Even in the early ’00s they would have been filled with spyware.

  • RememberTheApollo_@lemmy.world
    21 points · 2 months ago

    Talking about online privacy has become the “safe sex talk” of the last decade or so. You have to keep reminding kids so that it sticks. Nothing you say online is private; it can all be copied, screengrabbed, recorded, photographed, and shared by the recipient: what you say, any images you post, etc. On school or work devices they can see essentially everything; nothing is private. Even if you make efforts to cover your tracks, a truly determined agency with enough resources will likely find out who you are if they want to.

    • ipkpjersi@lemmy.ml
      3 points · 2 months ago

      Nothing you say online is private, it can all be copied/screengrabbed/recorded/photographed and shared by the recipient.

      Even if you fully trust the recipient, oftentimes it can still be intercepted unless it’s end-to-end encrypted, and even then the end device can still be stolen.

  • PokerChips@programming.dev
    16 points · 2 months ago

    This is also why (I think) younger people don’t like going outside. Cameras are everywhere. There’s no privacy. We’ve become a world of creeps. Not most of us, really, but if I were 10 years old I’d see everyone as creeps.

    Now corporations are forcibly creeping into the classrooms. Yuck!

  • Zak@lemmy.world
    16 points · 2 months ago

    Snapchat’s automated detection software picked up the comment, the company alerted the FBI, and the girl was arrested on school grounds within hours.

    Someone should tell the kids about Signal.


    As for monitoring on school computers, that seems OK to me if it’s disclosed to the students and parents in advance. What’s problematic is the responses, which seem much more focused on ass-covering than on student welfare. I imagine most 13-year-olds have made jokes about killing people once or twice, and any adult with common sense would be able to tell they’re jokes.

  • wuffah@lemmy.world
    15 points · 2 months ago

    What a great way to prepare students for our AI-enabled social media and digital surveillance society. Take note, kids: trust no one!