- cross-posted to:
- news@lemmy.world
-“If you’re an AI Cop, you have to tell me. It’s the law.”
-“I’m not a cop.”
Seems reasonable to me. If you’re using AI then you should be required to own up to it. If you’re too embarrassed to own up to it, then maybe you shouldn’t be using it.
I’m stoked to see the legal definition of “AI”. I’m sure the lawyers and costumed clowns will really clear it all up.
What about my if-else AI algorithm?
It’s not really an LLM.
IMO if your “A*”-style algorithm is used for a chatbot or any kind of user interaction or content generation, it should still be explicitly declared.
That being said, there is some nuance here about A) use of copyrighted material and B) non-deterministic behaviour. Neither of those is (usually) a concern in more classical, non-DL approaches to AI solutions.
It’s insane how a predictive chatbot model is called AI.
I mean, we call the software that runs computer players in games AI, so… ¯\_(ツ)_/¯
The AI chatbot brainrot is way worse tbh. Someone legit said to me, “why doesn’t ChatGPT cure cancer?” Like wtf.
Do we? Aren’t they just bots? Like I’m not looking at an NPC and calling it AI.
USA is run by capitalist grifters. There is no objective meaning under this regime. It’s all just misleading buzzwords and propaganda.
Marketing
It would be nice if this extended to all text, images, audio and video on news websites. That’s where the real damage is happening.
Actually seems easier (probably not at the state level) to mandate cameras and such digitally sign any media they create. No signature or verification, no trust.
I get what you’re going for but this would absolutely wreck privacy. And depending on how those signatures are created, someone could create a virtual camera that would sign images and then we would be back to square one.
I don’t have a better idea though.
The point is to give photographers a “receipt” for their photos. If you don’t want the receipt it would be easy to scrub from photo metadata.
Privacy concern for sure, but given that you can already tie different photos back to the same phone from lens artifacts, I don’t think this is going to make things much worse than they already are.
someone could create a virtual camera that would sign images
Anyone who produces cameras can publish a list of valid keys associated with their camera. If you trust the manufacturer, then you also trust their keys. If there’s no trusted source for the keys, then you don’t trust the signature.
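For what it’s worth, that chain-of-trust flow is small enough to sketch. Everything below (the key ID, the key bytes) is made up for illustration, and a real design would use asymmetric signatures (e.g. Ed25519) so the manufacturer can publish a verification key without it being able to forge anything; stdlib HMAC just stands in for the sign/verify steps:

```python
import hashlib
import hmac

# Hypothetical trust store: camera makers publish the keys their
# devices sign with. If you don't trust the manufacturer, you don't
# put their key here, and verification fails.
TRUSTED_KEYS = {
    "examplecam-model-x": b"manufacturer-published-key",
}

def sign(image_bytes: bytes, key: bytes) -> str:
    """What the camera would do at capture time."""
    return hmac.new(key, image_bytes, hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, key_id: str, signature: str) -> bool:
    """What a browser or news org would do: no trusted key, no trust."""
    key = TRUSTED_KEYS.get(key_id)
    if key is None:
        return False  # no trusted source for the key -> reject
    return hmac.compare_digest(sign(image_bytes, key), signature)

photo = b"raw sensor data"
sig = sign(photo, TRUSTED_KEYS["examplecam-model-x"])
print(verify(photo, "examplecam-model-x", sig))             # True
print(verify(photo + b"edited", "examplecam-model-x", sig))  # False: tampered
print(verify(photo, "unknown-cam", sig))                     # False: untrusted key
```

The “virtual camera” attack mentioned above maps onto this directly: the scheme only proves *which key* signed the bytes, not that a real sensor produced them, so everything hangs on whether manufacturers can keep their signing keys out of attackers’ hands.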
No signature or verification, no trust
And the people that are going to check for a digital signature in the first place, THEN check that the signature emanates from a trusted key, then, eventually, check who’s deciding the list of trusted keys… those people, where are they?
Because the lack of trust, validation, verification, and more generally the lack of any credibility hasn’t stopped anything from spreading like a dumpster fire in a field full of dumpsters doused in gasoline. Part of my job is providing digital signature tools and creating “trusted” data (I’m not in sales, obviously), and the main issue is that nobody checks anything, even when faced with liability, even when they actually pay for an off-the-shelf solution to do so. And I’m talking about people who should care, not even the general public.
There are a lot of steps before “digitally signing everything” even gets on people’s radar. For now, a green checkmark anywhere is enough to convince anyone, sadly.
It could be a feature of web browsers. Images would get some icon indicating the valid signature, just like browsers already show the padlock icon indicating a valid certificate. So everybody would be seeing the verification.
But I don’t think it’s a good idea, for other reasons.
An individual wouldn’t verify this, but enough independent agencies or news orgs would probably care enough to verify a photo. For the vast majority we’re already too far gone to properly separate fiction and reality. If we can’t get into a courtroom and prove that a picture or video is fact or fiction then we’re REALLY fucked.
I think there’s enough people who care about this that you can just provide the data and wait for someone to do the rest.
I’d like to think like that too, but it’s actually experience with large business users that led me to say otherwise.
The problem is that “AI” doesn’t actually exist as a well-defined category. For example, Photoshop has features that are called “AI”. Should every designer be forced to label their work if they use some “AI” tool?
This is a problem with making violent laws based on meaningless language.
Yes the state should violently enforce its arbitrary laws in every aspect of our lives. \s
So does the EU AI Act.
My LinkedIn feed is 80% tech bros complaining about the EU AI Act, not a single one of whom is willing to be drawn on which exact clause it is they don’t like.
Oh, so just like with the GDPR, cool.
Ok, my main complaint about GDPR is that I had to implement that policy on a legacy codebase, and I’m pretty sure I have trauma from that.
Sounds like that codebase was truly awful for user privacy then.
Incredibly so, yes.
Skill issue.
My point is higher than yours, get on my level
My LinkedIn feed
Yes… it’s so bad that I just never log in until I receive a DM, and even then I log in, check it, and if it’s useful I warn people I don’t use LinkedIn anymore, then log out.
I even ignore DMs on LinkedIn; they’re mostly headhunters anyway.
Not a terrible resource when you’re actually looking for a job, but that’s more because all the automated HR intakes are a dumpster fire than because headhunters bring in any value.
I get it though, if you’re an upstart. Having to basically hire an extra guy just to do AI compliance is a huge addition to the barrier to entry.
That’s not actually the case for most companies though. The only time you’d need a full time lawyer on it is if the thing you want to do with AI is horrifically unethical, in which case fuck your little startup.
It’s easy to comply with regulations if you’re already behaving responsibly.
That’s true with many regulations. The quiet part that they’re trying to avoid saying out loud is that behaving ethically and responsibly doesn’t earn them money.
It’s comforting to know that politicians in the EU also have no clue what “AI” is.
Why do you say that
bleep bloop… I am a real human being who loves doing human being stuff like breathing and existing
How about butt stuff?
garbage in, garbage out
Nice
And if it hallucinates?
Straight to jail
That depends.
Devil’s advocate here. Any human can also hallucinate. Some of them even do it as a recreational activity.
Pretty sure that people who hallucinate are kidnapped and thrown in cages.
Yeah, and the people who pay those people tend to get really mad if they do that at work.
That might end like the cookie popups in the EU…
Same old corporations will ignore the law, pay a petty fine once a year, and call it the cost of doing business.
Be sure to tell this to “AI”. It would be a shame if this turned out to be a technically nonsensical law.
I am of the firm opinion that if a machine is “speaking” to me then it must sound like a cartoon robot. No exceptions!
I want my AI to sound like a Speak & Spell.
I propose that they must use vocaloid voices or that old voice code that Wasteland 3 uses for the bob the robot looking guys.
i would like my GPS to sound like Brian Blessed otherwise i want all computers to sound like Niki Yang
Will someone please tell California that “AI” doesn’t exist?
This is how politicians promote a grift by pretending to regulate it.
Worthless politicians making worthless laws.
If you ask ChatGPT, it says its guidelines include not giving the impression that it’s a human. But if you ask it to be less human because it is confusing you, it says that would break the guidelines.
ChatGPT doesn’t know its own guidelines because those aren’t even included in its training corpus. Never trust an LLM about how it works or how it “thinks” because fundamentally these answers are fake.
What if it’s foreign AI ?
Yeah for real, what does this mean exactly? All forms of machine learning? That’s a lot of computers at this moment; it’s just that we only colloquially call the chatbot versions “AI”. But even that gets vague: do reactive video game NPCs count as “AI”? Or all of our search algorithms and spell-check programs?
At that point, what’s the point? The disclosure would become as meaningless as websites asking about cookies or the list of things known to cause cancer in the state of California.