I assure you, teaching as the profession we have today is not at all safe.
Cooking is something that requires advanced robotics or some kind of heavily modular, factory-like automated meal production line, not AI. Though AI could certainly assist in developing such a system.
Drivers are being actively replaced right before our eyes.
A lot of lawyer work is already being heavily automated, even without AI. Beyond that, it's "technically" replaceable with AI, but on a literal legal level that's not likely possible right now. I think automating some aspects of being a lawyer might be beneficial, but certain elements would be downright dystopian if fully automated.
Doctors' work is also already being partially automated, but that's arguably a very good thing: it may hold the key to a lot of medical breakthroughs, and it might finally make sense of all that personal medical data people have been collecting ever since that became a thing. Given sufficient time, it might also significantly reduce the cost of highly effective personal healthcare.
Teacher work could probably be partially automated, but getting kids to pay attention to a lesson, discipline, safety, etc. would likely require a human to be around, if only for liability.
modular factory-like automated meal production line, not AI.
Define AI… LLMs are just a part of that.
Yeah, Artificial Intelligence is a pretty broad category of technologies. Even so, robotics and automation are not AI. You could pair a robot or an automated factory with an AI of some kind, or use an AI to design them, and they're related in that they both involve computer technology. Still, not the same thing.
A robotic arm in a car factory is a robot, but it doesn't have AI in it; it's usually given a set of commands to repeat (see the sketch below).
A Rube Goldberg machine is technically automated once initialized. It's not AI.
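To make the distinction concrete, here's a minimal sketch of that kind of fixed command sequence. It's purely hypothetical: the `arm` object, its `move_to`/`weld` methods, and the waypoints are made up for illustration, not any real controller API.

```python
# Hypothetical sketch: a "dumb" automated arm that replays the same
# hard-coded program on every part. Nothing here learns or decides
# anything, which is the point: automation without AI.

WELD_SEQUENCE = [
    ("move_to", (0.50, 0.20, 0.30)),  # approach the panel
    ("weld", 2.0),                    # weld for 2 seconds
    ("move_to", (0.10, 0.20, 0.60)),  # retract to the home position
]

def run_cycle(arm):
    """Replay the fixed steps; an AI-driven cell would instead look at
    sensor input and decide what to do."""
    for command, argument in WELD_SEQUENCE:
        getattr(arm, command)(argument)  # e.g. arm.move_to((x, y, z))
```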
Any personals here?
Meanwhile my (college, btw) teacher suggests we use ChatGPT if we need help. Bro wants to replace himself.
AI hasn’t replaced translators, and the attempt to use it to replace artists and journalists isn’t going as well as you would assume. AI isn’t replacing any skilled position. Anyone who told you it will is either selling you something or dreadfully ignorant on the topic.
The rich will always have money to pay better people to make beautiful things for them
Just be useful to the rich and you’ll survive
Just like they planned it
I’d rather make them fertilizer
I just watched a movie (Geostorm) where these obviously super wealthy people were in a skyscraper and the movie’s like “oh no, they might die if no one stops this!”
Good? I’m more concerned about all the people below them getting swept away. These rich fucks should finally feel fear for fucking once.
Zero argument here
I still think that all jobs are, in general, safe for the foreseeable future. But we will be expected to use AI tools and just produce more and more, so that a few people will gain more and more resources and power.
E.g. as engineers we will do less and less actual planning, but we will run AIs as if they were a team of engineer slaves.
And I think it will be similar in other fields. A music composer will run AIs to compose parts of a song, adjust them, readjust other parts, until the song is good. I mean, afaik this is already how much of it works.
Personal is a career?
Probably a hallucination of the AI that generated this
I assumed it was supposed to be Personal Assistant, but the text got cut off.
When I see these kinds of posts I just look over at the vibe coders and laugh harder than at any joke about AI taking our jobs.
Except Vibe-Coders are kicking back & sipping margaritas & your job is still gone
Lol. Vibe coders aren’t taking anyone’s job. There have always been shitty engineers and now we just call them vibe coders.
I wanted robots to do my menial unpleasant chores for me so I’d have more time to do art, writing, and analytics. I didn’t want robots to do all the art, writing, and analytics so I had more time for chores & menial tasks 😭
Oh man, translation really is not possible with AI. You have no idea how little languages have in common. A lot of terms don’t mean one single thing; they combine concepts you don’t have, or lean on associations, to point at a thing.
My dad said, about learning a new language, “cat means cat, not gato, don’t translate,” and I think that holds up pretty well from my experience.
I mean, given that “AI” here means language models built on context and the relations between words, I’d argue translation is one of the more applicable jobs compared to what’s listed in the OP. It isn’t capable of doing any of them well, but I just wouldn’t argue that translation is outside the realm of what’s listed above.
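For what it’s worth, here’s a minimal sketch of what that looks like in practice, assuming the Hugging Face `transformers` library and a small public translation model (neither is mentioned above, they’re just a convenient example):

```python
# Minimal sketch: translation with a pretrained seq2seq language model.
# Assumes `transformers` (and a backend like PyTorch) is installed;
# "t5-small" is just one publicly available example model.
from transformers import pipeline

translator = pipeline("translation_en_to_fr", model="t5-small")

result = translator("The cat is sleeping on the chair.")
print(result[0]["translation_text"])  # e.g. "Le chat dort sur la chaise."
```

Which is also roughly why it trips over the cases mentioned above: it maps the word relations it has seen, it doesn’t understand the concepts behind them.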
Automation and job replacement is a good thing. The reason it feels bad is because we’ve tied the ability to satisfy our basic needs to employment. In an economic model that actually isn’t a dystopian hellscape, robots replacing jobs is something to celebrate.
And to switch our economic model to one in which people can thrive without pissing the vast majority of their lives away on the grind, we just need to pull ourselves up by our bootstraps!
Why is Lemmy filling up with AI posts? It’s worse that this is on c/comicstrips.
It’s not a Lemmy thing, it’s a global phenomenon. Humans are using AI more than ever, and believe it or not, humans use Lemmy.
But it’s not a gradual change. AI posts used to be rare; in 2 days I found more AI posts outside of a community made for AI-generated pictures than in the 2 years I have used Lemmy.
That’s because this is the first time AI comics have been passable. The quality simply wasn’t there before.
Yeah humans are still far better, but this could be considered “good enough”.
This isn’t just comics. c/politicalmemes has so many AI-generated images.
I think the point of this comic in particular is to show that AI is already taking over art, but since it’s done badly, you have to ask: at what cost is it taking over these jobs?
When on c/comicstrips, I don’t think it’s unreasonable for people to expect the art to be from real people.
“Theft” only applies to the poor. Rich assholes and their megacorps will pay judges to tell you so
Those images look nothing alike unless you look only at the contrasted regions… Which, fair enough, could indicate someone taking the outline of the original, but you hardly need AI to do that (tracing has existed for a while), and it’s certainly something human artists do as well, both as practice and as artistic reinterpretation (re-using existing elements in different, transformative ways).
It’s hard to argue that the contrast pattern of an image is distinctive enough to be someone’s property, whether under copyright or by a layman’s judgement. It easily meets the bar of significant enough transformation.
It’s easy to see why: nobody would confuse it with the original. Assuming the original is the one on the right, it looks way better and more coherent. If this person wanted to just steal from this Arcipello, they’re doing a pretty bad job.
EDIT: And I doubt anyone denies the existence of thieves, whether they use AI or not. But the assertion that one piece can somehow justify sweeping judgements about a multi-faceted technology that by this point at least hundreds of thousands, if not millions, of people are using, from hobbyist tinkerers to technical artists, is ridiculous.
AI can absolutely produce copyrighted content if it’s prompted to. Name-drop an artist in Midjourney and you will be able to prompt their style; see this list of artists and prompted images. So you can tweak the settings a bit to heavily weight their name, broadly describe the composition of the work you’re looking to approximate, and you can absolutely produce something close to their original works.
The image is wrong because the original artwork is not stolen. It is part of a dataset by LAION (or another similar dataset, basically a set of text-image pairs where each image is linked at its original source). To train the imagegen, its company had to download a temporary copy, which copyright law exempts from infringement. There is no original artwork sitting in a database accessible by Midjourney, just the numerical relationships learned from the image-text pairs.
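Roughly, such a text-image pair looks like this (a sketch only; the field names are illustrative, not the actual LAION schema):

```python
# Illustrative sketch of a LAION-style record: a caption plus a URL
# pointing back to the image at its original source, not a copy of
# the artwork itself. Field names are made up for illustration.
from dataclasses import dataclass

@dataclass
class TextImagePair:
    caption: str       # the text half of the pair
    image_url: str     # link to where the image is hosted
    similarity: float  # model-scored match between caption and image

example = TextImagePair(
    caption="a painting of a lighthouse at sunset",
    image_url="https://example.com/lighthouse.jpg",
    similarity=0.31,
)
```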
On the other hand, AI can obviously produce content in violation of copyright - like here. But that’s specifically being prompted by the user. You can see other examples of this with Grok generating Mickey Mouse and Simpsons characters. As of right now, copyright violations are the legal responsibility of the users generating the content - not the AI itself.
I think you meant to respond to someone else, as I pretty much agree(d) with everything you’re saying and have not claimed otherwise. In fact, in my very post I said, in more layman’s terms, that it was very likely this person used img2img or ControlNet to copy the layout of the image. I think it’s less likely they got something this similar unguided, although it’s possible depending on the model, or by somehow locking the prompt onto the original work.
But the one point I do disagree with is that this is a violation of copyright, as I explained before. For it to be a violation, it would need to look substantially more similar to the original; the one consistent element between the two is the rough layout of the image (the contrasted areas), and for the rest most of the content is very different. You notice the similarity of the contrasted areas much more easily because the images are sized down so much.
I hope you understand, as you seem more knowledgeable than the people who downvoted without leaving a comment: you are allowed to use ideas and concepts from others without infringing on their work, since without that the creative industry literally couldn’t function. And yes, avoiding infringement is the responsibility of anyone using these models.
This person skirts too close in my eyes by pretty much 1:1 copying the layout, but it’s almost certainly still fine, since, again, a human doing this with an existing piece of work would also be fine (e.g. the many replicas / traces of the Mona Lisa).
Hell, if you take a look at the image in this very Lemmy post, which was almost certainly taken from someone else, it makes a much better case for copyright infringement, since it has the same layout, nearly identical people in the boxes, and the same general message and concepts.
But in the end, copyright is different per jurisdiction and sometimes even between judges. Perhaps there is a case somewhere. It’s just (in my opinion) very unlikely to succeed based on the limited elements that are substantially similar.
EDIT: Added the section about the Mona Lisa replicas for further clarification.
Hm yeah on second look the images aren’t as comparable as I expected. I just saw the general composition in the thumbnails and assumed more similarity. I do think they probably prompted the original artist in the generated work, though, which kind of led to my thoughts in my op.
Yeah, that’s also a fair enough conclusion. I think it’s a bit too convenient that the rest of the image looks a lot worse (much clearer signs of botched AI generation) while the layout remains pretty much exactly the same, which to me looks like selective generation.
You are speaking bollocks. There are already many lawsuits by artists against the so-called AI engines, and there are boundaries on how much you can copy from a specific artwork, logo, design or whatever. For example, if you take the Coca-Cola logo and slightly change it, even if it doesn’t say Coca-Cola you will still face copyright infringement laws. Nobody denies the existence of thieves; that’s why people do whatever they can to protect their work.
Lawsuits, yes. But a lawsuit is not won by default; it is an assertion for the court to rule on, and so far regarding AI, none have been won. And yes, there are boundaries on when work turns into copyright infringement, but those have specific criteria, and regions of contrast do not suffice by any measure. Yes, even parts of the Coca-Cola logo can be reinterpreted without infringing. Why do you think so many off-brands skirt as close as possible to it without infringing?
They don’t! And most of those lawsuits are still in process
That’s what I said, yes.
Everyone thinks their own line of work is safe because everyone knows the nuances of their own job. But the thing that gets you is that the easier a job gets the fewer people are needed and the more replaceable they are. You might not be able to make a robot cashier, but with the scan and go mobile app you only need an employee to wave a scanner (to check that some random items in your cart are included in the barcode on your receipt) and the time per customer to do that is fast enough that you only need one person, and since anyone can wave a scanner you don’t have much leverage to negotiate a raise.