• starman2112@sh.itjust.works · ↑17 · 10 months ago

        I almost added that, but I’ll be real, I have no clue what a junior programmer is lmao

        For all I know it’s the equivalent to a journeyman or something

        • WuceBrillis@lemm.ee · ↑38 ↓1 · 10 months ago

          Most programmers don’t go on many journeys, it’s more like a basementman.

        • artiface@lemm.ee · ↑25 ↓2 · 10 months ago

          Junior programmer is who trains the interns and manages the actual work the seniors take credit for.

          • slappypantsgo@lemm.ee · ↑11 · 10 months ago

            I was gonna say, if this person is making $145k, they are not a “junior” in any realistic sense of the term. It would be nice if computer programming and software development became a legitimate profession.

          • hperrin@lemmy.ca · ↑9 · 10 months ago

            This is not true. A junior programmer takes the systems that are designed by the senior and staff level engineers and writes the code for them. If you think the code is the work, then you’re mistaken. Writing code is the easy part. Designing systems is the part that takes decades to master.

            That’s why when Elon Musk was spewing nonsense about Twitter’s tech stack, I knew he was a moron. He was speaking like a junior programmer who had just been put in charge of the company.

  • Anders429@programming.dev · ↑59 · 10 months ago

    Know a guy who tried to use AI to vibe code a simple web server. He wasn’t a programmer and kept insisting to me that programmers were done for.

    After weeks of trying to get the thing to work, he had nothing. He showed me the code, and it was the worst I’ve ever seen. Dozens of empty files where the AI had apparently added and then deleted the same code. Also some utter garbage code. Tons of functions copied and pasted instead of being defined once.

    I then showed him a web app I had made in that same amount of time. It worked perfectly. Never heard anything more about AI from him.

    • A_Union_of_Kobolds@lemmy.world · ↑22 ↓2 · edited · 10 months ago

      AI is very very neat but like it has clear obvious limitations. I’m not a programmer and I could tell you tons of ways I tripped Ollama up already.

      But it’s a tool, and the people who can use it properly will succeed.

      I’m not saying it’s a tool for programmers, but it has uses.

      • Emily (she/her)@lemmy.blahaj.zone · ↑17 · 10 months ago

        I think it’s most useful as an (often wrong) line completer more than anything else. It can take in an entire file and try to figure out the rest of what you’re currently writing. Its context window simply isn’t big enough to understand an entire project.

        That and unit tests. Since unit tests are by design isolated, small, and unconcerned with the larger project, AI has at least a fighting chance of competently producing them. That still takes significant hand-holding though.
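        Concretely, the kind of isolated test meant here looks something like the sketch below (the function and test names are invented for illustration); everything the model needs fits in one screen:

```python
# A self-contained function plus its tests: no project-wide context
# required, which is why an LLM has a fighting chance at writing them.
def slugify(title: str) -> str:
    """Turn a post title into a URL slug."""
    return "-".join(title.lower().split())

def test_slugify_basic():
    assert slugify("Junior Programmer") == "junior-programmer"

def test_slugify_collapses_whitespace():
    assert slugify("  lots   of   spaces ") == "lots-of-spaces"

test_slugify_basic()
test_slugify_collapses_whitespace()
```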

        • franzfurdinand@lemmy.world · ↑11 · 10 months ago

          I’ve used them for unit tests and it still makes some really weird decisions sometimes. Like building an array of json objects that it feeds into one super long test with a bunch of switch conditions. When I saw that one I scratched my head for a little bit.
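          For anyone who hasn’t run into it, that anti-pattern looks roughly like this sketch (all names invented), next to the conventional one-behaviour-per-test shape:

```python
import json

# The anti-pattern: one mega-test fed from a JSON blob of cases,
# with "switch conditions" deciding what each case even checks.
CASES = json.loads("""[
    {"op": "add", "a": 2, "b": 3, "expected": 5},
    {"op": "mul", "a": 2, "b": 3, "expected": 6}
]""")

def test_everything_at_once():
    for case in CASES:
        if case["op"] == "add":
            result = case["a"] + case["b"]
        elif case["op"] == "mul":
            result = case["a"] * case["b"]
        assert result == case["expected"], case

# The conventional shape: small, separately named tests,
# so a failure points straight at the broken behaviour.
def test_add():
    assert 2 + 3 == 5

def test_mul():
    assert 2 * 3 == 6
```

A failure in the first version reports only that “the” test failed; the second version names the broken behaviour for free.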

          • Emily (she/her)@lemmy.blahaj.zone · ↑4 · 10 months ago

            I most often just get it straight up misunderstanding how the test framework itself works, but I’ve definitely had it make strange decisions like that. I’m a little convinced that the only reason I put up with it for unit tests is because I would probably not write them otherwise haha.

            • franzfurdinand@lemmy.world · ↑3 · 10 months ago

              Oh, I am right there with you. I don’t want to write tests because they’re tedious, so I backfill with the AI at least starting me off on it. It’s a lot easier for me to fix something (even if it turns into a complete rewrite) than to start from a blank file.

      • Susaga@sh.itjust.works · ↑6 ↓1 · 10 months ago

        Funny. Every time someone points out how god awful AI is, someone else comes along to say “It’s just a tool, and it’s good if someone can use it properly.” But nobody who uses it treats it like “just a tool.” They think it’s a workman they can claim the credit for, as if a hammer could replace the carpenter.

        Plus, the only people good enough to fix the problems caused by this “tool” don’t need to use it in the first place.

        • CeeBee_Eh@lemmy.world · ↑4 ↓1 · 10 months ago

          But nobody who uses it treats it like “just a tool.”

          I do. I use it to tighten up some lazy code that I wrote, or to help me figure out a potential flaw in my logic, or to suggest a “better” way to do something if I’m not happy with what I originally wrote.

          It’s always small snippets of code and I don’t always accept the answer. In fact, I’d say less than 50% of the time I get a result I can use as-is, but I will say that most of the time it gives me an idea or puts me on the right track.
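          A trivial, made-up example of what that kind of “tightening” can mean in practice: a loop-and-append rewritten as a comprehension:

```python
# Before: the "lazy" version an assistant might be asked to tighten up.
def squares_lazy(n):
    result = []
    for i in range(n):
        result.append(i * i)
    return result

# After: the suggested rewrite; same behaviour, less ceremony.
def squares_tight(n):
    return [i * i for i in range(n)]

assert squares_lazy(5) == squares_tight(5) == [0, 1, 4, 9, 16]
```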

      • De Lancre@lemmy.world · ↑3 ↓1 · 10 months ago

        This. I have no problem combining a couple of endpoints in one script and explaining to QWQ what my final CSV file built from those JSONs should look like. But try to go beyond that, reach above 32k context, or show it multiple scripts, and the poor thing has no clue what to do.

        If you can manage your project and break it down into multiple simple tasks, you could build something complicated via an LLM. But that requires some knowledge of coding, and at that point chances are you’ll have better luck writing the whole thing yourself.
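        The “couple of endpoints into one CSV” job really is prompt-sized; a minimal sketch of its shape (data hardcoded here, where in reality it would come from the JSON endpoints):

```python
import csv
import io
import json

# Hypothetical payloads standing in for two JSON endpoint responses.
users = json.loads('[{"id": 1, "name": "ada"}, {"id": 2, "name": "bob"}]')
scores = json.loads('[{"user_id": 1, "score": 42}]')

# Join the two payloads on the user id, defaulting missing scores to 0.
score_by_user = {s["user_id"]: s["score"] for s in scores}

out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["id", "name", "score"])
for user in users:
    writer.writerow([user["id"], user["name"], score_by_user.get(user["id"], 0)])

print(out.getvalue())
```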

    • _____@lemm.ee · ↑8 · 10 months ago

      “no dude he just wasn’t using [ai product] dude I use that and then send it to [another ai product]'s [buzzword like ‘pipeline’] you have to try those out dude”

    • cantstopthesignal@sh.itjust.works · ↑6 · 10 months ago

      I’m an engineer and can vibe code some features, but you still have to know wtf the program is doing over all. AI makes good programmers faster, it doesn’t make ignorant people know how to code.

    • frezik@midwest.social · ↑3 · 10 months ago

      I understand the motivated reasoning of upper management thinking programmers are done for. I understand the reasoning of other people far less. Do they see programmers as one of the few professions where you can afford a house and save money, and instead of looking for ways to make that happen for everyone, decide that programmers need to be taken down a notch?

  • Lucidlethargy@sh.itjust.works · ↑47 ↓2 · 10 months ago

    AI is fucking so useless when it comes to programming right now.

    They can’t even fucking do math. Go make an AI do math right now, see how it goes lol. Make it a real-world problem and give it lots of variables.

    • SynopsisTantilize@lemm.ee · ↑6 · 10 months ago

      I have Visual Studio and decided to see what Copilot could do. It added 7 new functions to my game with no calls or feedback to the player. When I tested what it did… it used 24 lines of code in a 150-line .cs file to increase the difficulty of the game every time I take an action.

      The context here is missing, but just imagine someone going to Viridian Forest and being met with level 70s in Pokémon.

    • Avicenna@lemmy.world · ↑4 ↓1 · edited · 10 months ago

      It is not not useful. Don’t throw a perfectly good hammer in the bin because some idiots say it can build a house on its own. Just like with hammers, you need to make sure you don’t hit yourself on the thumb, and use it for its purpose.

    • cyberfae@lemmy.world · ↑4 ↓1 · 10 months ago

      I find it useful for learning once you get the fundamentals down. I do it by trying to find all the bugs in the generated code, then see what could be cut out or restructured. It really gives more insight into how things actually work than just regular coding alone.

      This isn’t as useful for coding actual programs though, since it would just take more time than necessary.

      • zenpocalypse@lemm.ee · ↑3 ↓1 · 10 months ago

        So true, it’s an amazing tool for learning. I’ve never been able to learn new frameworks so fast.

        AI works very well as a consultant, but if you let it write the code, you’ll spend more time debugging because the errors it makes are often subtle and not the types of errors humans make.

        • frezik@midwest.social · ↑6 · 10 months ago

          That might be the underlying problem. Software project management around small projects is easy. Anything that has a basic text editor and a Python interpreter will do. We have all these fancy tools because shit gets complicated. Hell, I don’t even like writing 100 lines without git.

          A bunch of non-programmers make a few basic apps with ChatGPT and think we’re all cooked.

        • andz@lemmy.world · ↑1 ↓1 · 10 months ago

          No doubt. I was merely suggesting that throwing math problems at it might not have been the intended use for what is essentially a language interpreter, obviously depending on the model in question.

  • miridius@lemmy.world · ↑28 ↓1 · 10 months ago

    In all seriousness though, I do worry for the future of juniors. All the things that people criticise LLMs for, juniors do too. But if nobody hires juniors, they will never become seniors.

    • Grazed@lemmy.world · ↑6 · 10 months ago

      This is completely tangential but I think juniors will always be capable of things that LLMs aren’t. There’s a human component to software that I don’t think can be replaced without human experience. The entire purpose of software is for humans to use it. So since the LLM has never experienced using software while being a human, there will always be a divide. Therefore, juniors will be capable of things that LLMs aren’t.

      Idk, I might be missing a counterpoint, but it makes sense to me.

      • ChickenLadyLovesLife@lemmy.world · ↑3 · 10 months ago

        The entire purpose of software is for humans to use it.

        The good news is that once AI replaces humans for everything, there will be no need to produce software (or anything else) for humans and AI will be out of work.

        • NιƙƙιDιɱҽʂ@lemmy.world · ↑1 · 10 months ago

          Honestly, I could see a world, not super far from now, but not right around the corner, where we’ve created autonomous agent-driven robots that carry on doing the jobs they were made to do long after the last of the humans are gone. An echo of our insane capitalistic lives, endlessly looping into eternity.

  • mindbleach@sh.itjust.works · ↑25 ↓1 · 10 months ago

    Everyone’s convinced their thing is special, but everyone else’s is a done deal.

    Meanwhile the only task where current AI seems truly competitive is porn.

  • null_dot@lemmy.dbzer0.com · ↑25 ↓2 · 10 months ago

    I take issue with the “replacing other industries” part.

    I know that this is an unpopular opinion among programmers, but all professions have roles that range from small skill sets and little cognitive ability to large skill sets and high-level cognitive abilities.

    Generative AI is an incremental improvement in automation. In my industry it might make someone 10% more productive. For any role where it could make someone 20% more productive that role could have been made more efficient in some other way, be it training, templates, simple conversion scripts, whatever.

    Basically, if someone’s job can be replaced by AI then they weren’t really producing any value in the first place.

    Of course, this means that in a firm with 100 staff, you could get the same output with 91 staff plus Gen AI. So yeah in that context 9 people might be replaced by AI, but that doesn’t tend to be how things go in practice.

    • andioop@programming.dev · ↑2 · 10 months ago

      I know that this is an unpopular opinion among programmers, but all professions have roles that range from small skill sets and little cognitive ability to large skill sets and high-level cognitive abilities.

      I am kind of surprised that is an unpopular opinion. I figure there is a reason we compensate people for jobs. Pay people to do stuff you cannot, or do not have the time to do, yourself. And for almost every job there is probably something that is way harder than it looks from the outside. I am not the most worldly of people but I’ve figured that out by just trying different skills and existing.

      • null_dot@lemmy.dbzer0.com · ↑3 · 10 months ago

        Programmers like to think that programming is a special profession which only super smart people can do. There’s a reluctance to admit that there are smart people in other professions.

        • andioop@programming.dev · ↑1 · 10 months ago

          Luckily I have not met any programmer like that yet, let’s keep our fingers crossed.

          I’m willing to believe the bar to pass to be a successful programmer requires a certain level of problem-solving skill and intelligence; but that doesn’t mean no other profession has smart people. I’d imagine lots of other professions have a similar bar to pass, and even ones with lower bars to pass to succeed in that profession probably still have their prodigies and geniuses.

      • null_dot@lemmy.dbzer0.com · ↑4 · 10 months ago

        I’m not really clear on what you’re getting at.

        Are you suggesting that the commonly used models might only be an incremental improvement, but some of the less common models are ready to take accountants’ and lawyers’ and engineers’ and architects’ jobs?

  • Wanpieserino@lemm.ee · ↑22 ↓1 · 10 months ago

    My mate is applying to Amazon as warehouse worker. He has an IT degree.

    My coworker in the bookkeeping department has two degrees. Accountancy and IT. She can’t find an IT job.

    At the other side though, my brother, an experienced software developer, is earning quite a lot of money now.

    Basically, the industry is not investing in new blood.

      • boonhet@lemm.ee · ↑2 · 10 months ago

        My company was desperate to find a brand new dev straight out of the oven we could still mold to our sensibilities late last year when everything seemed doomed. Yes, it was one hire out of like 10 interviewed candidates, but point is, there are companies still hiring. Our CTO straight up judges people who use an LLM and don’t know how the code actually works. Mr. “Just use an AI agent” would never get the job.

      • Wanpieserino@lemm.ee · ↑1 · 10 months ago

        Don’t you worry, my job will be replaced by AI as well. By 2026, Peppol invoices will be mandatory in Belgium, reducing bookkeepers’ workload.

        ITers replacing my job: 😁😁😁

        ITers replacing their own jobs: 😧😧😧

    • fuck_u_spez_in_particular@lemmy.world · ↑2 · 10 months ago

      Basically, the industry is not investing in new blood.

      Yeah I think it makes sense out of an economic motivation. Often the code-quality of a junior is worse than that of an AI, and a senior has to review either, so they could just directly prompt the junior task into the AI.

      The experience and skill to quickly grasp code and intention (and having a good initial idea where it should be going architecturally) is what is asked, which is obviously something that seniors are good at.

      It’s kinda sad that our profession/art is slowly dying out because juniors are slowly replaced by AI.

      • Terrasque@infosec.pub · ↑1 · 10 months ago

        Yeah, I’ve been seeing the same. Purely economically it doesn’t make sense with junior developers any more. AI is faster, cheaper and usually writes better code too.

        The problem is that you need junior developers working and getting experience, otherwise you won’t get senior developers. I really wonder how development as a profession will be in 10 years

  • gravitas_deficiency@sh.itjust.works · ↑18 · 10 months ago

    Lmfao I love these threads. “I haven’t built anything myself with the thing I’m claiming makes you obsolete but trust me it makes you obsolete”

  • digitalnuisance@infosec.pub · ↑17 · 10 months ago

    I had a dude screaming pretty much the same thing at me yesterday on here (on a different account), despite the fact that I’m senior-level, near the top of my field and that all the objective data as well as anecdotal reports from tons of other people says otherwise. Like, okay buddy, sure. People seem to just like fighting things online to feel better about themselves, even if the thing they’re fighting doesn’t really exist.

    • Event_Horizon@lemmy.world · ↑7 · 10 months ago

      I’m a senior BA working on a project to replace some outdated software with a new booking management and payment system. One of our minor stakeholders is an overly eager tech bro who insists on bringing up AI in every meeting, he’s gone as far as writing up and sending proposals to myself and project leads.

      We all just roll our eyes when a new email arrives. Especially when there’s almost no significant detail in these proposals; it’s all conjecture based off what he’s read online… on tech bro websites.

      Oh, and the best part: this guy has no experience in system development or design or anything AI-related. He doesn’t even work in IT. But he researches AI in his spare time and uses it as a side hustle…

  • maplebar@lemmy.world · ↑16 · 10 months ago

    AI isn’t ready to replace just about anybody’s job, and probably never will be technically, economically or legally viable.

    That said, the C-suite class are certainly going to try. Not only do they dream of optimizing all human workers out of every workforce, they also desperately need to recoup as much as they can of the sunk cost they’ve collectively dumped into the technology.

    Take OpenAI for example, they lost something like $5,000,000,000 last year and are probably going to lose even more this year. Their entire business plan relies on at least selling people on the idea that AI will be able to replace human workers. The minute people realize that OpenAI isn’t going to conquer the world, and instead end up as just one of many players in the slop space, the entire bottom will fall out of the company and the AI bubble will burst.

  • meliante@lemm.ee · ↑13 · 10 months ago

    We’re still far away from Al replacing programmers. Replacing other industries, sure.

    Right, it’s the others that are cooked.

  • Lovable Sidekick@lemmy.world · ↑10 · 10 months ago

    I’ve always said as a software developer that our longterm job is to program ourselves out of a job. In fact, in the long term EVERYBODY is “cooked” as automation becomes more and more capable. The eventual outcome will be that nobody will have to work. AI in its present state isn’t ready at all to replace programmers, but it can be a very helpful assistant.

    • fuck_u_spez_in_particular@lemmy.world · ↑1 · 10 months ago

      but it can be a very helpful assistant.

      It can, but when stuff gets slightly more complex, being a fast typist is usually more efficient and results in better code.

      I guess it really depends on the aspirations for code quality and complexity (yes, it’s good at generating boilerplate). For a one-time-use script I don’t care much about, quickly written from a prompt, I’ll use it.

      Working on a big codebase, I don’t even get the idea to ask an AI, you just can’t feed enough context to the AI that it’s really able to generate meaningful code…

      • Lovable Sidekick@lemmy.world · ↑2 · edited · 10 months ago

        I actually don’t write code professionally anymore, I’m going on what my friend says - according to him he uses chatGPT every day to write code and it’s a big help. Once he told it to refactor some code and it used a really novel approach he wouldn’t have thought of. He showed it to another dev who said the same thing. It was like, huh, that’s a weird way to do it, but it worked. But in general you really can’t just tell an AI “Create an accounting system” or whatever and expect coherent working code without thoroughly vetting it.

        • fuck_u_spez_in_particular@lemmy.world · ↑1 · 10 months ago

          I’ll use it often as well. But when the situation is complex and needs a lot of context/knowledge of the codebase (which for me at least is often the case), it still seems worse/slower than just coding it yourself (it doesn’t grasp details). Though I do like how quickly I can come up with quick-and-dirty scripts (in Rust, for the lulz and the speed/power).

        • fuck_u_spez_in_particular@lemmy.world · ↑1 · 10 months ago

          Ughh, I tried the Gemini model and I’m not too happy with the code it came up with; there are a lot of intricacies and concepts the model doesn’t grasp well enough IMO. That said, I’ll keep reevaluating this; converting large chunks of code often works OK…

          • Terrasque@infosec.pub · ↑1 · 10 months ago

            Well, it wasn’t a comment on the quality of the model, just that the context limitation has already been largely overcome by one company, and others will probably follow (and improve on it further) over time. Especially as “AI Coding” gets more marketable.

            That said, was this the new gemini 2.5 pro you tried, or the old one? I haven’t tried the new model myself, but I’ve heard good things about it.