• daniskarma@lemmy.dbzer0.com · 13 hours ago (edited)

    Plenty of good programmers use AI extensively while working. Me included.

    Mostly as an advanced autocomplete, template builder, or documentation parser.

    You obviously need to be good at it so you can see at a glance whether the generated code is good or bullshit. But if you are good, it can really speed things up without any risk, as you will only copy code that you know is good and discard the bullshit.

    Obviously you cannot develop without programming knowledge, but with programming knowledge it's just another tool.

    • Nalivai@lemmy.world · 9 hours ago

      I maintain a strong conviction that if a good programmer uses an LLM in their work, they just add more work for themselves, and if a less-than-good one does it, they add new, exciting, and difficult-to-find bugs while maintaining false confidence in their code and themselves.
      I have seen so much code that looks good on first, second, and third glance but is actually full of shit, and I was only able to find that shit through external validation, like talking to the dev or brainstorming ways to test it, things you categorically cannot do with an unreliable random-word generator.

      • daniskarma@lemmy.dbzer0.com · 9 hours ago (edited)

        That's why you use unit tests and integration tests.
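
        For instance, here is a minimal sketch of that safety net (the function and tests are hypothetical, assuming pytest): an LLM-suggested helper that looks fine at a glance, plus a unit test that exposes the edge case it misses.

        ```python
        # Hypothetical LLM-suggested helper: looks correct at a glance,
        # but divides by zero on an empty list.
        def average(values):
            return sum(values) / len(values)

        def test_average_basic():
            assert average([2, 4, 6]) == 4

        def test_average_empty_list():
            # Fails with ZeroDivisionError, exposing the missing guard
            # before the snippet ever gets merged.
            assert average([]) == 0
        ```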

        I can write bad code myself or copy bad code from who-knows-where. It's not something introduced by LLMs.

        Remember the famous Linus letter? "You coded this function without understanding it, and thus your code is shit."

        As I said, it's just a tool, like many others before it.

        I use it as a regular practice while coding. And truth be told, reading my code afterwards I could not distinguish which parts came from the LLM and which parts I wrote entirely myself; honestly, I don't think anyone would be able to tell the difference.

        It would probably be a nice idea to run some kind of Turing test: set up a blind test where people try to pick out the AI-written parts of some code, and see how precisely they can tell them apart.
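
        A toy sketch of such a blind test (the setup and snippets are entirely hypothetical): shuffle labelled snippets, hide the labels, and score how often a reviewer guesses the origin correctly.

        ```python
        import random

        # Each snippet is paired with its true origin; labels stay hidden.
        snippets = [
            ("def add(a, b):\n    return a + b", "human"),
            ("def sub(a, b):\n    return a - b", "ai"),
        ]

        random.shuffle(snippets)
        correct = 0
        for code, true_label in snippets:
            print(code)
            guess = input("human or ai? ").strip().lower()
            correct += guess == true_label

        print(f"{correct}/{len(snippets)} correct")
        ```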

        I may come back with a particular piece of code that I specifically remember being output from DeepSeek, and within the whole context it would probably be indistinguishable.

        Also, not all LLM usage is about copying from it. Many times you paste code into it and ask it to explain the code to you, or ask general questions. For instance, to find specific functions in C#'s extensive libraries.