Over the last year I’ve been learning Swift and starting to put together some iOS apps. I’d definitely class myself as a Swift beginner.

I’m currently building an app and today I used ChatGPT to help with a function I needed to write. I found myself wondering if somehow I was “cheating”. In the past I would have used YouTube videos, online tutorials and Stack Overflow, and adapted what I found to work for my particular use case.

Is using ChatGPT different? ChatGPT explains the code it writes, and the code often still needs fettling to get it working, which makes me think it’s a useful learning tool. As long as I take the time to read the explanations and make sure I understand what the code is doing, it’s probably a good thing on balance.

I was just wondering what other people’s thoughts are.

Also, as a side note, I found that chucking code I had written into ChatGPT and asking it to comment every line worked pretty well and was a big time saver :D
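
In case it’s useful, here’s a rough before/after sketch of what I mean (the function is a made-up example for illustration, not code from my app):

```swift
// Made-up example: a small function before and after asking ChatGPT to comment it.

// Before: the uncommented code you might paste in.
func averageRating(for ratings: [Int]) -> Double? {
    guard !ratings.isEmpty else { return nil }
    return Double(ratings.reduce(0, +)) / Double(ratings.count)
}

// After: the same function with the kind of line-by-line comments that come back.
func averageRatingCommented(for ratings: [Int]) -> Double? {
    // Return nil early if there are no ratings, so we never divide by zero.
    guard !ratings.isEmpty else { return nil }
    // Sum the ratings, convert to Double, and divide by the count for the average.
    return Double(ratings.reduce(0, +)) / Double(ratings.count)
}
```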

  • @krixcrox@programming.dev

    Back in the day people looked things up in books. Then the internet came along and you didn’t need those heavy books anymore; you just typed your question into a search engine. Today we use ChatGPT to do the “searching” for us (obviously it’s not actually searching the internet, but you get what I mean). It’s just another step in making coding, and learning to code, easier and more accessible.

  • JackbyDev

    No, it’s not cheating, but also please don’t blindly trust it. Random people on the internet can be wrong too, but at least other people can correct them when they are. Stuff ChatGPT outputs is fresh for your eyes only.

    • @mrkite@programming.dev

      Agreed. While I’ve never used ChatGPT on an actual project, I’ve tested it on theoretical problems and I’ve never seen it give an answer that didn’t have a problem.

      So I would treat it like any answer on Stack Overflow: use it as a start, but definitely customize it and fix any edge cases.
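
      To make that concrete, here’s a made-up Swift sketch (not actual ChatGPT output) of the sort of edge case you usually end up patching yourself:

      ```swift
      // Hypothetical example of "use it as a start, then fix the edge cases".

      // First draft, the shape a generated answer often takes.
      // It crashes on an empty array because of the force-unwrapped max().
      func naiveHighestScore(_ scores: [Int]) -> Int {
          return scores.max()!
      }

      // After review: surface the empty case instead of crashing.
      func highestScore(_ scores: [Int]) -> Int? {
          return scores.max()
      }

      // Callers now decide what an empty list should mean.
      let topScore = highestScore([]) ?? 0
      ```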

      • JackbyDev

        It also seems to depend a lot on how popular what you’re asking about is. I asked it some questions about Docker and it helped me understand some nuances between different commands in Dockerfiles that I was having trouble with; Docker is pretty widely used. I then asked it some questions about how to use the jpackage command from Gradle and it couldn’t help at all.

  • @Dazawassa@programming.dev

    I really don’t think so. You are asking it how to write a function. It explains how the function works and sometimes even how to expand on it. You still have to integrate that function into your program yourself and tailor it to the purpose of the program. It’s far quicker than Stack Overflow giving you 8 functions that don’t work.

  • @axtualdave@lemmy.world

    ChatGPT is, at least for the moment, just a really fancy snippet repository with a search function that works really well.

    Is re-using code someone else wrote cheating? Nah.

    But, no matter where you get the code from (cough Stack Overflow), if you use it without understanding what it’s doing, you’re not doing yourself any favors.

    • @Deely@programming.dev

      I just want to add that ChatGPT is a “really fancy snippet repository” that sometimes randomly lies to you.

  • @JokersPistol@programming.dev

    Yes and no. If your goal is to learn how to code manually, then you are “cheating” in that you may not learn as much.

    If your goal is to learn how to utilize AI to assist you in daily tasks, I would say you’re not.

    If your goal is to provide value for others through how much you can produce in a given amount of time, then you’re definitely not.

  • @SuspectApe@programming.dev

    Cheating? What test or game are you playing where it might be considered cheating? Honestly, I feel it’s a legitimate tool for building applications, and it can help teach you along the way. If you can stomach using a Microsoft tool, then Bing Chat might be an even better option: it’s the same technology with, IMO, a better data set.

  • @truthy@programming.dev

    No, it’s not cheating. But you are expected to understand what your code does and how.

    And this brings us to the explanations it provides. Keep in mind that these AI tools excel at producing content that seems right, but they may very well be hallucinating. And just as with code, small details and exact concepts matter.

    I would therefore recommend verifying your final code against the official documentation, to make sure you actually understand it.

    In the end, as long as you don’t blindly trust the AI, whether for solutions or knowledge, it’s just another tool. Use it where it fits.

    • Baldur Nil

      I’d go as far as saying you should know what every line of code does, or you risk the whole thing having unexpected side effects. When you understand what the code is doing, you know which parts you should test.

  • The Bard in Green

    Not using it will make it awfully hard to compete with all the devs who ARE using it.

    Asking ChatGPT to write comments is a GREAT idea!

  • @SuperNerd@programming.dev

    I’m dealing with a new service written by someone who extensively cut and pasted from ChatGPT, got it to “almost done – just needs all the operational excellence type stuff to put it into production”, and left the project.

    Honestly we should have just scrapped it and rewritten it. It’s barely coherent and filled with basic bugs that have wasted so much time.

    I feel this style of sloppy coding workflow is maybe better suited to front-end code or a simple CRUD API for saving state, where you can immediately see if something works as intended, than to backend services that have to handle common-sense business logic like “don’t explode if there is no inventory” and so on.

    For this dev, I think he was new to the language and got into a tight feedback loop of hacking stuff together with ChatGPT without really trying to understand each line of code. I don’t think he learned as much as he would have if he’d applied himself to reading the library and language documentation, and so he’s still a weak dev, even though we gave him an opportunity to grow with a small greenfield service and several months to write it.

    • @funbike@programming.dev

      I wouldn’t consider the bugs ChatGPT’s fault, per se. The same thing can happen when blindly copy/pasting from Stack Overflow or a template GitHub project. If you are copy/pasting from anywhere, it’s even more important that you have good automated tests with good coverage, and that you take extra time to understand what you pasted.

      One of the things I do is generate high-level tests first, and then the implementation code. That way I know it works, and I can spend extra time reviewing the test code first to make sure it captures the correct goal(s).
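
      As a rough sketch of that order of operations (the names here are invented for illustration, not from a real project):

      ```swift
      import XCTest

      // Step 1: write (or generate) the high-level test first and review it carefully,
      // since it is the part that encodes the actual goal.
      final class CartTotalTests: XCTestCase {
          func testTotalAppliesDiscountAndHandlesEmptyCart() {
              XCTAssertEqual(cartTotal(prices: [], discount: 0.1), 0.0, accuracy: 0.001)
              XCTAssertEqual(cartTotal(prices: [10.0, 20.0], discount: 0.1), 27.0, accuracy: 0.001)
          }
      }

      // Step 2: only then generate the implementation and run it against the test.
      func cartTotal(prices: [Double], discount: Double) -> Double {
          let subtotal = prices.reduce(0, +)
          return subtotal * (1.0 - discount)
      }
      ```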

      Learning is another matter. Personally, ChatGPT has greatly accelerated my learning of libraries and other languages. I’ve also used it to help me grok a block of complex code, and to automatically comment and refactor complex code into something more understandable. But it can also be used as a crutch.

  • I would view ChatGPT as just an extension of Stack Overflow and Google. At the end of the day you still have to plug it into your broader codebase, and that’s what makes a good programmer. That, and debugging the issues you get afterwards.

  • @TeaHands@lemmy.world

    If you understand the code and are able to adapt it for your needs, it’s no different to copy-pasting from other sources, imo. It’s just a time saver.

    If you get to the point where you’re blindly trusting it with no ability to understand what it’s doing, then you have a problem. But that applies to Stack Overflow too.

  • @canpolat@programming.dev

    No, it’s not cheating (unless you are using it to do your homework, I guess). It’s a tool, and like any other we have to learn how to use it appropriately.

    But one needs to be aware of other ethical concerns related to using AI-generated code. The discussion revolves around companies (OpenAI, GitHub, etc.) training their models on code written by people who have not consented to its use as training data. In some cases the licensing is clear and allows such use, but in others it’s debatable (I’m not deeply involved in those discussions, so I can’t provide more details).

    When creating software, the value we bring is understanding the problem and asking the right questions that lead us to a good solution. In simple scenarios even a machine can do what we do, and we should definitely let the machine do it instead of spending our time on it.