• nogooduser@lemmy.world · 10 months ago

    I wish our test team were like that. Ours would respond with something like “How would I test this?”

    • humanspiral@lemmy.ca · 10 months ago

      The programmer should have written all the test cases, and I just run the batches and print out where their cases failed.

      • snooggums@lemmy.world · 10 months ago

        Ewww, no. The programmer should have run their unit tests, and maybe even told you about them. At a minimum, you should be testing edge cases the unit tests don’t cover, and replicating the unit tests if they don’t appear to be very thorough.
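        Something like this (a made-up pytest sketch, not anyone’s actual code) is the kind of gap I mean:

        ```python
        # Hypothetical function under test; the names are invented for illustration.
        def paginate(items: list, page_size: int) -> list:
            if page_size < 1:
                raise ValueError("page_size must be >= 1")
            return [items[i:i + page_size] for i in range(0, len(items), page_size)]

        # What the developer's unit test probably already covers:
        def test_paginate_splits_evenly():
            assert paginate([1, 2, 3, 4], 2) == [[1, 2], [3, 4]]

        # What the tester adds: the boundaries nobody thought about.
        def test_paginate_empty_list():
            assert paginate([], 10) == []

        def test_paginate_last_page_is_partial():
            assert paginate([1, 2, 3], 2) == [[1, 2], [3]]
        ```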

        • mspencer712@programming.dev · 10 months ago

          This.

          My unit and integration tests are for the things I thought of, and more importantly, don’t want to accidentally break in the future. I will be monumentally stupid a year from now and try to destroy something because I forgot it existed.

          Testers get in there and play, get creative, get evil, and discuss what they find. Is this a problem? Do we want to get out in front of it before the customer finds it? They aren’t the red team, they aren’t the enemy. We sharpen each other. And we need each other.
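          Roughly what I mean, as an invented sketch (nothing here is real project code):

          ```python
          # Hypothetical helper whose quirk I will absolutely forget about.
          def normalize_username(name: str) -> str:
              # Whitespace is stripped and case is folded because the login
              # path compares usernames case-insensitively.
              return name.strip().casefold()

          def test_normalize_username_is_case_insensitive():
              # Pins behavior other code depends on; if future-me "simplifies"
              # away the casefold(), this test fails instead of production.
              assert normalize_username("  Alice ") == "alice"
          ```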

        • nogooduser@lemmy.world · 10 months ago

          I think the main difference is that developers tend to test for success (i.e., does it work as defined), while testers should also test that it doesn’t fail when a real user gets hold of it.
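          As a rough, invented example of the two mindsets (pytest-style, made-up names):

          ```python
          import pytest

          # Hypothetical function under test.
          def apply_discount(price: float, percent: float) -> float:
              if not 0 <= percent <= 100:
                  raise ValueError("percent must be between 0 and 100")
              return round(price * (1 - percent / 100), 2)

          # Developer mindset: does it work as defined?
          def test_discount_works_as_defined():
              assert apply_discount(100.0, 25) == 75.0

          # Tester mindset: what happens when a user feeds it nonsense?
          @pytest.mark.parametrize("percent", [-10, 150])
          def test_discount_survives_bad_user_input(percent):
              with pytest.raises(ValueError):
                  apply_discount(100.0, percent)
          ```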