About your test suite ...

Update: Apparently you do need a LiveJournal account to complete this poll. Damn it :(

While this poll is primarily aimed at gathering information from Perl programmers, feel free to answer it if you use a different language. That being said, if you're using an XUnit framework, "number of tests" means, in this context "number of asserts" (in other words, a test method with three asserts would count as 3 tests, not one).

In the comments below, feel free to explain any answers, particularly if not all of your tests pass.

Poll #1223715 Size Does Matter

Does your primary code base have a test suite?


If you answered "no" to the above question, please explain why (150 chars)

How many tests are in the largest test suite you work with?

Fewer than 50 tests
50 to 100 tests
101 to 500 tests
501 to 2,000 tests
2,001 to 10,000 tests
More than 10,000 tests

How long does the aforementioned test suite take to run?

Fewer than 30 seconds
30 seconds to 1 minute
1 minute to 5 minutes
5 minutes to 10 minutes
10 minutes to 20 minutes
20 minutes or more

What percentage of your code is covered (rounding down)?

Don't Know
Less than 50%
50% to 74%
75% to 84%
85% to 94%
95% to 99%

Do all tests pass?


If you answered "no" to the above question, what percentage of your tests fail?

What is the primary language the test suite is in?

What test framework is the test suite based on? (e.g. Perl is probably Test::Harness and Java is often JUnit)

  • Current Mood: curious
  • Current Music: Ministry | Bad Blood
As an addendum, the language my test suite would be in if we had one is C++.
brentdax -- there are a couple of xUnit suites for Objective-C. There's ObjcUnit and OCUnit. Both are usable under Mac OS X and GNUstep environments.

By the way, I'm betting the reasons people don't unit test will fall into two general areas: 1) management discourages it for time reasons, or 2) the developers believe that the particular niche their software falls into "can't" be unit tested -- embedded code, GUI components, DB frameworks, etc.

Edited at 2008-07-15 04:54 pm (UTC)
After you've used TestNG for a bit (or if you already have), could you please let me know your thoughts on it? I was reading through it to get some ideas for TAP, but I wasn't terribly impressed with what I saw. The framework is powerful enough, but it leaves a bit to be desired. For example, from this Web site, here's a sample of its output XML:

<test-method status="PASS" signature="test2()" name="test2" duration-ms="0"
             started-at="2007-05-28T12:14:37Z" description="someDescription1"
             finished-at="2007-05-28T12:14:37Z"/>

Well, great. We've now duplicated the duration by having a start and end time and a duration. What happens if they don't match? I've found nothing about this in the description.

And if we look at that full xml to see context:

    <suite name="Suite1">
        <group name="group1">
            <method signature="com.test.TestOne.test2()" name="test2" class="com.test.TestOne"/>
            <method signature="com.test.TestOne.test1()" name="test1" class="com.test.TestOne"/>
        </group>
        <group name="group2">
            <method signature="com.test.TestOne.test2()" name="test2" class="com.test.TestOne"/>
        </group>
        <test name="test1">
            <class name="com.test.TestOne">
                <test-method status="FAIL" signature="test1()" name="test1" duration-ms="0"
                             started-at="2007-05-28T12:14:37Z" description="someDescription2">
                    <exception class="java.lang.AssertionError">
                            ... Removed 22 stack frames
                    </exception>
                </test-method>
                <test-method status="PASS" signature="test2()" name="test2" duration-ms="0"
                             started-at="2007-05-28T12:14:37Z" description="someDescription1"/>
                <test-method status="PASS" signature="setUp()" name="setUp" is-config="true" duration-ms="15"
                             started-at="2007-05-28T12:14:37Z" finished-at="2007-05-28T12:14:37Z"/>
            </class>
        </test>
    </suite>

Wow. That's breathtakingly ugly. Compare that to the equivalent in TAP with our optional YAML diagnostic syntax:

not ok 1 - someDescription2
    signature: test1()
    name: test1
    start: 2007-05-28T12:14:37Z
    end: 2007-05-28T12:14:37Z
    exception:
        class: java.lang.AssertionError
        stacktrace: |
            ... Removed 22 stack frames
ok 2 - someDescription2
    signature: test2()
    name: test2
    start: 2007-05-28T12:14:37Z
    end: 2007-05-28T12:14:37Z
    is-config: true

The beauty of that is a TAP parser only needs to see this:

not ok 1 - someDescription2
ok 2 - someDescription2

And everything else is optional gravy, making it an extremely useful test protocol. You can implement the core of a complete parser in just a few lines of code and add more as you go. Regrettably, we're still waiting on Schwern's work on Test::Builder 2 before we can get the YAML diagnostics completely working :/
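The "few lines of code" claim is easy to demonstrate. Here's a minimal sketch in Python (chosen purely for brevity here -- the real Perl parsers live in Test::Harness) of a core TAP parser that only recognizes "ok"/"not ok" lines and ignores everything optional:

```python
import re

def parse_tap(lines):
    """Core TAP parsing: classify "ok" / "not ok" lines; treat plans,
    comments, and indented YAML diagnostics as optional and skip them.
    Illustrative only -- not the actual Test::Harness implementation."""
    results = []
    for line in lines:
        if re.match(r'(not )?ok\b', line):
            results.append({
                'ok': not line.startswith('not ok'),
                # Strip the leading "[not ]ok <number> - " to get the description.
                'description': re.sub(r'^(not )?ok\s*\d*\s*(- )?', '', line).strip(),
            })
        # Anything else (e.g. "1..2" plans, "# diagnostics", indented YAML)
        # is safely ignored by this core parser.
    return results

tap_output = """\
not ok 1 - someDescription2
    signature: test1()
ok 2 - someDescription2
"""
results = parse_tap(tap_output.splitlines())
```

A fancier parser can then layer plan checking and YAML diagnostics on top without breaking this core.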

Does TestNG have any way of marking tests as "skip" or "todo"? (There's a way to exclude sets of tests, but that's not quite the same thing as "skip".)
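For reference, TAP itself handles both cases with inline directives on the test line rather than by excluding tests (the test names below are made up for illustration):

```
1..2
ok 1 - connects to server # SKIP no network on this box
not ok 2 - handles unicode # TODO not implemented yet
```

A SKIP means the test wasn't run at all; a failing TODO is an expected failure and doesn't fail the suite.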
By the way, I've been using TestNG too, but I'm not too advanced a programmer, ahaha -- the only thing I can say about it is that it starts up much more slowly than JUnit. It seems to run fine in suites, but single tests take about 40 seconds when run separately. Though maybe that's just a quirk of the repository I'm currently working with.
Oops. Didn't see the sentence "That being said, if you're using an XUnit framework, "number of tests" means, in this context "number of asserts" (in other words, a test method with three asserts would count as 3 tests, not one)."

Then it'd be "501 to 2,000 tests", I guess.
You can change your poll answers by clicking on the poll number at the top, then on "Fill out poll".
If the answer to the first question is no, we can't really answer any of the others.

Our code is mostly not structured appropriately to run isolated tests, and as a department we're still learning both how to do that and how to design appropriate tests. I think we may end up with bigger tests against a test database, first.
In that case, high-level integration tests targeting the UI or API can nail down basic behavior before you refactor to more testable code. UI and API integration tests are typically harder to debug, but they're great for finding bugs that unit tests can't catch.
On a related note, if anyone knows of a UI testing tool for XUL applications, I'd be thrilled if they would tell me.
...answers based on the last time I was working on a code base of any significance, which is to say at my last job. I am now developing a new set of tools from scratch, for which the answers will be mostly the same except for the number of tests (currently no idea) and the percentage of code covered (well over 50%, hopefully 100%, but in practice as close to 100% as I can get without feeling like I'm wasting time).
How am I to answer questions 2 through 7, when the answer to 1 is "no"?
There is no "none of the above" or "not applicable" option.
I don't really have a primary codebase, so I answered for the most recent module I've uploaded to the CPAN and which I think is actually finished - CPU::Emulator::Z80.

We don't have a primary codebase at work either -- there are several independent ones.
Until recently I was the only developer on what could be considered our primary code base; when I inherited the code there was no test suite, no version control, and all work was done in production. I've slowly (very slowly) built a test suite over the past year, though it is still greatly lacking. Now that we have a second developer to work on some of the backlog, I've started to move my focus to testing.