So, the short story about the disruption in implementing Tag that I alluded to is that I got sidetracked with other things, and couldn't spare the attention to Tag for the best part of two weeks. The specific details why aren't important, though: in a real-world dev-shop environment, similar derailments happen for any number of reasons, from temporary changes in priorities to critical bugs that need attention right now; the list of possible causes is probably huge. The important part is what the effects of the derailment were:
- Development was paused mid-stream;
- It was set aside for a relatively long period, and despite making some efforts to at least try to keep track of where I'd left off, I ended up being away from it for too long to remember what I was thinking at the time;
- I hadn't gotten through completely documenting (or in a few cases, even loosely defining) many of Tag's members; and
- I still had a fair number of unit-tests that I'd had to defer because of dependencies on other classes: I'd picked Tag back up after finishing the unit-testing of Namespace, minus some dependencies on Tag.
Some of this disruption could have been avoided, maybe, if I'd chosen to build out the concrete classes in a different order. As a result of my decision to take them in the order I did, I was left in the position of waiting to finish the unit-testing of Namespace until I was done with implementing Tag, because of some dependencies between the two: I'd rather get all of the unit-testing resolved at once, after Tag is done and the markup module is more complete. In retrospect, that was maybe not the best decision I could've made. I haven't gone back yet to look at the interconnections of the markup classes to see if there was a better, less troublesome path I could have taken, so it may well be that there isn't one. I'm going to plan to do that after I finish the module, and if there is anything worthwhile that I discover, I'll be sure to post it.
Where That Left Me
As a result, I spent a day or so thrashing about trying to figure out where I'd left off and what my next steps were. Not unlike coming into a project that some other developer has started in a real-world development position, really.
And that is where having solid unit-testing policies and practices came to the rescue, I think. With what had been completed, it was a matter of a few minutes to set up the testTag test-case class and fire off a test-run.
That gave me a couple of lists of missing test-methods, one for properties of Tag, and one for its methods. It took maybe another half hour or so to stub out all of those test-methods, so that all of them were returning failures, and then another couple of days to work my way through all of the failing tests and get them to pass.
All told, there were three types of tasks I had to undertake, guided by those tests:
- Implement unit-tests on properties and methods that had been completed;
- Implement unit-tests on properties and methods that were implemented, but also broken in some fashion; and
- Implement unit-tests that revealed that I hadn't even defined the Tag member that they were supposed to test, then implement those missing members.
Working through those tasks, I had several things going for me:
- I had member-requirements enforced by the test-methods;
- I had, at least in some cases, functional requirements for those members that were easily converted into usable test-methods;
- In the cases where I didn't have solid functional requirements, I could refer to the JavaScript API that I was trying to maintain consistency with for most members; and
- In the (few) remaining cases where I was creating new functionality, I had documented what those members were supposed to do, or had a very good idea of what I wanted them to be capable of.
Because I was feeling pressed for time, I didn't think to capture a lot of the results of those initial test-runs. In fact, it wasn't until I'd gotten a fair way through them that it occurred to me that what I was going through might be worth posting about, and by that time my results looked like this:
########################################
Unit-test results
########################################
Tests were successful ... False
Number of tests run ..... 219
  + Tests ran in ......... 0.02 seconds
Number of errors ........ 0
Number of failures ...... 92
########################################
By the time Tag was complete, that had grown a bit:

########################################
Unit-test results
########################################
Tests were successful ..... False
Number of tests run ....... 225
  + Tests ran in ........... 0.12 seconds
Number of errors .......... 0
Number of failures ........ 16
Number of tests skipped ... 77
########################################

I'd also decided that I wanted to be able to see both a summary of the number of test-methods that I'd explicitly skipped, and some details about those skipped test-methods. A lot of them were skipped because they were the various getter-, setter- or deleter-methods for properties that were completely tested in the test-method for the property. I'd also force-skipped the items that were dependent on an implementation of some other class that I hadn't gotten to yet (MarkupParser):
########################################
SKIPPED
#--------------------------------------#
test_DelParent (__main__.testBaseNode) - _DelParent is tested in testparent
test_GetParent (__main__.testBaseNode) - _GetParent is tested in testparent
...
test_SetParent (__main__.testBaseNode) - _SetParent is tested in testparent
...
test_DelinnerHTML (__main__.testTag) - ## DEFERRING until MarkupParser is implemented
...
test_GetinnerHTML (__main__.testTag) - ## DEFERRING until MarkupParser is implemented
...
test_SetinnerHTML (__main__.testTag) - ## DEFERRING until MarkupParser is implemented
...
testcloneNode (__main__.testTag) - ## DEFERRING until MarkupParser is implemented
testinnerHTML (__main__.testTag) - ## DEFERRING until MarkupParser is implemented
...
########################################
FAILURES
The take-away from this entire story, for me, boiled down to this:
Having a unit-testing policy, and processes that implement that policy, can help a developer resume their efforts after a disruption, as well as ensuring that changes to code don't break anything.
Even with that, though, Tag still ended up taking longer to finish than I'd expected; an argument, perhaps, for picking the sequence of classes for development more carefully...
Other Things that I Encountered or Thought Of
Here's a potentially-useful trick, with some back-story. BaseNode implements some concrete functionality that's inherited by CDATA, Comment, Tag and Text. I'll use the nextElementSibling property as the example, but there are seven other properties that have the same relationship to the same derived classes. In order to really test those properties, there needs to be a class that has a complete, concrete implementation derived from BaseNode.
Normally, in building out unit-tests for an abstract class like BaseNode, I'd also define a derived class as part of the unit-testing module (BaseNodeDerived, for example), and would use instances of that class as the test-objects in the various test-methods for those concrete items. At some point while I was working through the pile of unit-tests for Tag, it occurred to me that it would be possible (though maybe not desirable in this case) to have one of the test-methods for BaseNode require test-methods in the test-case classes for its derived classes instead. That didn't turn out to be very useful in this case, since BaseNode ended up being an abstract class with no abstract members (something that I'll have to think on later), but the concept seemed, for a while, to be sound enough that I had code that would do just that:
def testnextElementSibling(self):
    """Unit-tests the nextElementSibling property of a BaseNode instance."""
    # It makes little sense to test here, since that would require
    # spinning up derived (and potentially broken) classes and there
    # are *actual* classes where the tests can be run, so require tests
    # in those test-case classes here instead.
    testCases = [ testCDATA, testComment, testTag, testText ]
    testName = 'testnextElementSibling'
    missingCases = [
        c.__name__ for c in testCases if not hasattr( c, testName )
    ]
    self.assertEqual( missingCases, [],
        'testBaseNode requires "%s" test-methods in %s' %
        ( testName, ', '.join( missingCases ) )
    )
Ultimately, since Tag (and CDATA, Comment and Text) derive from BaseNode, it was possible to build useful test-methods for all of the incomplete BaseNode test-methods using instances of those classes, so this approach wasn't actually needed. Probably just as well: though I could show that it worked to my satisfaction, it still ended up relying a bit too much on a human making a decision (which items to require tests for, in this case) to maintain some certainty of the code-coverage I'm striving for. That said, I may well come back to that idea, perhaps implementing it as a decorator-method that can be applied to test-case classes, the way AddMethodTesting and AddPropertyTesting work right now.
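If I do come back to it, a decorator-based version of the idea might look something like this. To be clear, RequireTests and the toy test-case classes below are hypothetical sketches, not part of the actual codebase:

```python
import unittest

def RequireTests(testCases, testNames):
    """Hypothetical class-decorator: adds a test-method to the decorated
    test-case class that fails if any of the named test-methods are
    missing from any of the supplied (derived-class) test-case classes."""
    def decorator(cls):
        def testRequiredMethods(self):
            missing = [
                '%s.%s' % (case.__name__, name)
                for case in testCases
                for name in testNames
                if not hasattr(case, name)
            ]
            self.assertEqual(missing, [],
                'Required test-methods are missing: %s' % ', '.join(missing))
        cls.testRequiredMethods = testRequiredMethods
        return cls
    return decorator

# Toy derived-class test-case that satisfies the requirement
class testCDATA(unittest.TestCase):
    def testnextElementSibling(self):
        pass

@RequireTests([testCDATA], ['testnextElementSibling'])
class testBaseNode(unittest.TestCase):
    pass

# The injected testRequiredMethods passes, since testCDATA defines
# the required test-method.
suite = unittest.TestLoader().loadTestsFromTestCase(testBaseNode)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The decorator keeps the "which tests are required where" decision in one declarative place, which would at least make the human judgment involved more visible.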
A Missing Property Example: ownerDocument
One of the missing members I discovered as a result of the big list of members of Tag was the ownerDocument property. In all honesty, I simply missed implementing it, or even requiring it, so the unit-testing approach mentioned above didn't catch it; it was purely human observation and effort that revealed it. Since it was also a common property for all of the BaseNode-derived classes, and something that should be common to all node-types, even if they aren't derived from BaseNode, I required it as an abstract property in IsNode:
#-----------------------------------#
# Abstract Properties #
#-----------------------------------#
nextElementSibling = abc.abstractproperty()
nextSibling = abc.abstractproperty()
nodeName = abc.abstractproperty()
nodeType = abc.abstractproperty()
ownerDocument = abc.abstractproperty()
parentElement = abc.abstractproperty()
parentNode = abc.abstractproperty()
previousElementSibling = abc.abstractproperty()
previousSibling = abc.abstractproperty()
textContent = abc.abstractproperty()
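For anyone unfamiliar with how those declarations get enforced: any class deriving from the abstract base that fails to override one of them simply cannot be instantiated. A minimal sketch with toy classes follows; note that abc.abstractproperty is deprecated in current Python in favor of stacking @property over @abc.abstractmethod, which behaves the same way:

```python
import abc

class IsNode(metaclass=abc.ABCMeta):
    # Modern equivalent of: ownerDocument = abc.abstractproperty()
    @property
    @abc.abstractmethod
    def ownerDocument(self):
        pass

class BrokenNode(IsNode):
    """Does NOT override ownerDocument, so it can't be instantiated."""
    pass

class GoodNode(IsNode):
    """Overrides the abstract property, so instantiation is allowed."""
    @property
    def ownerDocument(self):
        return self

try:
    BrokenNode()
    raised = False
except TypeError:
    # "Can't instantiate abstract class BrokenNode..."
    raised = True
assert raised
node = GoodNode()  # fine: the abstract requirement is satisfied
```

That TypeError at instantiation time is exactly what surfaces in the unit-test failures shown below.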
Once that was in place, though, the unit-testing policies picked up that it was missing immediately, raising failures resembling this one, from testCDATA:
########################################
ERRORS
#--------------------------------------#
Traceback (most recent call last):
  File "test_markup.py", line 361, in test__init__
    testObject = CDATA( testValue )
TypeError: Can't instantiate abstract class CDATA with abstract methods ownerDocument
From there, implementing it in BaseNode was simple:
@describe.AttachDocumentation()
def _GetownerDocument( self ):
"""
Returns the top-most IsElement object in the instance's DOM tree"""
if not self._parent:
return self
else:
return self._parent.ownerDocument
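The recursion involved is easy to see with a stripped-down stand-in for BaseNode; everything here apart from the ownerDocument logic itself is a hypothetical sketch:

```python
class Node:
    """Minimal stand-in for BaseNode, just enough to show the
    ownerDocument walk up the _parent chain."""

    def __init__(self, parent=None):
        self._parent = parent

    @property
    def ownerDocument(self):
        # A node with no parent is the top of its own tree; otherwise
        # defer to the parent, recursing until the top is reached.
        if not self._parent:
            return self
        return self._parent.ownerDocument

root = Node()
child = Node(parent=root)
grandchild = Node(parent=child)

assert grandchild.ownerDocument is root
assert root.ownerDocument is root
```

Note the consequence spelled out in the next paragraph: an unattached node is its own "document," which is where this implementation parts ways with the JavaScript behavior.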
This deviates from the JavaScript ownerDocument, though, and I'm not sure if I'll keep it as is, or alter it later when I get to implementing BaseDocument: in JavaScript, as far as I've been able to determine, there is no way to create an element that is not a member of a document, even if that element hasn't been attached to the DOM of the document. All parsed tags are automatically document-members, and the createElement method is only available to the document. Taken together, these effectively prevent an element from not being a member of the document it was created in. In the idic framework (so far), it's possible to create Tags even if no document has been defined. I'll have to ponder on that, but for now, I'll let it stand.
Once again, the unit-testing policies picked up that there was still something missing:
#--------------------------------------#
Traceback (most recent call last):
  File "unit_testing.py", line 332, in testMethodCoverage
    target.__name__, missingMethods
AssertionError: Unit-testing policy requires test-methods to be created for all public and protected methods, but testBaseNode is missing the following test-methods: ['test_GetownerDocument']
#--------------------------------------#
Traceback (most recent call last):
  File "unit_testing.py", line 373, in testPropertyCoverage
    'methods: %s' % ( target.__name__, missingMethods )
AssertionError: Unit-testing policy requires test-methods to be created for all public properties, but testBaseNode is missing the following test-methods: ['testownerDocument']
#--------------------------------------#

As annoying as this might seem, it really was a good thing: the unit-testing processes and policies set up back at the beginning of last month were catching that changes had been made, that unit-testing for those changes wasn't complete, and that work needed to be done because of those changes.
That feels like a validation of my unit-testing policies to me.
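For context, the general shape of a coverage-check like testMethodCoverage or testPropertyCoverage can be sketched as follows. The toy Namespace and testNamespace classes here are hypothetical stand-ins, and the real unit_testing module is considerably more involved:

```python
import inspect

class Namespace:
    """Toy target class: one public property, one public method."""
    @property
    def name(self):
        return getattr(self, '_name', None)

    def RegisterNamespace(self):
        pass

class testNamespace:
    """Toy test-case: covers the property but not the method."""
    def testname(self):
        pass

def missing_test_methods(target, test_case):
    """Return the test-method names the policy requires for the target
    class's public members but that the test-case doesn't define."""
    required = []
    for name, member in inspect.getmembers(target):
        if name.startswith('__'):
            continue  # skip dunder members
        if isinstance(member, property) or inspect.isfunction(member):
            required.append('test%s' % name)
    return [name for name in required if not hasattr(test_case, name)]

print(missing_test_methods(Namespace, testNamespace))
# → ['testRegisterNamespace']
```

Wrapped in an assertEqual against an empty list, a check like this is what produces the "is missing the following test-methods" failures shown above.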
Two other properties came to my attention while organizing the big Tag members-table: while I'd set up tests for the nodeName and nodeType properties in the testHasTextData test-case class, there were no corresponding tests in testCDATA, testComment or testText. Again, fixing that didn't take much effort. The example structure for the test-methods for all of the concrete classes looked almost the same as the test-methods for CDATA:
def testnodeName(self):
"""Unit-tests the nodeName property of a CDATA instance."""
testObject = CDATA( 'test-instance' )
self.assertEquals( testObject.nodeName, CDATA._nodeName,
'CDATA does not have a defined _nodeName attribute, or is '
'inheriting the default None value from HasTextData.'
)
def testnodeType(self):
"""Unit-tests the nodeType property of a CDATA instance."""
testObject = CDATA( 'test-instance' )
self.assertEquals( testObject.nodeType, nodeTypes.CDATASection,
'CDATA does not have a defined _nodeType attribute, or is '
'inheriting the default None value from HasTextData.'
)
Once the underlying class-attributes had been defined for all three concrete classes, and a few other minor things I noticed buried in the 20+ failures from the Namespace and BaseNode tests were cleaned up, things were back to a reasonable/expected number of failures and errors.
The moral of this story? Perhaps the idea of embedding the node-types and -names as class properties, then retrieving them with common getter-methods in HasTextData was... too clever, maybe? At the time it felt fairly elegant: store the actual values in the classes themselves, keeping them nicely encapsulated, etc., etc. But, when push came to shove, the combination of that storage-approach and the coverage-testing routines left a hole that a bug slipped through. It was pure, dumb luck that I happened to notice it when I did.
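The pattern in question, and the hole it leaves, look roughly like this; this is a sketch, with hypothetical values, rather than the real HasTextData:

```python
class HasTextData:
    """Sketch of the pattern: node-type/-name stored as class
    attributes, read through getters shared by all derived classes."""
    _nodeName = None   # derived classes are EXPECTED to override these...
    _nodeType = None   # ...but nothing actually enforces that they do

    @property
    def nodeName(self):
        return self._nodeName

    @property
    def nodeType(self):
        return self._nodeType

class CDATA(HasTextData):
    _nodeName = '#cdata-section'
    _nodeType = 4  # stand-in for nodeTypes.CDATASection

class Comment(HasTextData):
    # Oops: the class attributes were never overridden, and nothing
    # complains -- nodeName/nodeType silently return None.
    pass

assert CDATA().nodeName == '#cdata-section'
assert Comment().nodeName is None  # the hole the bug slipped through
```

Because the properties are defined (and coverage-tested) once on the base class, the coverage routines see nothing missing on the derived classes, even when a derived class forgets to supply its own values.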
Getting the Remaining Tests Running
Eventually, after I got done with Tag's implementation and had reconciled all of the expected missing tests, I got to a point where the test-run yielded only a handful of failures:

########################################
Unit-test results
########################################
Tests were successful ..... False
Number of tests run ....... 226
  + Tests ran in ........... 0.14 seconds
Number of errors .......... 0
Number of failures ........ 8
Number of tests skipped ... 77
########################################

Those failures covered a variety of items, including:
#--------------------------------------#
testCodeCoverage (__main__.testmarkupCodeCoverage)
AssertionError: Unit-testing policies require test-cases for all classes and functions in the idic.markup module, but the following have not been defined: (testAttributesDict)
#--------------------------------------#
testMethodCoverage (__main__.testNamespace)
AssertionError: Unit-testing policy requires test-methods to be created for all public and protected methods, but testNamespace is missing the following test-methods: ['testGetNamespaceByName', 'testGetNamespaceByURI', 'testRegisterNamespace', 'test_DelDefaultRenderingModel', 'test_DelName', 'test_DelTagRenderingModels', 'test_DelnamespaceURI', 'test_GetDefaultRenderingModel', 'test_GetName', 'test_GetTagRenderingModels', 'test_GetnamespaceURI', 'test_SetDefaultRenderingModel', 'test_SetName', 'test_SetTagRenderingModels', 'test_SetnamespaceURI']
#--------------------------------------#
testPropertyCoverage (__main__.testNamespace)
AssertionError: Unit-testing policy requires test-methods to be created for all public properties, but testNamespace is missing the following test-methods: ['testDefaultRenderingModel', 'testName', 'testTagRenderingModels', 'testnamespaceURI']
#--------------------------------------#

I handled all of these failures with the normal unit-test-definition process.
Since I was already in a unit-testing frame of mind, I went ahead and dealt with all of the remaining outstanding test-failures that weren't part of Tag's test-case as well. That leaves me with a clean slate, more or less, for the next post, with the following results:
########################################
Unit-test Results: idic.markup
#--------------------------------------#
Tests were SUCCESSFUL
Number of tests run ....... 253
Number of tests skipped ... 95
Tests ran in .......... 0.140 seconds
#--------------------------------------#
########################################
Unit-test Results: idic
#--------------------------------------#
Tests were SUCCESSFUL
Number of tests run ....... 276
Number of tests skipped ... 95
Tests ran in .......... 0.318 seconds
#--------------------------------------#
The unit_testing module had some minor modifications, not much more than some restructuring of the test-results reporting, really, with the addition of the code that counts and displays details on skipped tests. The current test-results for test_markup are long enough that I didn't want to just dump them into the post, but I still want to share them, so they're downloadable as well.