C. Fu, R. Martin et al.
Rutgers Univ.
Summary by AF
One-line summary:
Use compiler analysis on Java bytecode to determine (a) what faults
to inject; (b) what % of eligible catch handlers are actually exercised as
a result.
Overview/Main Points
- Use compiler-like analysis to determine where to insert faults,
and what kind, in Java Internet apps.
Main approach differences from AFPI:
The most marked difference is that they rely on static analysis for two
things:
- determining where to inject faults. For a given JNI routine
or Java method that raises some exception but doesn't handle it,
they go up the (static) call tree to find the nearest exception
handler for it (ie the one that actually would be used if the
exception happened at runtime). Compare to our approach: we
inject the exception, then examine the stack trace to see which
handler actually was used.
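Their static lookup can be sketched as a walk up the call tree from the raising method to the first enclosing method that contains a catch handler. The data structures and names below are illustrative, not the paper's; a real implementation would work over the static call graph extracted from bytecode.

```java
import java.util.*;

class HandlerLookup {
    // Walk up a static call tree (represented as callee -> caller) from the
    // method that raises the exception until we reach a method whose body
    // contains a catch handler for it. Returns null if the exception would
    // escape to the top level. Names are hypothetical.
    static String nearestHandler(String raiser,
                                 Map<String, String> callerOf,
                                 Set<String> methodsWithHandler) {
        for (String m = raiser; m != null; m = callerOf.get(m)) {
            if (methodsWithHandler.contains(m)) return m;  // first enclosing catch wins
        }
        return null;
    }
}
```

Note this is the inverse of our dynamic approach: we inject first and read the handler off the stack trace, while they compute the handler statically before injecting.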
- Determining which data objects may be used as arguments to an
operation that could cause an exception. For example, in a
read() call, it matters whether the argument of the read is a
FileInputStream or a NetworkInputStream (eg) because that determines
which kinds of low-level faults are reasonable to inject. To
figure this out, they use a form of static analysis called points-to
analysis, which approximates this set of objects.
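The role points-to analysis plays can be illustrated with a toy mapping from the (statically approximated) concrete stream type to a class of injectable faults. The method and fault labels here are mine, not the paper's; the point is only that the fault menu depends on what the read() argument may point to.

```java
import java.io.*;

class FaultChooser {
    // Given the concrete stream type that points-to analysis says a read()
    // argument may refer to, pick the class of low-level faults that are
    // reasonable to inject. Labels are illustrative placeholders.
    static String faultsFor(Class<? extends InputStream> streamType) {
        if (FileInputStream.class.isAssignableFrom(streamType))
            return "disk faults";     // e.g. I/O error while reading the file
        return "network faults";      // e.g. connection reset, timeout
    }
}
```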
The kicker is that their prototype is still under construction (the
preliminary results in the paper are based on hand-simulating their
technique, ie manually determining which faults to inject & where
and then telling Mendosus to do it). They admit that they are
investigating various possible "approximate" representations
of the call tree (from the literature) since the full call tree can be
exponential in the size of the program.
Other points in the paper
- They point out that the term coverage already means
something different in dependability than it does in formal
verification. In dependability, coverage is defined as:
"Given that a particular fault occurs, does the system handle
it correctly?" (ie, "100% coverage" == we tried
injecting every possible kind of fault and confirmed that the system
handles them properly). In testing/verification, it refers
to what fraction of possible code paths have been executed.
- Their proposed metric is fault-catch coverage: with respect to a
particular test run and a particular catch() block, of
the possible faults that could trigger this catch() block,
what fraction of them did we actually inject?
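A minimal sketch of that metric, with assumed names (the paper does not give code), is just a set-intersection ratio per catch() block:

```java
import java.util.*;

class FaultCatchCoverage {
    // Fault-catch coverage for one catch() block in one test run:
    // of the faults that could trigger this block, what fraction
    // did we actually inject?
    static double coverage(Set<String> possibleFaults, Set<String> injectedFaults) {
        if (possibleFaults.isEmpty()) return 1.0;  // nothing to cover
        long hit = possibleFaults.stream()
                                 .filter(injectedFaults::contains)
                                 .count();
        return (double) hit / possibleFaults.size();
    }
}
```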
- There is a reference for the statement that rarely-executed code
exhibits a higher failure rate than frequently-executed code.
I added it to all.bib as hecht:rare. [Hecht &
Crane, Rare Conditions and their effect on SW failures, Proc. Annual
Reliab. and Maintainability Symp., Anaheim CA, 1/94]
Comments from our discussion