Abstract

A growing body of literature suggests that there is an optimal size for software components. This means that components that are too small or too big will have a higher defect content (i.e., there is a U-shaped curve relating defect content to size). The U-shaped curve has become known as the "Goldilocks Conjecture." Recently, a cognitive theory has been proposed to explain this phenomenon, and it has been expanded to characterize object-oriented software. This conjecture has wide implications for software engineering practice. It suggests (1) that designers should deliberately strive to design classes that are of the optimal size, (2) that program decomposition is harmful, and (3) that there exists a maximum (threshold) class size that should not be exceeded to ensure fewer faults in the software. The purpose of the current paper is to evaluate this conjecture for object-oriented systems. We first demonstrate that the claims of an optimal component/class size (1 above) and of smaller components/classes having a greater defect content (2 above) are due to a mathematical artifact in the analyses performed previously. We then empirically test the threshold effect claims of this conjecture (3 above). To our knowledge, an empirical test of size threshold effects for object-oriented systems has not been performed thus far. We performed an initial study with an industrial C++ system and replicated it twice, on another C++ system and on a commercial Java application. Our results provide unambiguous evidence that there is no threshold effect of class size. We obtained the same result for all three systems using four different size measures. These findings suggest that there is a simple continuous relationship between class size and faults, and that the "optimal size," "smaller classes are better," and "threshold effect" conjectures have no sound theoretical or empirical basis.
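For illustration only (this sketch is not part of the original abstract), the competing hypotheses can be written as functional forms relating an expected fault count $f$ to class size $s$; the threshold $\tau$ and coefficients $\beta_0, \beta_1$ are assumed notation, not values from the paper:

$$
f_{\text{threshold}}(s) =
\begin{cases}
\beta_0, & s \le \tau \\[2pt]
\beta_0 + \beta_1 (s - \tau), & s > \tau
\end{cases}
\qquad \text{vs.} \qquad
f_{\text{continuous}}(s) = \beta_0 + \beta_1 s
$$

Under the threshold conjecture, faults would remain flat up to some size $\tau$ and rise only beyond it; the results summarized above favor the simple continuous form.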