Kiyavitskaya, Nadzeya and Zeni, Nicola and Mich, Luisa and Mylopoulos, John (2004) NLP-Based Requirements Modeling: Experiments on the Quality of the models. UNSPECIFIED. (Submitted)
Conceptual models are used in a variety of areas within Computer Science, including Software Engineering, Databases, and AI. A major bottleneck in broadening their applicability is the time it takes to build a conceptual model for a new application. Not surprisingly, a variety of tools and techniques have been proposed for reusing conceptual models, e.g. ontologies, or for building them semi-automatically from natural language (NL) descriptions. What has been left largely unexplored is the impact of such tools on the quality of the models being created. This paper presents the results of three experiments designed to assess the extent to which a Natural-Language Processing (NLP) tool improves the quality of conceptual models, specifically object-oriented ones. Our main experimental hypothesis is that the quality of a domain class model is higher if its development is supported by an NLP system. The tool used for the experiments – named NL-OOPS – extracts classes and associations from a knowledge base built through a deep semantic analysis of a sample text. Specifically, NL-OOPS produces class models at different levels of detail by exploiting class hierarchies in the knowledge base of an NLP system, and it marks ambiguities in the text. In our experiments, we had groups working with and without the tool, and then compared and evaluated the final class models they produced. The results of the experiments – the first on this topic – offer insights into the state of the art of linguistics-based Computer Aided Software Engineering (CASE) tools and allow us to identify important guidelines for improving their performance. In particular, it was possible to highlight which linguistic tasks are most critical for effectively supporting conceptual modelling.
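To illustrate the kind of extraction the abstract describes, the toy sketch below derives candidate classes and associations from a requirements sentence using simple lexical heuristics (determiner-noun patterns for classes, subject-verb-object patterns for associations). This is only an illustrative approximation under stated assumptions; NL-OOPS itself relies on a full deep semantic analysis, not on surface patterns, and the function and regular expressions here are hypothetical.

```python
import re

def extract_candidates(text):
    """Toy heuristic for NL-to-class-model extraction (illustrative only):
    nouns following a determiner become candidate classes, and
    'noun verbs noun' patterns between known candidates become
    candidate associations."""
    # Candidate classes: the word following a determiner.
    classes = {m.group(1).lower()
               for m in re.finditer(r"\b(?:a|an|the|each|every)\s+(\w+)",
                                    text, re.I)}
    # Candidate associations: subject, third-person verb, object.
    assocs = []
    for m in re.finditer(r"\b(\w+)\s+(\w+?s)\s+(?:a|an|the)\s+(\w+)",
                         text, re.I):
        subj, verb, obj = (g.lower() for g in m.groups())
        if subj in classes and obj in classes:
            assocs.append((subj, verb, obj))
    return classes, assocs

classes, assocs = extract_candidates(
    "A customer places an order. The order contains a product.")
# classes → {'customer', 'order', 'product'}
# assocs  → [('customer', 'places', 'order'), ('order', 'contains', 'product')]
```

A heuristic like this quickly surfaces the gap the paper studies: surface patterns miss synonyms, anaphora, and ambiguity, which is precisely why a deep semantic analysis, and human evaluation of the resulting models, matters.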