Improving the Quality of Conceptual Models with NLP Tools: An Experiment

Mich, Luisa and Mylopoulos, John and Zeni, Nicola (2002) Improving the Quality of Conceptual Models with NLP Tools: An Experiment. UNSPECIFIED. (Unpublished)

    Abstract

    Conceptual models are used in a variety of areas within Computer Science, including Software Engineering, Databases and AI. A major bottleneck in broadening their applicability is the time it takes to build a conceptual model for a new application. Not surprisingly, a variety of tools and techniques have been proposed for reusing conceptual models (e.g., ontologies), or for building them semi-automatically from natural language descriptions. What has been left largely unexplored is the impact of such tools on the quality of the models that are being created. This paper presents the results of an experiment designed to assess the extent to which a Natural Language Processing (NLP) tool improves the quality of conceptual models, specifically object-oriented ones. Our main experimental hypothesis is that the quality of a domain class model is higher if its development is supported by an NLP system. The tool used for the experiment, named NL-OOPS, extracts classes and associations from a knowledge base realized by a deep semantic analysis of a sample text. Specifically, NL-OOPS produces class models at different levels of detail by exploiting class hierarchies in the knowledge base of an NLP system, and it marks ambiguities in the text. In our experiments, groups worked with and without the tool, and we then compared and evaluated the final class models they produced.
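
    To give a flavour of the task the tool supports, the sketch below shows a deliberately naive way of proposing candidate class names from a requirements text. This is not the NL-OOPS algorithm, which relies on deep semantic analysis and a knowledge base with class hierarchies; the heuristic, the function name, and the sample text here are illustrative assumptions only.

        import re
        from collections import Counter

        # Illustrative heuristic only: NL-OOPS uses deep semantic analysis,
        # not word-frequency counting. All names below are hypothetical.
        STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "is", "are",
                     "in", "for", "on", "with", "that", "this", "it", "by",
                     "each", "every", "one", "more"}

        def candidate_classes(text: str, min_count: int = 2) -> list[str]:
            """Return recurring content words as candidate class names."""
            words = re.findall(r"[A-Za-z]+", text.lower())
            counts = Counter(w for w in words
                             if w not in STOPWORDS and len(w) > 2)
            return [w.capitalize()
                    for w, c in counts.most_common() if c >= min_count]

        if __name__ == "__main__":
            sample = ("A customer places an order. Each order contains "
                      "items, and every item belongs to a product catalogue. "
                      "The customer receives an invoice for the order.")
            print(candidate_classes(sample))
            # e.g. ['Order', 'Customer'] -- candidates a modeller would still
            # have to review, merge, or discard when building the class model

    A human modeller (or a richer NLP pipeline) would then turn such candidates into classes and associations; the experiment reported here evaluates how much tool support of this general kind affects the quality of the resulting models.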

    Item Type: Departmental Technical Report
    Department or Research center: Information Engineering and Computer Science
    Subjects: Q Science > QA Mathematics > QA075 Electronic computers. Computer science
    Report Number: DIT-02-047
    Repository staff approval on: 21 Jan 2003
