Posters
The poster exhibition covers a broad range of topics in object-oriented technology, theory, and applications. It is an excellent setting for authors to present new ideas and ongoing work. The presentation format fosters direct interaction between the authors and the whole group of attendees, complementing the specialized working groups of the ECOOP workshops.
Nr   Authors                                        Title
P1   C. Steindl                                     Program Slicing
P2   K. Böllert                                     AOP/ST: An Aspect-Oriented Programming Environment for Smalltalk
P3   S. Drossopoulou, T. Valkevych, S. Eisenbach    Java Type Soundness Extended to Exceptions
P4   M. Collins-Cope, A. O'Callaghan                A Process for Object and Component Based Development of Information Systems
P5   M. Collins-Cope, A. O'Callaghan                The Requirements/Service/Interface Approach to Use Case Analysis
P6   M. Senger                                      A General Purpose CORBA-Java based Application Wrapper
P7   M. Beurton-Aimar, B. Le Blanc                  Using Patterns into the Conception of a Medical KBS: LADRI, a software for Diagnosis in Rheumatology
P8   I. Oliver, S. Kent                             Validation of Object-Oriented Models using Animation
P9   A. Abelló, M. Oliva, M. Rodríguez, F. Saltor   The BLOOM Model Revisited: An Evolution Proposal
P10  C. Damm, K. Hansen, M. Thomsen, M. Tyrsted     The Knight Project
P11  Sven-Eric Lautemann                            The COAST object database system
P1: Program Slicing
Author
Abstract
We have implemented a program slicing tool for static forward slicing of object-oriented programs written in the programming language Oberon-2. Program slicing uses control flow and data flow information to visualise dependences and to assist the programmer in debugging and program understanding. We did not restrict the language in any way, which means that we had to cope with structured types (records and arrays), global variables of any type, objects on the heap, side effects of function calls, nested procedures, recursion, dynamic binding due to type-bound procedures (methods) and procedure variables (function pointers), and modules. Two distinctive features of our Slicer are the support for slicing large systems (intermodular slicing) and the use of user feedback during the computation of data flow information (combining static and dynamic information, yielding semi-dynamic slices). We extended the notion of interprocedural slicing to intermodular slicing: information that has been computed once is reused when slicing other modules that import previously sliced modules. Furthermore, we support object-oriented features such as inheritance, type extension,
polymorphism, and dynamic binding. The user interface of our Slicer uses active text elements for the following purposes:
- Bi-directional hypertext links between call sites and all procedures that may be called.
- Reaching definitions: all reaching definitions are collected in a popup menu element. By selecting a definition from the popup element, this definition is shown in the code.
- Polymorphic choice nodes: the user can enable and disable links to procedures at polymorphic call sites.
- Parameter summary information: in the formal parameter lists of procedures, parameter summary information is available via parameter info elements. The usage of the parameters is indicated by an arrow (up arrow for definition without usage (OUT parameter), down arrow for usage without definition (IN parameter), up/down arrow for usage and definition (VAR parameter), down arrow with exclamation mark to indicate an anomaly for value parameters such as "parameter is never used" or "assignment to value parameter", up/down arrow with exclamation mark to indicate an anomaly for reference parameters such as "possibly not set"). By clicking on these arrows of output parameters, the procedure is sliced intraprocedurally for this parameter.
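To make the underlying notion concrete (a minimal illustration of forward slicing in plain Java, not the Oberon-2 tool itself), the comments below mark which statements belong to the static forward slice for the definition of a:

    // Illustration only: a static forward slice with respect to the definition of a
    // contains every statement whose execution or value may be affected by that definition.
    public class SliceExample {
        public static void main(String[] args) {
            int a = Integer.parseInt(args[0]); // slicing criterion: definition of a
            int b = 2 * a;                     // in the slice: data dependence on a
            int c = 7;                         // not in the slice: independent of a
            if (b > 10) {                      // in the slice: control dependence via b
                c = b + 1;                     // in the slice
            }
            System.out.println(c);             // in the slice: c may be redefined under a's influence
        }
    }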
URL
http://www.ssw.uni-linz.ac.at/Staff/CS/Slicing.html
P2: AOP/ST: An Aspect-Oriented Programming Environment for Smalltalk
Author
Abstract
The implementation of technical requirements such as persistence, failure handling, communication, or process synchronization is often spread throughout a whole software system instead of being well localized in a few modules. To capture technical concerns more appropriately, aspect-oriented programming (AOP) introduces a new modularization construct called aspect. Aspects are written in special aspect description languages and are merged automatically with the rest of a program by a tool named Aspect Weaver. The benefits of using AOP are easier initial development as well as better maintainability and reusability of software.
AOP/ST provides aspect-oriented programming extensions for the Smalltalk development environment VisualWorks. So far, two aspect languages have been implemented. They allow developers to express:
- synchronization of processes that concurrently access shared objects.
- tracing of a program's flow of execution.
Developing aspects is eased by the included tools, which are closely integrated into VisualWorks. Moreover, the Aspect Weaver is open for the addition of further aspects. AOP/ST also comes with reusable components that facilitate the implementation of new aspect languages.
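As a purely illustrative sketch of what the tracing aspect achieves (AOP/ST's own aspect languages are Smalltalk-based and weave such code automatically; the Java class and method names below are invented), the tracing concern would otherwise have to be hand-woven around each business method:

    // Hand-woven illustration of the tracing concern (hypothetical names; in AOP/ST the
    // Aspect Weaver inserts this kind of code automatically from an aspect description).
    public class Account {
        private int balance;

        public void deposit(int amount) {
            System.out.println("enter Account.deposit(" + amount + ")"); // woven tracing advice
            balance += amount;                                           // original business logic
            System.out.println("exit  Account.deposit");                 // woven tracing advice
        }

        public static void main(String[] args) {
            new Account().deposit(100);
        }
    }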
URL
http://www.germany.net/teilnehmer/101,199268/
P3: Java Type Soundness Extended to Exceptions
Authors
Abstract
This paper presents a further stage in our ongoing research into the semantics of the Java core language and the soundness of its type system. We develop a proof of type soundness for a substantial subset of Java. To our knowledge, this is the first work that studies types for exception handling in the Java setting. Furthermore, our current approach is simpler than previous work, is nearer to the Java syntax, and allows modelling separate compilation.
We define Java_s, a provably safe subset of Java containing primitive types, classes and inheritance, instance variables and instance methods, interfaces, shadowing of instance variables, dynamic method binding, object creation, the null value, arrays, exceptions and exception handling. Execution of Java programs requires some type information at run time, e.g. method descriptors. For this reason, we define Java_se, an enriched version of Java_s containing the compile-time type information necessary for the execution of method calls and field accesses. Execution of run-time terms may produce terms which are not described by Java_se. We therefore extend Java_se, obtaining Java_r, which describes run-time terms.
We distinguish between normal execution, where any exception thrown is also handled, and abnormal execution, where an exception is thrown and not handled. We also distinguish normal types, which describe the possible outcomes of normal execution, and abnormal types which describe the possible outcomes of abnormal execution. A normal type is a class, an interface, or a primitive type. An abnormal type is a set of subclasses of the predefined EXCEPTION class.
Of course, whether a term will execute normally or abnormally is not predictable. What is predictable is that if the term executes normally, then types will be preserved up to subclasses/subinterfaces, whereas if it executes abnormally it will throw an exception object whose class is a subclass of one of the classes in its abnormal type.
We distinguish between the types for Java_s, Java_se and Java_r, and describe normal and abnormal types separately, thus obtaining six inference systems. We combine a normal type and an abnormal type to form the type of a term, and we extend the widening relationship to these "combined" types. Thus, we obtain an elegant, concise formulation of the subject reduction theorem with the meaning described in the previous paragraph.
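The distinction can be illustrated with plain Java (our example, not part of the formal system): the normal type of the method body below is int, while its abnormal type is the set containing ArithmeticException.

    // Illustration (ours, not the paper's calculus): f's normal outcome is an int;
    // its abnormal outcome is an unhandled ArithmeticException.
    public class NormalAbnormal {
        static int f(int x) {
            return 10 / x;                         // may complete normally or throw
        }

        public static void main(String[] args) {
            try {
                System.out.println(f(0));          // abnormal execution: exception thrown here
            } catch (ArithmeticException e) {      // ...but handled, so the whole term executes normally
                System.out.println("division by zero");
            }
        }
    }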
URL
http://outoften.doc.ic.ac.uk/projects/slurp
P4: A Process for Object and Component Based Development of Information Systems
Authors
Abstract
This poster display will outline a tailorable process for the development of
Information Systems using UML. The poster(s) will bring out:
- The structuring of the process - phases, activities, tasks and deliverables.
- Cross checks and refinement rules between deliverables.
- How the process separates the problem domain (business requirements) and technical domain aspects (infrastructure and architecture) of modelling the required system, to maximise parallel activity.
- How the process facilitates the identification of re-usable components, both in the problem domain (re-usable business components) and within the technical domain (re-usable architectural components).
- How the process facilitates buy, build or build for re-use decisions.
- A fairly prescriptive approach to deliverables.
- How the techniques of UML (class models, sequence diagrams, package diagrams, etc.) should be used at different points in the process.
- Role division within the process.
- Use of specification-type object models within the process model, these being formally cross-referenced by use case and object method definitions.
P5: The Requirements/Service/Interface Approach to Use Case Analysis
Authors
Abstract
This poster display will outline (in pattern format) the RSI approach to use case analysis. The RSI approach to use case analysis adds a degree of extra formalism to the use case analysis process, with a view to answering questions such as:
- How do report layouts, user interface designs, external system file interfaces, etc. fit in with the use case approach to defining system requirements?
- How can one link the definition of UI designs, reports and external file interfaces to an approach to use case analysis that formally cross-references specification object models?
- How can use case analysis be structured to encourage 'component'-type re-use of systems?
- How can 'formal' use case analysis be linked to business process definition?
P6: A General Purpose CORBA-Java based Application Wrapper
Author
Abstract
AppLab is an automatically generated wrapper for command-line driven applications. It provides a uniform graphical user interface for the applications by using CORBA (for communication) and Java (for the GUI). The main goal of AppLab is to develop a distributed object system which provides easy-to-use and well-defined access to a large set of existing command-line driven applications of different types. This is particularly true in bioinformatics, but the system is not limited to it. AppLab defines the IDL interfaces for invoking and controlling the remote applications and for browsing their results. The IDL is general and the same for all applications, and it will reflect the results of several OMG RFPs after they are adopted.
The design (and the implementation) uses a meta-data approach to describe application parameters and other properties. This allows automatic extensibility and flexibility, because new applications are added to the system by Java code generators. The meta-data are expressed both in XML and in a semi-proprietary format used in the bioinformatics domain (www.gcg.com).
The code generated by AppLab behaves like Java beans and, as such, can easily be incorporated into other Java programs, which can use AppLab to invoke various applications to process their own data.
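As a hedged sketch only (the wrapper class and its methods below are invented for illustration and are not AppLab's actual generated API), using such a bean-like wrapper from another Java program might look as follows; the stub stands in for code that would really talk to the remote application through CORBA.

    // Hypothetical sketch: BlastWrapper and its methods are invented; a real AppLab-generated
    // wrapper would forward these calls to the remote command-line application via CORBA.
    class BlastWrapper {
        private String input;
        private String result;
        public void setInput(String path) { this.input = path; }  // bean property = application parameter
        public void run() { this.result = "ran on " + input; }    // stub for the remote invocation
        public String getResult() { return result; }              // browse the result
    }

    public class AppLabClientSketch {
        public static void main(String[] args) {
            BlastWrapper blast = new BlastWrapper();
            blast.setInput("sequence.fasta");
            blast.run();
            System.out.println(blast.getResult());
        }
    }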
AppLab is free software, continuously being improved. Further information is available from http://industry.ebi.ac.uk/applab.
URL
http://industry.ebi.ac.uk/applab
P7: Using Patterns into the Conception of a Medical KBS: LADRI, a software for Diagnosis in Rheumatology
Authors
Abstract
Our purpose is to present some problems linked to the design of medical diagnostic systems. Medical Knowledge Based Systems must be able to store information, to search for a specific disease, to consult validated clinical tables, and to compare the results for several diseases. This suggests the production of strongly inter-connected processing modules sharing a common base of information. The design of an abstract model of the knowledge base makes this sharing possible and allows diagnostic processes to be specified regardless of the data implementation. Object-oriented methods of analysis are powerful tools for this task. Nevertheless, significant elements remain hidden during the first stage of analysis. Software design requires the definition of objects which do not exist as full entities in a human cognitive system, such as algorithm models linked to data structures, or descriptions of generic behaviours. Patterns address these expectations, giving a new view of the modelled domain and providing specific solutions to design problems. LADRI (Logiciel d'Aide au Diagnostic en Rhumatologie Inflammatoire) is a software system for diagnosis in Rheumatology, developed at the Bordeaux Medical University. We describe three patterns present at different levels in the object model. The patterns Composite and State are used for the management of medical signs. The pattern Iterator defines generic classes of processing.
Composite is introduced to structure both the patient and the disease descriptions. The application must take into account diseases described from simple signs as well as from aggregations of signs. Using Composite allows an aggregation of objects to be treated like a primitive object. It makes it possible to handle all signs in a uniform way and offers a single means of access to these objects whatever their internal structure.
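A minimal Java sketch of this use of Composite (the class names are ours, chosen for illustration; they are not taken from LADRI):

    // Illustrative sketch (invented names): simple signs and aggregations of signs are
    // accessed uniformly through the Sign interface, as the Composite pattern prescribes.
    import java.util.ArrayList;
    import java.util.List;

    interface Sign {
        boolean isPresent();                    // uniform access for simple and composite signs
    }

    class SimpleSign implements Sign {
        private final boolean present;
        SimpleSign(boolean present) { this.present = present; }
        public boolean isPresent() { return present; }
    }

    class SignAggregate implements Sign {
        private final List<Sign> parts = new ArrayList<>();
        void add(Sign s) { parts.add(s); }
        public boolean isPresent() {            // here: an aggregation is present if all its parts are
            return parts.stream().allMatch(Sign::isPresent);
        }
    }

    public class CompositeSketch {
        public static void main(String[] args) {
            SignAggregate pattern = new SignAggregate();
            pattern.add(new SimpleSign(true));
            pattern.add(new SimpleSign(true));
            System.out.println(pattern.isPresent());   // treated exactly like a simple sign
        }
    }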
In a disease description, a value is assigned to each sign; it represents the weight of this sign for the diagnosis. Usually, however, this value also implicitly encodes the presence or absence of the sign (with a negative value for absence). State represents the different possible states of a sign explicitly; it is an alternative to encoding the state inside the value.
The pattern Iterator concerns a generic task performed by all reasoning methods. The diagnostic task is mainly an enumeration of all the signs observed in the patient, in order to compare them with the signs belonging to the descriptions of pathologies. This task is involved whatever the structure of these definitions or the operations applied in the reasoning module. The introduction of Iterator gives generic access to the elements independently of the data structures, and allows communication between data modules and processing modules. Patterns complement the object-oriented analysis and give a different point of view of the system, one that is more efficient and more oriented towards the architecture of the software. With patterns, the object model was improved by the addition of abstract objects relevant to the problem. They make it easier to design a flexible and evolvable diagnosis support system, and they further the introduction of software engineering principles into software relevant to Artificial Intelligence.
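A similarly illustrative sketch of the Iterator idea (again with invented names): the reasoning code enumerates observed signs through the standard iterator abstraction, without knowing how the patient record stores them.

    // Illustrative sketch (invented names): the reasoning module iterates over observed
    // signs independently of the underlying container.
    import java.util.Iterator;
    import java.util.List;

    public class IteratorSketch {
        static int countMatches(Iterable<String> observedSigns, List<String> diseaseSigns) {
            int matches = 0;
            Iterator<String> it = observedSigns.iterator();   // generic access to the elements
            while (it.hasNext()) {
                if (diseaseSigns.contains(it.next())) matches++;
            }
            return matches;
        }

        public static void main(String[] args) {
            Iterable<String> patient = List.of("joint pain", "morning stiffness", "fever");
            List<String> disease = List.of("joint pain", "morning stiffness");
            System.out.println(countMatches(patient, disease));   // 2
        }
    }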
P8: Validation of Object-Oriented Models using Animation
Authors
Abstract
The combination of graphical notations for object-oriented modelling and formal methods has resulted in notations suitable for precise modelling. However, current validation techniques for these so-called second-generation OO modelling techniques are either limited or non-existent. Prototyping has been shown to be a valuable validation technique during analysis and design. This technique, however, can suffer from the semantic gap between the model and the prototypical executable code being too great for useful feedback between the code and the model. There exists a technique known as animation in which the model can be executed (in some sense) without the need to write prototypical code. Using animation we can investigate the behavioural properties of a model without resorting to writing prototypical code and, therefore, provide almost instant and direct feedback to the analyst/designer and/or domain expert.
Because animation requires a precisely specified model, current animation tools are based around formal specification languages such as Z, VDM and B. Work on the usefulness of animation has shown it to be valuable in refining the specification and removing potential ambiguities and errors at a much earlier stage. The disadvantage is that the formal methods to which animation has been applied are primarily mathematical in nature and 'non-expert unfriendly'.
The Unified Modelling Language (UML) is a graphical OO modelling notation with a co-existing textual specification language known as the Object Constraint Language (OCL). The OCL - based upon Z - is written using a `non-expert friendly' English-like syntax and integrates completely with the UML.
Using the UML's object diagrams and the OCL's action contract specifications, the analyst/designer can describe the behavioural properties of a model. Animation of these constructs can provide instant, visual feedback on the effects of that specification. In conjunction with relating these constructs to the UML class diagram and associated invariants, this can provide a way of validating and refining models at a much earlier stage in the software process, without resorting to writing prototype code.
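To make the idea of animating a contract concrete, here is a minimal sketch of our own in Java (the poster's framework operates on UML/OCL models directly, not on code): the animator checks the precondition, applies the specified effect to the object state, and then checks the postcondition and invariant.

    // Minimal illustration (ours, not the poster's framework): "animating" a withdraw
    // operation means checking its precondition, applying its specified effect, and
    // checking its postcondition/invariant on the resulting object state.
    public class AnimationSketch {
        static int balance = 100;                      // object state taken from an object diagram

        static void animateWithdraw(int amount) {
            if (!(amount > 0 && amount <= balance))    // precondition from the contract
                throw new IllegalStateException("precondition violated");
            int old = balance;
            balance = balance - amount;                // effect specified by the contract
            if (!(balance == old - amount && balance >= 0))   // postcondition and invariant
                throw new IllegalStateException("postcondition violated");
        }

        public static void main(String[] args) {
            animateWithdraw(30);
            System.out.println(balance);               // 70: instant feedback on the specification
        }
    }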
This poster will present ongoing work to provide a method and framework to facilitate the animation of object oriented models specified using UML/OCL.
URL
http://www.cs.ukc.ac.uk/people/rpg/ijo1/research/
P9: The BLOOM Model Revisited: An Evolution Proposal
Authors
Abstract
The growing need to share information among several autonomous and heterogeneous DataBases (DBs) has become an active research area. A possible solution to satisfy this need for cooperation is to provide integrated access through a Federated DataBase System (FDBS). In order to provide integrated access, it is necessary to overcome semantic heterogeneities and to represent related concepts. This is accomplished through an integration process in which a Canonical Data Model (CDM) plays a central role. Once the desirable characteristics of a suitable CDM had been argued [SCG91], the BLOOM model (BarceLona Object Oriented Model) was progressively defined [CKSG94], [CSG94], [GSC95]. It is an extension of an object-oriented model with a semantically rich set of abstractions. Concretely, it contributes to:
- the generalization/specialization dimension, by offering four different kinds of specialization and the tagging of specializations by means of a criterion.
- the aggregation dimension, by considering existence dependencies among objects; depending on the semantic strength of these dependencies, composed objects appear.
BLOOM was not developed as a whole but underwent extensions in different phases. Its abstractions were conceived for building the FDBS on an as-needed basis. This led to a lack of unity and to differences in the nomenclature.
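To give a flavour of the aggregation abstraction (an illustrative Java sketch with invented names; BLOOM itself is a semantic data model, not a programming language), an existence-dependent part can be modelled so that it is only ever created through its composed whole:

    // Illustrative sketch (invented names): the parts of a composed object are existence
    // dependent on it, i.e. they can only be created through the whole.
    import java.util.ArrayList;
    import java.util.List;

    public class CompositionSketch {
        static class Order {                             // the composed ("whole") object
            private final List<OrderLine> lines = new ArrayList<>();
            OrderLine addLine(String product) {          // parts are only created through the whole
                OrderLine line = new OrderLine(product);
                lines.add(line);
                return line;
            }
            class OrderLine {                            // inner class: tied to an enclosing Order
                final String product;
                private OrderLine(String product) { this.product = product; }
            }
        }

        public static void main(String[] args) {
            Order order = new Order();
            order.addLine("screws");
            System.out.println("order has lines: " + order.lines.size());
        }
    }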
The need to revise the BLOOM model emerged during the design process of the directory of the FDBS. It is essential to have such a storage system because of the amount of information needed in building and operating an FDBS. The directory is the core of our FDBS architecture, and it must contain the different schema levels as well as the mappings among them [ROSC97]. Therefore, the model had to be fixed in order to store those schemas and mappings in a structured manner. This revision consisted of:
- unifying and restructuring some of the main concepts in the model.
- considering the collection as a concrete case of composition.
- generalizing the usage of the different kinds of collection (i.e. partitioning, covering, disjoint, and general) to any kind of existence dependency.
- improving the classification of some concepts in the model to better understand them, splitting specializations by new criteria.
Finally, the BLOOM model itself has been used as a metamodel to describe not only its own generalization/specialization dimension, as was previously done, but its aggregation dimension as well.
Other work in progress related to the BLOOM model includes its implementation on an Object Oriented DataBase Management System (OODBMS) as well as on a Relational DataBase Management System (RDBMS), and the design of a Graphical User Interface (GUI) to support the design and definition of BLOOM database schemas.
URL
Bibliography at http://www.lsi.upc.es/bloom
P10: The Knight Project: Supporting Collaboration in Object-Oriented Analysis and Design
Authors
Abstract
A cardinal point in object-oriented software development is model building (Madsen et al., 1993). During development, relevant concepts in the problem domain are modelled, and the problem domain model is continually created, modified and extended. We do not believe that this problem domain model should be produced by a single developer isolated from the problem domain and the system's end-users. Rather, it should be produced as a collaborative effort involving several OO developers. Also, end-users involved in cooperative design (Greenbaum et al., 1991) of the system should be involved in the modelling activity, as they have extensive domain knowledge. Thus, tools, techniques, and approaches that support an object-oriented, incremental, evolutionary, and cooperative process become crucial. Currently this support is not adequate. Basically, two types of technologies exist for supporting object-oriented modelling, namely whiteboards and CASE tools. Use of whiteboards is informal, supports synchronous collaboration, and enables extensions of notations. CASE tools, on the other hand, combine a formal notation with the possibility of asynchronous collaboration, tool integration and code generation. To facilitate effective collaboration between developers and domain experts in modelling activities, these two technologies need to be combined, which is what the Knight project aims at.
In the project, we have made continuous studies of actual modelling practice. During the studies, a number of lessons have been learned. First, designers combine formal drawings with informal elements, either as "freehand" drawings or as formal additions to the notation used. Second, filtering of information is used to maximise the use of whiteboard real estate, to show only the important elements of a diagram, and as semantic filtering. Third, all collaborative construction of models has been coordinated by turn-taking. We believe this is general for the kind of work that object-oriented analysis and design is all about. What is not coordinated via turn-taking, however, is verbal communication and the use of other artefacts.
To investigate tool support, we have built and evaluated a prototype called Knight, based on these observations. The major enabling technology for the implementation of Knight is a SMART Board (from SMART Tech): a large (110x150 cm), touch-sensitive computer screen. The basic idea is that combining the SMART Board with gesture input (Rubine, 1991) should retain much of the ease of use that a whiteboard provides while still providing CASE tool functionality. By sketching "boxes and lines" on the large screen, elements of UML class diagrams (Rumbaugh et al., 1999) can be created and manipulated. A large screen real estate is achieved by combining zoom with a radar view. We intend to enhance this by combining a notion of (semantically) zoomable interfaces with fish-eye views (Bederson et al., 1994), thus providing a way of zooming while keeping peripheral awareness of the whole model.
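A very small sketch of the kind of mapping this implies (our illustration only, with invented names; Knight's actual gesture recogniser follows Rubine's approach and is far richer): a recognised stroke is turned into the corresponding UML model element.

    // Illustration only (invented names): a recognised whiteboard stroke is mapped to the
    // UML model element it stands for, e.g. a rectangle gesture creates a class.
    import java.util.ArrayList;
    import java.util.List;

    public class GestureSketch {
        enum Gesture { RECTANGLE, LINE }

        static final List<String> model = new ArrayList<>();

        static void onGesture(Gesture g) {
            if (g == Gesture.RECTANGLE) {
                model.add("Class NewClass");        // a sketched "box" becomes a class
            } else if (g == Gesture.LINE) {
                model.add("Association newAssoc");  // a sketched "line" becomes an association
            }
        }

        public static void main(String[] args) {
            onGesture(Gesture.RECTANGLE);
            onGesture(Gesture.LINE);
            System.out.println(model);
        }
    }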
Our preliminary evaluations of the tool have been positive and have shown that the basic idea of combining the SMART Board with gesture input provides an interaction that is both effective and easy to understand. The tool, however, will have to be evaluated further in real development contexts before further conclusions can be drawn.
URL
http://www.daimi.au.dk/~marius/knight
P11: The COAST object database system
Author
Abstract
Since July 1996, the Complex Object And Schema Transformation (COAST) project at the University of Frankfurt has focused on the design and implementation of a schema evolution support system that uses the versioning concept both at the instance level and at the schema level of object-oriented database management systems (OODBMS). Change management is highly recommended for large and complex schemas of object-oriented databases.
The versioning approach to schema evolution offers a high degree of flexibility. In ordinary systems, two elements depend heavily on the database schema and therefore prevent schema changes in many cases: the database, which is stored according to the types specified in the schema, and the set of application programs on top of the schema, which require certain properties from the instances. The basic idea of the versioning mechanism is to keep the old schema and database state as a version to allow continuous operation of existing application programs. This decouples the schema from existing instances and applications, and allows arbitrary schema updates at any time.
At the schema level, a general update taxonomy provides primitives to modify a schema. However, instead of overriding the previous state it is kept as an old schema version that can be continuously used.
At the instance level, objects are propagated between the access scopes of different schema versions by various types of class-specific (default and user-defined) conversion functions [Lau97b]. Their (re-)execution is triggered by object creations, modifications, and deletions, and can be specified by propagation flags [Lau97a]. This allows applications of different schema versions to cooperate.
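As a purely illustrative Java sketch (the class names are invented; COAST's conversion functions are specified within the OODBMS rather than as free-standing Java), a user-defined conversion function maps an instance of one schema version to its representation in the next:

    // Illustrative sketch (invented names): a conversion function between two schema
    // versions of a Person class, splitting a single name field (v1) into first and
    // last name (v2). COAST triggers such functions on object creation, modification
    // and deletion, as controlled by propagation flags.
    public class ConversionSketch {
        static class PersonV1 { String name; PersonV1(String name) { this.name = name; } }
        static class PersonV2 { String firstName; String lastName; }

        static PersonV2 convert(PersonV1 old) {            // conversion function v1 -> v2
            PersonV2 p = new PersonV2();
            int blank = old.name.indexOf(' ');
            p.firstName = blank < 0 ? old.name : old.name.substring(0, blank);
            p.lastName  = blank < 0 ? ""       : old.name.substring(blank + 1);
            return p;
        }

        public static void main(String[] args) {
            PersonV2 p = convert(new PersonV1("Ada Lovelace"));
            System.out.println(p.firstName + " / " + p.lastName);
        }
    }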
At the physical layer, the propagation of instances is done in a deferred fashion, i.e. objects are propagated only when necessary, to save both time and space. However, the propagation mechanisms preserve time-equivalence, i.e. at the logical level the behaviour is equivalent to an immediate propagation.
The architecture of the COAST prototype (see figure) is similar to that of other prototypes and of commercial OODBMSs. It consists of a database engine and a user interface.
The database engine contains an Object Manager and a Schema Manager which extend the functionality offered by other systems. The Object Manager is based on EOS and supports a generalized object versioning model where versions of the same object can have different types as required by the schema versioning approach. The Schema Manager handles schema versions consisting of classes interrelated by inheritance and association links and provides the schema update primitives to derive new schema versions. The deferred physical propagation is handled by the Propagation Manager which executes conversion functions when necessary.
In addition to the Object Definition Language (ODL), the Schema Editor (implemented in Java) offers a graphical interface to the Schema Manager for developing new schema versions and for specifying propagation flags and conversion functions.
The COAST OODBMS has been described in detail in [LEW97] and was presented at CeBIT'98 and '99 [BL98, Lau99a]. The schema versioning concepts are general and can be applied also to role models [Lau99b]. A prototype implementation can be shown at ECOOP'99.
URL
http://www.dbis.informatik.uni-frankfurt.de/~coast/