16 research outputs found

    Domain and Specification Models for Software Engineering

    This paper discusses our approach to representing application domain knowledge for specific software engineering tasks. Application domain knowledge is embodied in a domain model. Domain models are used to assist in the creation of specification models. Although many different specification models can be created from any particular domain model, each specification model is consistent and correct with respect to the domain model. One aspect of the system, its hierarchical organization, is described in detail.
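
    The key constraint above, that every specification model derived from a domain model must remain consistent with it, can be illustrated with a small sketch. The Python snippet below is illustrative only and is not the paper's system; all class and function names are invented here.

        from dataclasses import dataclass, field

        @dataclass
        class DomainModel:
            # entity name -> attributes the domain defines for it
            entities: dict[str, set[str]] = field(default_factory=dict)

        @dataclass
        class SpecificationModel:
            # entity name -> attributes the specification references
            references: dict[str, set[str]] = field(default_factory=dict)

        def is_consistent(spec: SpecificationModel, domain: DomainModel) -> bool:
            """A specification is consistent if it only uses entities and attributes the domain defines."""
            return all(
                entity in domain.entities and attrs <= domain.entities[entity]
                for entity, attrs in spec.references.items()
            )

        domain = DomainModel({"Order": {"id", "total"}, "Customer": {"id", "name"}})
        print(is_consistent(SpecificationModel({"Order": {"id"}}), domain))    # True
        print(is_consistent(SpecificationModel({"Invoice": {"id"}}), domain))  # False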

    System Analysis as Scientific Inquiry

    Information systems are understood as models or representations of an application domain. System analysis is a mode of inquiry for purposes of understanding the domain and effecting change in it. Scientific inquiry also aims at understanding and describing a domain. In contrast to system analysis, scientific inquiry rests on a 4000-year history; its processes and methods are well accepted and arguably successful. This paper explores the parallels between the two processes and shows the implications of viewing system development as a kind of scientific inquiry. A descriptive survey of system development in organizations presents empirical indications as to whether these parallels are in fact observed in system development practice.

    Reuse: A knowledge-based approach

    This paper describes our research in automating the reuse process through the use of application domain models. Application domain models are explicit formal representations of the application knowledge necessary to understand, specify, and generate application programs. Furthermore, they provide a unified repository for the operational structure, rules, policies, and constraints of a specific application area. In our approach, domain models are expressed in terms of a transaction-based meta-modeling language. This paper describes in detail the creation and maintenance of hierarchical structures. These structures are created through a process that includes reverse engineering of data models with supplementary enhancement from application experts. Source code is also reverse engineered but is not a major source of domain model instantiation at this time. In the second phase of the software synthesis process, program specifications are interactively synthesized from an instantiated domain model. These specifications are currently integrated into a manual programming process but will eventually be used to derive executable code with mechanically assisted transformations. This research is performed within the context of programming-in-the-large systems. Although our goals are ambitious, we are implementing the synthesis system incrementally so that we can realize tangible results. The client/server architecture is capable of supporting 16 simultaneous X/Motif users and tens of thousands of attributes and classes. Domain models have been partially synthesized from five different application areas. As additional domain models are synthesized and additional knowledge is gathered, we will inevitably add to and modify our representation. However, our current experience indicates that it will scale and expand to meet our modeling needs.
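
    As a rough illustration of the hierarchical structures discussed above, the Python sketch below models domain classes whose attributes are inherited along a hierarchy. It is a generic toy example, not the authors' transaction-based meta-modeling language; the class and attribute names are invented.

        from dataclasses import dataclass, field
        from typing import Optional

        @dataclass
        class DomainClass:
            name: str
            attributes: set[str] = field(default_factory=set)
            parent: Optional["DomainClass"] = None   # one step up the hierarchy

            def all_attributes(self) -> set[str]:
                """Attributes defined on this class plus those inherited along the hierarchy."""
                inherited = self.parent.all_attributes() if self.parent else set()
                return inherited | self.attributes

        # A tiny hierarchy of the kind that could be recovered from a data model.
        account = DomainClass("Account", {"account_id", "owner"})
        savings = DomainClass("SavingsAccount", {"interest_rate"}, parent=account)
        print(savings.all_attributes())   # {'account_id', 'owner', 'interest_rate'}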

    Domain analysis within the GenSIF framework

    The GenSIF framework, which targets very large, distributed, and complex software systems, has recently been proposed to support a form of systems engineering and systems development in which systems integration is considered from the beginning. One of the components of GenSIF is domain analysis, which leads to the design of a domain model. The specific needs GenSIF has in this area were investigated, with an emphasis on domain modeling. The main points addressed in that investigation were the information relevant to the domain modeling process and the type of domain model required. Based on these results, an approach to domain modeling for GenSIF was developed that provides a specific graphical notation for creating a semiformal kind of domain model. A few modeling examples for the application domain of a university department were designed to evaluate this notation. In addition, the major aspects of applying a computer-based tool to domain analysis as a concept of GenSIF were analysed.
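
    To make the university-department example concrete, here is a minimal Python sketch of the kind of information a semiformal domain model for that domain might capture; it is an invented stand-in, not the graphical notation developed for GenSIF, and the entity and relation names are assumptions.

        # A semiformal model of the "university department" domain as plain data,
        # standing in for a graphical notation.
        department_model = {
            "entities": ["Department", "Professor", "Course", "Student"],
            "relations": [
                ("Professor", "works_in", "Department"),
                ("Professor", "teaches", "Course"),
                ("Student", "enrolled_in", "Course"),
                ("Course", "offered_by", "Department"),
            ],
        }

        def relations_of(entity: str) -> list[tuple[str, str, str]]:
            """Every relation in which the given entity participates."""
            return [r for r in department_model["relations"] if entity in (r[0], r[2])]

        print(relations_of("Course"))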

    Model-driven development of cross-platform mobile applications from a set of heuristic rules based on pre-conceptual schemas

    Model-driven development (MDD) approaches aim to increase development-team productivity and decrease software time-to-market. Such approaches comprise a set of model-to-model and model-to-text transformation rules for generating source code from models. Some authors propose MDD approaches for cross-platform mobile applications, so we perform a systematic literature review of such approaches, obtaining 39 primary studies grouped into 19 different MDD approaches. We observe that 100.0 % of the approaches lack close-to-natural modeling languages, 36.8 % lack design patterns, and 84.2 % lack usability features. In addition, 42.1 % of the approaches produce code in out-of-date programming languages as their automation result. Therefore, we propose an MDD approach for cross-platform mobile applications based on pre-conceptual schemas. Such schemas allow us to guarantee a close-to-natural modeling language and to include design patterns and usability features. Moreover, we complete the UN-LEND specification language as an intermediate model between pre-conceptual schemas and cross-platform mobile applications, avoiding the use of out-of-date programming languages. Then, we design a pre-conceptual-schema-based metamodel in order to develop an MDD prototype based on the Eclipse Modeling Framework and XPAND. We propose a set of heuristic rules divided into model, view, and controller layers, with the pre-conceptual schema as the rule left-hand side, UN-LEND as the intermediate model, and Java-Android and Swift-iOS code as the rule right-hand side. We validate our approach with a case study of MobileSQUARE, an Android application for requirements gathering based on a question-answering model. As a result, we automatically generate 90.86 % of the MobileSQUARE application by using our approach. Specifically, the model layer is close to fully automated, with an automation percentage of 98.95 %, compared with 82.31 % for the view layer and 84.56 % for the controller layer. We expect researchers and software engineering practitioners to increase their productivity and decrease software time-to-market based on our results. We identify some future work and challenges, such as: including Programming eXperience (PX) heuristics in the resulting code; allowing round-trip transformations between code, UN-LEND, and pre-conceptual schemas; including rules related to pre-conceptual schema vectors, matrices, and achievement relationships; improving the controller- and view-layer rules to increase the automation percentage; and developing a compiler for UN-LEND models.
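
    The core mechanism, a rule whose left-hand side is a model element and whose right-hand side is generated source code, can be sketched in a few lines. The Python toy below is only a simplified stand-in for the thesis' XPAND templates and UN-LEND intermediate model; the Concept class and the emitted Java-like text are invented for illustration.

        from dataclasses import dataclass

        @dataclass
        class Concept:
            # simplified stand-in for a node of a pre-conceptual schema
            name: str
            attributes: list[str]

        def to_model_layer_class(concept: Concept) -> str:
            """Right-hand side of the rule: emit a plain Java-like model-layer class."""
            fields = "\n".join(f"    private String {attr};" for attr in concept.attributes)
            return f"public class {concept.name} {{\n{fields}\n}}"

        print(to_model_layer_class(Concept("Requirement", ["identifier", "description"])))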

    Selecting Keyword Search Terms in Computer Forensics Examinations Using Domain Analysis and Modeling

    The motivation for computer forensics research includes the increase in crimes that involve the use of computers, the increasing capacity of digital storage media, a shortage of trained computer forensics technicians, and a lack of computer forensics standard practices. The hypothesis of this dissertation is that domain modeling of the computer forensics case environment can serve as a methodology for selecting keyword search terms and planning forensics examinations. This methodology can increase the quality of forensics examinations without significantly increasing the combined effort of planning and executing keyword searches. The contributions of this dissertation include: (1) a computer forensics examination planning method that utilizes the analytical strengths and knowledge sharing abilities of domain modeling in artificial intelligence and software engineering; (2) a computer forensics examination planning method that provides investigators and analysts with a tool for deriving keyword search terms from a case domain model; and (3) the design and execution of experiments that illustrate the utility of the case domain modeling method. Three experiment trials were conducted to evaluate the effectiveness of case domain modeling, and each experiment trial used a distinct computer forensics case scenario: an identity theft case, a burglary and money laundering case, and a threatening email case. Analysis of the experiments supports the hypothesis that case domain modeling results in more evidence found during an examination with more effective keyword searching. Additionally, experimental data indicates that case domain modeling is most useful when the evidence disk has a relatively high occurrence of text-based documents and when vivid case background details are available. A pilot study and a case study were also performed to evaluate the utility of case domain modeling for typical law enforcement investigators. In these studies the subjects used case domain models in a computer forensics service solicitation activity. The results of these studies indicate that typical law enforcement officers have a moderate comprehension of the case domain modeling method and that they recognize a moderate amount of utility in the method. Case study subjects also indicated that the method would be more useful if supported by a semi-automated tool.
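
    The central idea, deriving keyword search terms from a case domain model, can be illustrated with a short sketch. The Python snippet below is not the dissertation's method; the model layout and the term-generation rule are assumptions made purely for illustration.

        # A toy case domain model; its structure and the flattening rule are invented here.
        case_model = {
            "Suspect":   {"aliases": ["J. Smith", "jsmith99"]},
            "Offense":   {"keywords": ["wire transfer", "shell company"]},
            "Artifacts": {"keywords": ["invoice", "account statement"]},
        }

        def keyword_terms(model: dict) -> list[str]:
            """Flatten the model's values into a deduplicated, sorted list of search terms."""
            terms = [term for entity in model.values() for values in entity.values() for term in values]
            return sorted(set(terms))

        print(keyword_terms(case_model))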

    A Multi-Modal, Modified-Feedback and Self-Paced Brain-Computer Interface (BCI) to Control an Embodied Avatar's Gait

    Brain-computer interfaces (BCIs) have been used to control the gait of a virtual self-avatar with the aim of being used in gait rehabilitation. A BCI decodes the brain signals representing a desire to do something and transforms them into a control command for external devices. The feelings described by participants when they control a self-avatar in an immersive virtual environment (VE) demonstrate that humans can be embodied in the surrogate body of an avatar (ownership illusion). It has recently been shown that inducing the ownership illusion and then manipulating the movements of one's self-avatar can lead to compensatory motor control strategies. In order to maximize this effect, there is a need for a method that measures and monitors the embodiment levels of participants immersed in virtual reality (VR) so as to induce and maintain a strong ownership illusion. This is particularly true given that reaching a high level of BCI performance and reaching a high level of embodiment are interconnected: to reach one, the other must be reached as well. Some limitations of many existing systems hinder their adoption for neurorehabilitation: (1) some use motor imagery (MI) of movements other than gait; (2) most systems allow the user to take single steps or to walk but not both, which prevents users from progressing from steps to gait; (3) most function in a single BCI mode (cue-paced or self-paced), which prevents users from progressing from machine-dependent to machine-independent walking. The aforementioned limitations can be overcome by combining different control modes and options in a single system. However, this would have a negative impact on BCI performance, diminishing its usefulness as a potential rehabilitation tool, so BCI performance would then need to be enhanced. For this purpose, many techniques have been used in the literature, such as providing modified feedback (whereby the presented feedback is not consistent with the user's MI), sequential training (recalibrating the classifier as more data becomes available), and the use of a generic classifier. This thesis was developed over three studies. The objective of study 1 was to investigate the possibility of measuring the level of embodiment of an immersive self-avatar, during the performing, observing, and imagining of gait, using electroencephalogram (EEG) techniques, by presenting visual feedback that conflicts with the desired movement of embodied participants. The objective of study 2 was to develop and validate a BCI to control single steps and forward walking of an immersive virtual reality (VR) self-avatar, using mental imagery of these actions, in cue-paced and self-paced modes. Different performance enhancement strategies were implemented to increase BCI performance. The data of these two studies were then used in study 3 to construct a generic classifier that could eliminate offline calibration for future users and shorten training time. Twenty different healthy participants took part in studies 1 and 2. In study 1, participants wore an EEG cap and motion capture markers, with an avatar displayed in a head-mounted display (HMD) from a first-person perspective (1PP). They were cued to either perform, watch, or imagine a single step forward or the initiation of walking on a treadmill. For some of the trials, the avatar took a step with the contralateral limb or stopped walking before the participant stopped (modified feedback).
In study 2, participants completed a 4-day sequential training to control the gait of an avatar in both BCI modes. In cue-paced mode, they were cued to imagine a single step forward, using their right or left foot, or to walk forward. In self-paced mode, they were instructed to reach a target by using the MI of multiple steps (switch control mode) or by maintaining the MI of forward walking (continuous control mode). The avatar moved in response to two calibrated regularized linear discriminant analysis (RLDA) classifiers that used the μ-band (8-12 Hz) power spectral density (PSD) over the foot area of the motor cortex as features. The classifiers were retrained after every session. During the training, and for some of the trials, positive modified feedback was presented to half of the participants, whereby the avatar moved correctly regardless of the participant's real performance. In both studies, the participants' subjective experience was analyzed using a questionnaire. Results of study 1 show that subjective levels of embodiment correlate strongly with the power differences of the event-related synchronization (ERS) within the μ frequency band, over the motor and pre-motor cortices, between the modified and regular feedback trials. Results of study 2 show that all participants were able to operate the cue-paced BCI and the self-paced BCI in both modes. For the cue-paced BCI, the average offline performance (classification rate) was 67±6.1% on day 1 and 86±6.1% on day 3, showing that the recalibration of the classifiers enhanced the offline performance of the BCI (p < 0.01). The average online performance was 85.9±8.4% for the modified feedback group (77-97%) versus 75% for the non-modified feedback group. For the self-paced BCI, the average performance was 83% in switch control mode and 92% in continuous control mode, with a maximum of 12 seconds of control. Modified feedback enhanced BCI performance (p = 0.001). Finally, results of study 3 show that the constructed generic models performed as well as models obtained from participant-specific offline data. These results show that it is possible to design a participant-independent, zero-training BCI.
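
    A minimal sketch of the kind of classifier named above, regularized LDA over μ-band (8-12 Hz) PSD features, is shown below in Python. It is not the thesis' pipeline: the EEG array is random placeholder noise, and the channel count, sampling rate, window length, and labels are assumptions.

        import numpy as np
        from scipy.signal import welch
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        fs = 250                                      # assumed sampling rate (Hz)
        rng = np.random.default_rng(0)
        eeg = rng.standard_normal((40, 8, 2 * fs))    # 40 trials, 8 channels, 2 s each (placeholder)
        labels = rng.integers(0, 2, size=40)          # e.g. rest vs. imagined step (placeholder)

        def mu_band_power(trials: np.ndarray) -> np.ndarray:
            """Mean 8-12 Hz PSD per channel, giving one feature vector per trial."""
            freqs, psd = welch(trials, fs=fs, nperseg=fs, axis=-1)
            band = (freqs >= 8) & (freqs <= 12)
            return psd[..., band].mean(axis=-1)       # shape: (trials, channels)

        features = mu_band_power(eeg)
        clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")  # shrinkage-regularized LDA
        clf.fit(features[:30], labels[:30])
        print("held-out accuracy:", clf.score(features[30:], labels[30:]))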

    Methods for Efficient and Accurate Discovery of Services

    With an increasing number of services developed and offered in enterprise settings and on the Web, users can hardly verify their requirements manually in order to find appropriate services. In this thesis, we develop a method to discover semantically described services. We exploit comprehensive service and request descriptions so that a wide variety of use cases can be supported. In our discovery method, we compute the matchmaking decision by employing an efficient model checking technique.
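
    As a deliberately simplified stand-in for the matchmaking decision described above (the thesis employs model checking over semantic descriptions), the Python sketch below treats a match as simple capability coverage; the description format and names are invented.

        # Hypothetical, simplified descriptions; the thesis works over richer semantic models.
        service_description = {"capabilities": {"book-flight", "send-confirmation"}}
        request_description = {"required": {"book-flight"}}

        def matches(service: dict, request: dict) -> bool:
            """The service matches if it covers every capability the request demands."""
            return request["required"] <= service["capabilities"]

        print(matches(service_description, request_description))   # True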