43,918 research outputs found

    On Regulatory and Organizational Constraints in Visualization Design and Evaluation

    Problem-based visualization research provides explicit guidance toward identifying and designing for the needs of users, but offers little concrete guidance on factors external to a user's needs that also have implications for visualization design and evaluation. This lack of explicit guidance can leave visualization researchers and practitioners vulnerable to unforeseen constraints beyond the user's needs that can affect the validity of evaluations, or even lead to the premature termination of a project. Here we explore two types of external constraints in depth, regulatory and organizational constraints, and describe how these constraints impact visualization design and evaluation. By borrowing from techniques in software development, project management, and visualization research, we recommend strategies for identifying, mitigating, and evaluating these external constraints through a design study methodology. Finally, we present an application of those recommendations in a healthcare case study. We argue that by explicitly incorporating external constraints into visualization design and evaluation, researchers and practitioners can improve the utility and validity of their visualization solution and improve the likelihood of successful collaborations with industries where external constraints are more prevalent.
    Comment: 9 pages, 2 figures, presented at the BELIV workshop associated with IEEE VIS 201

    An Ontology-Based Method for Semantic Integration of Business Components

    Building new business information systems from reusable components is today a widely adopted approach. Applying this approach in the analysis and design phases is of great interest and requires a particular class of components called Business Components (BC). Business Components are now developed by several vendors and are available in many repositories. However, reusing and integrating them in a new information system requires the detection and resolution of semantic conflicts. Moreover, most integration and semantic conflict resolution systems rely on ontology alignment methods based on a domain ontology. This work is positioned at the intersection of two research areas: the integration of reusable Business Components and the alignment of ontologies for semantic conflict resolution. Our contribution concerns both a BC integration solution based on ontology alignment and a method for enriching the domain ontology used as a support for alignment.
    Comment: IEEE New Technologies of Distributed Systems (NOTERE), 2011 11th Annual International Conference; ISSN: 2162-1896, Print ISBN: 978-1-4577-0729-2, INSPEC Accession Number: 12122775 201
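    As a rough illustration of the kind of label-based matching that ontology alignment builds on, the sketch below compares the concept vocabularies exposed by two hypothetical Business Components using simple string similarity. Real alignment methods use richer linguistic and structural measures; the concept names and the threshold here are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of label-based alignment between two Business Component
# vocabularies. Concept names and the 0.8 threshold are illustrative assumptions.
from difflib import SequenceMatcher

def label_similarity(a: str, b: str) -> float:
    """Normalized string similarity between two concept labels (0..1)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def align(onto_a, onto_b, threshold=0.8):
    """Return candidate correspondences between two concept vocabularies."""
    matches = []
    for ca in onto_a:
        for cb in onto_b:
            score = label_similarity(ca, cb)
            if score >= threshold:
                matches.append((ca, cb, round(score, 2)))
    return matches

# Two hypothetical Business Components exposing overlapping business concepts.
bc_invoice = ["Customer", "InvoiceLine", "VATNumber"]
bc_order = ["Client", "OrderLine", "VatNumber"]

print(align(bc_invoice, bc_order))  # [('VATNumber', 'VatNumber', 1.0)]
```

    Pairs below the threshold (such as "Customer" / "Client") are exactly the semantic conflicts that a richer domain ontology is needed to resolve.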

    Integrating Existing Software Toolkits into VO System

    The Virtual Observatory (VO) is a collection of interoperating data archives and software tools. Taking advantage of the latest information technologies, it aims to provide a data-intensive online research environment for astronomers around the world. A large number of high-quality astronomical software packages and libraries are powerful and easy to use, and have been widely used by astronomers for many years. Integrating these toolkits into the VO system is a necessary and important task for VO developers. The VO architecture depends heavily on Grid and Web services, so the general VO integration route is "Java Ready - Grid Ready - VO Ready". In this paper, we discuss the importance of VO integration for existing toolkits and explore possible solutions. We introduce two efforts in this area from the China-VO project, "gImageMagick" and "Galactic abundance gradients statistical research under grid environment". We also discuss what additional work is needed to convert a Grid service into a VO service.
    Comment: 9 pages, 3 figures, to be published in the SPIE 2004 conference proceedings
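    To make the "wrapping an existing toolkit" idea concrete, here is a minimal sketch of the first step of such an integration: putting a legacy command-line program behind a simple service interface. It assumes ImageMagick's `convert` binary is installed locally (and built with FITS support); it is only a conceptual illustration, not the actual gImageMagick or Grid/VO service code from the China-VO project.

```python
# Conceptual sketch: expose a legacy command-line toolkit (ImageMagick's
# "convert") behind a simple HTTP endpoint as a first wrapping step.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class ConvertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the uploaded image bytes from the request body.
        length = int(self.headers.get("Content-Length", 0))
        with open("input.fits", "wb") as f:
            f.write(self.rfile.read(length))
        # Delegate the real work to the existing toolkit.
        subprocess.run(["convert", "input.fits", "output.png"], check=True)
        with open("output.png", "rb") as f:
            payload = f.read()
        self.send_response(200)
        self.send_header("Content-Type", "image/png")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ConvertHandler).serve_forever()
```

    The later "Grid Ready" and "VO Ready" steps would replace this plain HTTP layer with Grid service middleware and VO-standard interfaces, which is the additional work the paper discusses.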

    Didactic Networks and exemplification

    After a general overview in a previous paper [AMJ10b], in which we proposed Didactic Networks (DN) as a new way of developing and exploiting web-learning content, we offer here a deeper study showing how to use them for web-learning design and content generation based on Instructional Theory, with the coherence guarantee of RST [MT99]. By using a set of expressivity patterns, it is possible to obtain different final "products" from the DNs, such as web-learning lessons at different levels or covering different aspects, depending on the target, documents, or evaluation tests. In parallel we are defining the Fundamental Cognitive Networks (FCN), which deal with the most common patterns human beings use to think and communicate ideas. This FCN set reuses the representation of Concepts, Procedures and Principles defined here, and it is the main topic of a paper we are working on for the very near future.
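    A Didactic Network can be pictured as a typed graph over Concepts, Procedures and Principles connected by rhetorical relations. The sketch below only illustrates that data structure under stated assumptions: the relation names and the lesson content are invented and do not come from the paper.

```python
# Schematic sketch of a Didactic Network as a typed graph. Node kinds follow
# the paper's terminology; relations and example content are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                  # "Concept", "Procedure" or "Principle"
    relations: list = field(default_factory=list)  # (relation name, target Node)

    def relate(self, relation, target):
        self.relations.append((relation, target))

# A tiny lesson fragment: a concept elaborated by a procedure and an example.
fractions = Node("Fractions", "Concept")
adding = Node("Adding fractions", "Procedure")
example = Node("1/2 + 1/4 = 3/4", "Concept")
fractions.relate("elaborated-by", adding)
adding.relate("exemplified-by", example)

def render(node, depth=0):
    """Walk the network and print one possible linearization of the lesson."""
    print("  " * depth + f"{node.kind}: {node.name}")
    for relation, target in node.relations:
        print("  " * (depth + 1) + f"[{relation}]")
        render(target, depth + 2)

render(fractions)
```

    Different expressivity patterns would correspond to different traversals of the same graph, yielding, for example, a lesson outline or an evaluation test from one network.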

    Toxic comment classification using convolutional and recurrent neural networks

    This thesis aims to provide a reasonable solution for automatically categorizing sentences into types of toxicity using different types of neural networks. There are six categories: toxic, severe toxic, obscene, threat, insult and identity hate. Three different implementations have been studied to accomplish this objective: LSTM (Long Short-Term Memory), GRU (Gated Recurrent Unit) and convolutional neural networks. The thesis does not aim to maximize the performance of each individual model but to compare them, under the same parameters, in terms of their adequacy for natural language processing. In addition, one distinguishing aspect of this project is the analysis of LSTM neuron activations and thus of the relationship between individual words and the final sentence classification decision. In conclusion, the three models performed almost identically, and the extraction of LSTM activations provided a very accurate and visual understanding of the decisions taken by the network.
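    For concreteness, a minimal sketch of one of the three compared architectures follows: an LSTM classifier with six sigmoid outputs for multi-label toxicity. The vocabulary size, sequence length and layer sizes are illustrative assumptions rather than the thesis' actual settings, and the data is random stand-in input.

```python
# Minimal sketch of a multi-label LSTM toxicity classifier (hyperparameters
# are illustrative assumptions, not the thesis' settings).
import numpy as np
from tensorflow.keras import layers, models

VOCAB, MAXLEN, LABELS = 20000, 100, 6  # 6 labels: toxic ... identity hate

model = models.Sequential([
    layers.Embedding(VOCAB, 128),          # token ids -> dense word vectors
    layers.LSTM(64),                       # sequence -> fixed-size representation
    layers.Dropout(0.5),
    layers.Dense(LABELS, activation="sigmoid"),  # independent probability per label
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])

# Dummy tokenized comments stand in for the preprocessed dataset.
X = np.random.randint(0, VOCAB, size=(32, MAXLEN))
y = np.random.randint(0, 2, size=(32, LABELS))
model.fit(X, y, epochs=1, batch_size=8)
```

    Swapping the LSTM layer for a GRU, or for a Conv1D plus GlobalMaxPooling1D stack, yields the other two compared model families under the same input and output setup.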

    The design of an indirect method for the human presence monitoring in the intelligent building

    This article describes the design and verification of an indirect method for predicting the course of CO2 concentration (ppm) from the measured indoor temperature T_indoor (°C), indoor relative humidity rH_indoor (%) and outdoor temperature T_outdoor (°C), using an Artificial Neural Network (ANN) trained with the Bayesian Regulation Method (BRM), in order to monitor the presence of people in individual premises of an Intelligent Administrative Building (IAB) using the PI System SW Tool (PI - Plant Information enterprise information system). Correlation Analysis (CA), the Mean Squared Error (MSE) and Dynamic Time Warping (DTW) criteria were used to verify and classify the results obtained. Within the proposed method, the LMS adaptive filter algorithm was used to remove noise from the resulting predicted course. To verify the method, long-term experiments were performed over three periods: February 1 to February 28, 2015, June 1 to June 28, 2015, and February 8 to February 14, 2015. For the best results of the trained ANN BRM within the CO2 prediction, the correlation coefficient R for the proposed method reached up to 92%. The verification confirmed that the method can be used for monitoring the presence of people in the monitored IAB premises. The designed indirect method of CO2 prediction has the potential to reduce the investment and operating costs of the IAB by reducing the number of sensors implemented in the IAB within the process of managing its operational and technical functions. The article also describes the design and implementation of the FEIVISUAL visualization application for mobile devices, which monitors the technological processes in the IAB. The application is optimized for Android devices and is platform independent. It requires an application server that communicates with the data server and with the developed application, whose data is obtained from the data storage of the PI System via a PI Web API (Application Programming Interface) REST client.
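    As a rough sketch of the indirect prediction step, the following trains a small feed-forward network mapping (T_indoor, rH_indoor, T_outdoor) to CO2 concentration and reports the correlation coefficient R used as an evaluation criterion in the paper. scikit-learn offers no Bayesian Regularization trainer, so plain L2 regularization is used here as a simplified stand-in, and the measurements are synthetic rather than PI System data.

```python
# Sketch of the indirect CO2 prediction: a small ANN trained on
# (T_indoor, rH_indoor, T_outdoor) -> CO2 [ppm]. L2 regularization (alpha)
# replaces the paper's Bayesian Regulation Method; data is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(20, 26, 500),   # T_indoor [°C]
    rng.uniform(30, 60, 500),   # rH_indoor [%]
    rng.uniform(-5, 30, 500),   # T_outdoor [°C]
])
# Made-up relationship standing in for measured CO2 concentration [ppm].
co2 = 400 + 15 * (X[:, 0] - 20) + 5 * (X[:, 1] - 30) + rng.normal(0, 20, 500)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), alpha=1e-2, max_iter=2000, random_state=0),
)
model.fit(X, co2)
pred = model.predict(X)
print("Correlation R:", np.corrcoef(co2, pred)[0, 1])
```

    In the paper's setting, the predicted CO2 course would additionally be smoothed with an LMS adaptive filter before being used to infer occupancy.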