
    Towards Model-Driven Development of Access Control Policies for Web Applications

    We introduce a UML-based notation for graphically modeling systems’ security aspects in a simple and intuitive way, together with a model-driven process that transforms graphical specifications of access control policies into XACML. These XACML policies are then translated into FACPL, a policy language with a formal semantics, and the resulting policies are evaluated by means of a Java-based software tool.
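The abstract above describes evaluating access control policies derived from XACML. As a minimal sketch of the idea, the toy evaluator below combines hypothetical rules (not the paper's FACPL semantics) under XACML's standard deny-overrides combining algorithm; the rule predicates and attribute names are invented for illustration.

```python
# Toy XACML-style evaluation: each rule is (match predicate, effect);
# deny-overrides means any matching Deny wins over any Permit.

def evaluate(rules, request):
    """Return 'Permit', 'Deny', or 'NotApplicable' under deny-overrides."""
    decisions = [effect for match, effect in rules if match(request)]
    if "Deny" in decisions:
        return "Deny"
    if "Permit" in decisions:
        return "Permit"
    return "NotApplicable"

# Hypothetical rules over request attributes (role, resource).
rules = [
    (lambda r: r["role"] == "admin", "Permit"),
    (lambda r: r["resource"] == "audit-log" and r["role"] != "auditor", "Deny"),
]

print(evaluate(rules, {"role": "admin", "resource": "audit-log"}))  # Deny
```

Both rules match the request, so the Deny effect overrides the Permit, which is exactly the property the combining algorithm guarantees.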

    A semantic-based platform for the digital analysis of architectural heritage

    This essay focuses on the fields of architectural documentation and digital representation. We present research concerning the development of an information system at the scale of the architecture, taking into account the relationships that can be established between the representation of buildings (shape, dimension, state of conservation, hypothetical restitution) and heterogeneous information from various fields (such as the technical, the documentary, or the historical). The proposed approach aims to organize multiple representations (and associated information) around a semantic description model, with the goal of defining a system for the multi-field analysis of buildings.

    An Agent-Based Spatially Explicit Epidemiological Model in MASON

    This paper outlines the design and implementation of an agent-based epidemiological simulation system. The system was implemented in the MASON toolkit, a set of Java-based agent-simulation libraries. This epidemiological simulation system is robust and extensible for multiple applications, including classroom demonstrations of many types of epidemics and detailed numerical experimentation on a particular disease. The application has been made available as an applet on the MASON web site, and as source code on the author's web site.
    Keywords: Epidemiology, Social Networks, Agent-Based Simulation, MASON Toolkit
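To illustrate the agent-based epidemiological style described above, here is a self-contained toy SIR (susceptible/infected/recovered) simulation over random daily contacts. It is a hedged sketch in Python, not the paper's MASON/Java implementation; the population size, infection probability, and recovery time are invented parameters.

```python
import random

# Toy agent-based SIR epidemic: agents meet at random each step, and an
# infected agent transmits with probability p_infect per contact.
random.seed(1)

N, p_infect, recover_after = 50, 0.3, 3
state = {i: "S" for i in range(N)}       # each agent is S, I, or R
days_infected = {i: 0 for i in range(N)}
state[0] = "I"                           # one index case

for step in range(20):
    contacts = [(random.randrange(N), random.randrange(N)) for _ in range(N)]
    for a, b in contacts:
        for src, dst in ((a, b), (b, a)):
            if state[src] == "I" and state[dst] == "S" and random.random() < p_infect:
                state[dst] = "I"
    for i in range(N):                   # progress infections toward recovery
        if state[i] == "I":
            days_infected[i] += 1
            if days_infected[i] >= recover_after:
                state[i] = "R"

ever_infected = sum(s != "S" for s in state.values())
print(ever_infected, "of", N, "agents were ever infected")
```

Swapping the random-contact step for a fixed social network is the natural extension toward the social-network experiments the abstract mentions.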

    Soft set theory based decision support system for mining electronic government dataset

    Electronic government (e-gov) is applied to support performance and create more efficient and effective public services. Grouping data in soft-set theory can be considered as a decision-making technique for determining the maturity level of e-government use. So far, the uncertainty of the data obtained through questionnaires has not been fully exploited as a reference for the government in determining the direction of future e-gov development policy. This study presents the maximum attribute relative (MAR) based on soft set theory to classify attribute options. The results show that facilitating conditions (FC) are the highest variable in influencing people to use e-government, followed by performance expectancy (PE) and system quality (SQ). The results provide useful information for decision makers to make policies about their citizens and potentially provide recommendations on how to design and develop e-government systems to improve public services.
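A rough sketch of the attribute-ranking idea: in a Boolean-valued soft set, each row is a respondent and each column an attribute, and attributes can be ordered by relative support. This loosely mirrors the maximum attribute relative (MAR) selection; the paper's exact MAR computation and the survey data here are assumptions for illustration.

```python
# Hypothetical Boolean soft set: 1 means the respondent affirmed the
# attribute (FC, PE, SQ are the variables named in the abstract).
survey = {
    "FC": [1, 1, 1, 0, 1],
    "PE": [1, 0, 1, 0, 1],
    "SQ": [0, 1, 1, 0, 0],
}

# Relative support of each attribute = fraction of respondents affirming it.
support = {attr: sum(vals) / len(vals) for attr, vals in survey.items()}

# Rank attributes by support, highest first.
ranking = sorted(support, key=support.get, reverse=True)
print(ranking)  # ['FC', 'PE', 'SQ']
```

With this made-up data the ordering FC > PE > SQ matches the ordering reported in the abstract, which is why those attribute names were chosen for the example.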

    Towards 3D modeling of brain tumors using endoneurosonography and neural networks

    Minimally invasive surgeries have become popular because they reduce the typical risks of traditional interventions. In neurosurgery, recent trends suggest the combined use of endoscopy and ultrasound (endoneurosonography, or ENS) for 3D virtualization of brain structures in real time. The ENS information can be used to generate 3D models of brain tumors during surgery. This paper introduces a methodology for 3D modeling of brain tumors using ENS and unsupervised neural networks; the use of self-organizing maps (SOM) and neural gas networks (NGN) is particularly studied. Compared to other techniques, 3D modeling using neural networks offers advantages, since tumor morphology is directly encoded in the synaptic weights of the network, no a priori knowledge is required, and the representation can be developed in two stages: off-line training and on-line adaptation. Experimental tests were performed using virtualized phantom brain tumors. At the end of the paper, the results of 3D modeling from an ENS database are presented.
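The "morphology encoded in synaptic weights" idea can be sketched with a minimal one-dimensional self-organizing map fitted to synthetic 3D surface samples. This is a toy stand-in for the paper's SOM/NGN pipeline: the sample curve, node count, learning-rate schedule, and neighborhood function are all invented for illustration, not taken from the ENS setup.

```python
import math
import random

# Synthetic 3D samples on a gently rising circular curve (stand-in for
# surface points extracted from ENS data).
random.seed(0)
samples = [(math.cos(t), math.sin(t), 0.1 * t)
           for t in (random.uniform(0, 6.28) for _ in range(200))]

# Chain of 10 SOM nodes with random initial 3D weights.
nodes = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(10)]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

for epoch in range(30):
    lr = 0.5 * (1 - epoch / 30)             # decaying learning rate
    for s in samples:
        best = min(range(len(nodes)), key=lambda i: dist2(nodes[i], s))
        for i, w in enumerate(nodes):
            h = math.exp(-abs(i - best))    # neighborhood falloff on the chain
            for d in range(3):
                w[d] += lr * h * (s[d] - w[d])

# After training, the node weights approximate the sampled shape:
radii = [math.hypot(w[0], w[1]) for w in nodes]
print("node radii span", round(min(radii), 2), "to", round(max(radii), 2))
```

After training, the shape lives entirely in `nodes`, which is the sense in which the morphology is "encoded in the synaptic weights"; on-line adaptation would simply continue the same update loop with incoming samples.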

    Interpretation at the controller's edge: designing graphical user interfaces for the digital publication of the excavations at Gabii (Italy)

    This paper discusses the authors’ approach to designing an interface for the Gabii Project’s digital volumes that attempts to fuse elements of traditional synthetic publications and site reports with rich digital datasets. Archaeology, and classical archaeology in particular, has long engaged with questions of the formation and lived experience of towns and cities. Such studies might draw on evidence of local topography, the arrangement of the built environment, and the placement of architectural details, monuments and inscriptions (e.g. Johnson and Millett 2012). Fundamental to the continued development of these studies is the growing body of evidence emerging from new excavations. Digital techniques for recording evidence “on the ground,” notably SFM (structure from motion, also known as close-range photogrammetry) for the creation of detailed 3D models, and techniques for scene-level modeling in 3D, have advanced rapidly in recent years. These parallel developments have opened the door for approaches to the study of the creation and experience of urban space driven by scene-level reconstruction models (van Roode et al. 2012, Paliou et al. 2011, Paliou 2013) explicitly combined with detailed SFM- or scanning-based 3D models representing stratigraphic evidence. It is essential to understand the subtle but crucial impact of the design of the user interface on the interpretation of these models. In this paper we focus on the impact of design choices for the user interface, and make connections between those choices and the broader discourse in archaeological theory surrounding the creation and consumption of archaeological knowledge. As a case in point, we take the prototype interface being developed within the Gabii Project for the publication of the Tincu House. In discussing our own evolving practices in engagement with the archaeological record created at Gabii, we highlight some of the challenges of undertaking theoretically-situated user interface design, and their implications for the publication and study of archaeological materials.

    On the decomposition of tabular knowledge systems

    Recently there has been a growing interest in the decomposition of knowledge-based systems and decision tables. Much work in this area has adopted an informal approach. In this paper, we first formalize the notion of decomposition, and then we study some interesting classes of decompositions. The proposed classification can be used to formulate design goals to master the decomposition of large decision tables into smaller components. Importantly, carrying out a decomposition eliminates redundant information from the knowledge base, thereby removing, right from the beginning, a possible source of inconsistency. This, in turn, makes subsequent verification and validation smoother.
    Keywords: Knowledge; Systems
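One simple instance of the redundancy elimination discussed above can be sketched concretely: if a decision table's action never depends on one of its condition columns, that column can be projected out, halving the table. This is only an illustrative special case in Python, under an invented three-condition table; the paper's formal decomposition classes are richer.

```python
from itertools import product

# Decision table: conditions (c1, c2, c3) -> action. Here the action
# depends only on c1 and c2, so c3 is redundant by construction.
table = {(c1, c2, c3): ("approve" if c1 and c2 else "reject")
         for c1, c2, c3 in product([0, 1], repeat=2 + 1)}

def is_redundant(table, col):
    """True if flipping condition `col` never changes the action."""
    return all(table[row] == table[row[:col] + (1 - row[col],) + row[col + 1:]]
               for row in table)

def project_out(table, col):
    """Remove a redundant condition column, merging duplicate rules."""
    return {row[:col] + row[col + 1:]: act for row, act in table.items()}

assert is_redundant(table, 2) and not is_redundant(table, 0)
small = project_out(table, 2)
print(len(table), "rules ->", len(small), "rules")  # 8 rules -> 4 rules
```

Detecting and projecting out such columns before verification removes one obvious source of inconsistency (two rules differing only in an irrelevant condition but disagreeing on the action), echoing the abstract's point.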
