
    Web-based technology for storage and processing of multi-component data in seismology

    Seismic databases and processing tools currently available are mainly limited to classic three-component seismic recordings and cannot easily handle collocated multi-component, multi-disciplinary datasets. Further, these seismological databases depend on event-related data and cannot manage state-of-the-art continuous waveform data input. None of them allows for automated requests for data available at seismic data centers or for sharing specific data with users outside one institute. Some seismic databases even depend on licensed database engines, which contradicts the open-source character of most software packages used in seismology. This study intends to provide a suitable answer to the deficiencies of existing seismic databases. SeisHub is a novel web-based database approach created for archiving, processing, and sharing geophysical data and metadata (data describing data), particularly adapted for seismic data. The implemented database prototype offers the full functionality of a native XML database combined with the versatility of a RESTful web service. The XML database itself uses a standard relational database as back-end, currently tested with PostgreSQL (http://www.postgres.org) and SQLite (http://www.sqlite.org). This structure combines the best of both worlds: on the one hand, the power of SQL for querying and manipulating data; on the other, the freedom to use any standard connected to XML, e.g. document conversion via XSLT (Extensible Stylesheet Language Transformations) or resource validation via XSD (XML Schema). The actual resources and any additional services are available via fixed Uniform Resource Identifiers (URIs), whereas the database back-end stores the original XML documents and all related indexed values. Indexes are generated using the XPath language and may be added at any time during runtime.
    The flexibility of the XML/SQL mixture introduced above enables the user to include parameters or results, as well as metadata from additional or yet unknown monitoring techniques, at any time. SeisHub also comprises features of a “classical seismic database”, providing direct access to continuous seismic waveform data and associated metadata. Additionally, SeisHub offers various access protocols (HTTP/HTTPS, SFTP, SSH), an extensible plug-in system, user management, and a sophisticated web-based administration front-end. The SeisHub database is an open-source project, and the latest development release can be downloaded from the project home page, http://www.seishub.org. SeisHub has already been deployed as the central database component within two scientific projects: Exupéry (http://www.exupery-vfrs.de), a mobile Volcano Fast Response System (VFRS), and BayernNetz, the seismological network of the Bavarian Seismological Service (Erdbebendienst Bayern; http://www.erdbeben-in-bayern.de).
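    The XPath-based indexing described above can be illustrated with a small sketch: a value is extracted from each stored XML resource with an XPath expression, and the extracted values form an index that SQL queries can filter on without re-parsing documents. This is not SeisHub's actual code or schema; the element names, example document, and resource URI are assumptions for illustration only.

    ```python
    # Hypothetical sketch of an XPath-driven index over stored XML resources.
    # Element names and the URI are illustrative, not SeisHub's real schema.
    import xml.etree.ElementTree as ET

    def index_value(xml_text, xpath):
        """Extract the value an index would store for one XML resource."""
        root = ET.fromstring(xml_text)
        node = root.find(xpath)
        return node.text if node is not None else None

    doc = """<event>
      <origin><latitude>47.75</latitude><longitude>12.12</longitude></origin>
      <magnitude>2.3</magnitude>
    </event>"""

    # The index maps a resource URI to the extracted value, so relational
    # queries can filter resources without touching the XML documents again.
    index = {"/xml/seismology/event/1": index_value(doc, "./origin/latitude")}
    print(index)
    ```

    Because the XPath expression is data rather than code, new indexes of this kind can be registered at runtime, which matches the behaviour the abstract describes.
    
    
    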

    Framework for ubiquitous and voice-enabled web applications development

    ABSTRACT: The number of devices capable of connecting to the Internet is growing rapidly. Currently available are mobile phones based on WAP (Wireless Application Protocol) or i-mode technology, personal digital assistants, Internet kiosks, conventional telephones with voice-based web access, interactive-television devices, and household appliances, among others. Developing a version of a web application for each web-enabled device is not feasible; at the same time, developing web applications that display appropriately while making the most of each device's capabilities is a complex task. This thesis proposes a framework, understood as a generic working scheme, to guide the development of pervasive web portals that can be accessed from multiple devices, avoiding the development of one portal per device and taking into account the large variations that may exist in their capabilities. In addition, a device-grouping model is proposed that defines a set of groups and the characteristics associated with them, so that output formats can later be generated for these groups of devices rather than for individual ones, and a reference architecture for the development of pervasive applications has been proposed and validated, one that avoids dependence on server technologies and incorporates the grouping solution described above. The purpose of Ubiquitous or Pervasive Computing, an emergent paradigm of personalized computation, is to obtain device interoperability under different conditions: the devices were designed for different purposes, by different companies, or in different technological generations.
    The ever-increasing market of web-enabled devices has brought up diverse challenges related to the difficulty of visualizing content in a unified form for diverse clients, while at the same time taking into account the great differences in the capacities of these devices. It is not feasible to develop a separate application for each of these devices, simply because the number of different devices is too high and still growing. In the analysis of existing proposals dealing with the modelling of ubiquitous web applications, the link between the logical and conceptual modelling and the physical modelling of the applications is not clear enough, and there is no way to specify the context aspects related to web access from these devices. On the other hand, the available commercial products are supplier-specific, so every future platform change would be a costly and painstaking process. In this thesis we present a framework for the development of web applications that can be accessed from different types of devices, such as PCs, PDAs, mobile phones based on diverse technologies (like WAP and i-mode), and conventional telephones that access the web through voice gateways and voice portals. The proposed framework serves as a guide for the development of this type of application, and it can be deployed to different server configurations and software development technologies. To this end, diverse theoretical elements related to the dynamic generation of information that can be accessed by devices are described, as well as the technologies involved, whose hardware, software, and connectivity characteristics vary remarkably. The theoretical study was carried out in parallel with tests based on the different technologies used. A multilingual ubiquitous traffic-information portal was used to test the theory in an operational environment.
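    The device-grouping idea can be sketched as a classifier that maps a client's User-Agent to one of a few device groups and then selects the markup format generated for that group. The group names, matching patterns, and format choices below are illustrative assumptions, not the thesis' actual grouping model.

    ```python
    # Illustrative sketch of device grouping: one format per group, not per
    # device. Groups, patterns, and formats are assumptions for illustration.
    DEVICE_GROUPS = {
        "wap": ["wap", "wml"],          # WAP phones
        "pda": ["windows ce", "palm"],  # personal digital assistants
        "voice": ["voicexml"],          # voice gateways / voice portals
    }

    FORMATS = {"wap": "WML", "pda": "cHTML", "voice": "VoiceXML", "desktop": "XHTML"}

    def classify(user_agent):
        """Return the device group a client belongs to, defaulting to desktop."""
        ua = user_agent.lower()
        for group, patterns in DEVICE_GROUPS.items():
            if any(p in ua for p in patterns):
                return group
        return "desktop"

    print(FORMATS[classify("Nokia7110 (WAP 1.1)")])
    ```

    The point of the design is that adding a new device only means adding a pattern to an existing group, rather than developing a new version of the portal.
    
    
    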

    Model based test suite minimization using metaheuristics

    Software testing is one of the most widely used methods for quality assurance and fault detection. However, it is also one of the most expensive, tedious, and time-consuming activities in the software development life cycle. Code-based and specification-based testing have been practiced for almost four decades. Model-based testing (MBT) is a relatively new approach to software testing in which software models, as opposed to other artifacts (e.g. source code), are used as the primary source of test cases. Models are simplified representations of a software system and are cheaper to execute than the original or deployed system. The main objective of the research presented in this thesis is the development of a framework for improving the efficiency and effectiveness of test suites generated from UML models. It focuses on three activities: transformation of an Activity Diagram (AD) model into a Colored Petri Net (CPN) model, generation and evaluation of an AD-based test suite, and optimization of that test suite. The Unified Modeling Language (UML) is a de facto standard for software system analysis and design. UML models can be categorized into structural and behavioral models. The AD is a behavioral type of UML model, and since the major revision in UML version 2.x it has a new Petri-net-like semantics. It has a wide application scope, including embedded, workflow, and web-service systems. For this reason, this thesis concentrates on AD models. The informal semantics of UML in general, and of AD in particular, is a major challenge in the development of UML-based verification and validation tools. One solution to this challenge is to transform a UML model into an executable formal model. In the thesis, a three-step transformation methodology is proposed for resolving ambiguities in an AD model and then transforming it into a CPN representation, a well-known formal language with extensive tool support.
    Test case generation is one of the most critical and labor-intensive activities in testing processes. The flow-oriented semantics of AD suits modeling both sequential and concurrent systems. The thesis presents a novel technique to generate test cases from an AD using a stochastic algorithm. In order to determine whether the generated test suite is adequate, two test suite adequacy analysis techniques, based on structural coverage and mutation, are proposed. In terms of structural coverage, two separate coverage criteria are also proposed to evaluate the adequacy of the test suite from both perspectives, sequential and concurrent. Mutation analysis is a fault-based technique to determine whether the test suite is adequate for detecting particular types of faults. Four categories of mutation operators are defined to seed specific faults into the mutant model. Another focus of the thesis is improving test suite efficiency without compromising effectiveness. One way of achieving this is identifying and removing redundant test cases. It has been shown that test suite minimization by removing redundant test cases is a combinatorial optimization problem. An evolutionary-computation-based test suite minimization technique is developed to address this problem, and its performance is empirically compared with other well-known heuristic algorithms. Additionally, statistical analysis is performed to characterize the fitness landscape of test suite minimization problems. The proposed test suite minimization solution is extended to multi-objective minimization. As redundancy is contextual, different criteria and their combinations can significantly change the solution test suite. Therefore, the last part of the thesis describes an investigation into multi-objective test suite minimization and optimization algorithms. The proposed framework is demonstrated and evaluated using prototype tools and case study models.
    Empirical results have shown that the techniques developed within the framework are effective in model-based test suite generation and optimization.
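    Test suite minimization as a combinatorial optimization problem can be made concrete with a small sketch. The thesis uses evolutionary algorithms; the code below instead shows the classic greedy set-cover heuristic that such algorithms are typically compared against: repeatedly keep the test that covers the most still-uncovered requirements. The example suite and requirement names are invented for illustration.

    ```python
    # Greedy baseline for test-suite minimization viewed as set cover.
    # The thesis' evolutionary approach would search the same space; this
    # heuristic is shown only as the standard point of comparison.
    def minimize(suite, requirements):
        """suite: {test name: set of covered requirements}."""
        uncovered, kept = set(requirements), []
        while uncovered:
            # Pick the test covering the most currently uncovered requirements.
            best = max(suite, key=lambda t: len(suite[t] & uncovered))
            if not suite[best] & uncovered:
                break  # remaining requirements cannot be covered by any test
            kept.append(best)
            uncovered -= suite[best]
        return kept

    suite = {"t1": {"a", "b"}, "t2": {"b"}, "t3": {"c"}, "t4": {"a", "c"}}
    print(minimize(suite, {"a", "b", "c"}))
    ```

    Greedy is fast but can miss the optimum on some instances, which is one motivation for metaheuristic search; multi-objective variants additionally trade coverage against, for example, execution cost.
    
    
    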

    Standards and Tools for Model Exchange and Analysis in Systems Biology

    This work is about standards in systems biology and about support for these standards in systems biology software tools. The work is divided into three sections. The first section describes an extension to the Systems Biology Markup Language (SBML) for the storage of graphical information on biochemical reaction networks. Its first part explains the history of the extension, what it is about, and what it can be used for; its second part details implementations of the extension in the form of several different software tools. The second section deals with the SBML standard and the different aspects of its support in the COPASI software tool. COPASI is a tool for the simulation and analysis of biochemical reaction networks, and it uses SBML files as an exchange format for these reaction network models. This section highlights the different aspects of the implementation of the SBML standard, as well as of the extension to the SBML standard described in the first section. Additionally, this section describes how the functionality of COPASI, which is written in the C++ programming language, has been made available to developers using other programming languages like Java or Python, and how this functionality is used in different systems biology computer programs around the world. The third section discusses methods for the normalization and comparison of mathematical expressions and the implementation of these methods in the form of a computer program. This program is used to analyze the mathematical expressions in all models of the current release of the BioModels database. On several occasions in this text, it is demonstrated how the methods and tools described in these three sections can make a valuable contribution to research in systems biology.
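    The core idea behind normalizing expressions for comparison is that mathematically identical formulas should reduce to one canonical form. The thesis' method is more general; the toy sketch below only canonicalizes the argument order of commutative operators in a nested-tuple expression tree, which is already enough to equate simple rate laws written in different orders. The expressions are invented examples.

    ```python
    # Toy normalization of expression trees: sort the arguments of commutative
    # operators so that equivalent formulas become structurally equal.
    # A real implementation would also handle distribution, constants, etc.
    def normalize(expr):
        """expr is a leaf (str/number) or a tuple ('+'|'*'|other, args...)."""
        if isinstance(expr, tuple):
            op, *args = expr
            args = [normalize(a) for a in args]
            if op in ("+", "*"):              # commutative: canonical order
                args = sorted(args, key=repr)
            return (op, *args)
        return expr

    # Two ways of writing k1*S + k2 compare equal after normalization.
    e1 = ("+", ("*", "k1", "S"), "k2")
    e2 = ("+", "k2", ("*", "S", "k1"))
    print(normalize(e1) == normalize(e2))
    ```

    Comparing normalized forms pairwise is how duplicate or equivalent kinetic laws across a model collection could be detected.
    
    
    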

    The hybrid model and adaptive educational hypermedia frameworks

    The amount of information on the web is enormous, as is the number of users with different goals and interests. User models have been utilized by adaptive hypermedia systems in general, and adaptive educational hypermedia systems (AEHS) in particular, to personalize the information they present with respect to each individual's knowledge, background, and goals. As a result of the research described herein, a user model called the Hybrid Model has been developed. This model is both generic and abstract, and it extends other models used by AEHS by measuring users' knowledge levels with respect to different knowledge domains simultaneously, utilising well-known techniques from the world of user modelling, specifically the Overlay model (which has been modified) and the Stereotype model. Therefore, using the Hybrid Model, AEHS are not restricted to a single knowledge domain at any one time; by implementing it, those systems can manage users' knowledge globally with respect to the deployed knowledge domains. The model has been implemented experimentally in an educational hypermedia system called WHURLE (Web-based Hierarchical Universal Reactive Learning Environment) to verify its aim of managing users' knowledge globally. Moreover, this implementation has been tested successfully through a user trial as an adaptive revision guide for a Biological Anthropology course. Furthermore, the infrastructure of the WHURLE system has been modified to embrace the objective of the Hybrid Model. This has led to a novel design that provides the system with the capability of utilising different user models easily, without affecting any of its component modules.
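    The combination of an overlay model with stereotypes across multiple domains can be sketched as follows. The thresholds, stereotype names, and example domains are illustrative assumptions, not the Hybrid Model's or WHURLE's actual values.

    ```python
    # Hedged sketch of a hybrid user model: per-domain overlay scores plus a
    # stereotype derived from them. Thresholds and labels are assumptions.
    class HybridUserModel:
        def __init__(self):
            # domain -> {concept: knowledge level in [0, 1]}  (overlay part)
            self.overlay = {}

        def update(self, domain, concept, level):
            self.overlay.setdefault(domain, {})[concept] = level

        def stereotype(self, domain):
            """Stereotype part: classify the user within one domain."""
            scores = self.overlay.get(domain, {})
            avg = sum(scores.values()) / len(scores) if scores else 0.0
            if avg >= 0.7:
                return "advanced"
            if avg >= 0.4:
                return "intermediate"
            return "novice"

    um = HybridUserModel()
    um.update("anthropology", "primates", 0.9)
    um.update("anthropology", "genetics", 0.6)
    um.update("statistics", "anova", 0.2)
    print(um.stereotype("anthropology"), um.stereotype("statistics"))
    ```

    Because the overlay is keyed by domain, the same user can be "advanced" in one domain and "novice" in another at the same time, which is the restriction the Hybrid Model removes from single-domain AEHS.
    
    
    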

    The 4th Conference of PhD Students in Computer Science


    Digital plan lodgement and dissemination

    In Australia in recent years there has been increasing demand for more streamlined lodgement of cadastral plans and for their later dissemination. There are a number of approaches to meeting this demand, one of which is developed in detail in this dissertation. The current status of digital lodgement and Digital Cadastral Databases (DCDB) throughout Australia and New Zealand is reviewed. Each of the states and territories in Australia, and also New Zealand, is examined, looking at the process involved in the lodgement of survey plans and the state of the DCDB in each jurisdiction. From this examination the key issues in digital lodgement and dissemination are extracted, and a needs analysis for an Australia-wide generic system is carried out. This needs analysis is directed at technological change allied with sound cadastral principles. Extensible Markup Language (XML) is considered for the storage and transport of all the required data and to facilitate the dissemination of information over the Internet. The benefits of using XML are comprehensive, leading to its selection along with the related technologies LandXML, Extensible Structured Query Language (XSQL), and Extensible Stylesheet Language (XSL). Vector graphics are introduced as the means to display plans and maps on the Internet. A number of vector standards and web mapping solutions are compared to determine the most suitable for this project. A new standard developed by the World Wide Web Consortium (W3C), Scalable Vector Graphics (SVG), is chosen. A prototype web interface and the underlying database and web server were developed, using Oracle as the database and Apache as the web server. Each aspect of the development is described, starting with the installation and configuration of the database, the web server, and the XSQL servlet. Testing was undertaken using LandXML cadastral data and displaying plans using SVG.
    Both Internet Explorer and Mozilla were trialled as the web browser, with Mozilla being chosen because of incompatibilities between Internet Explorer, LandXML, and SVG. An operational pilot was created. At this stage it requires manual intervention to centre and maximise a plan in the display area. The result indicates that an automated system is feasible, and this dissertation provides a basis for further development by Australian land administration organisations.
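    The XML-to-SVG display step can be sketched in miniature: read parcel corner coordinates from a LandXML-like fragment and emit an SVG polygon. The element names below are heavily simplified assumptions, not the real LandXML schema (the dissertation's pipeline used XSQL and XSLT rather than Python), and the viewBox is hard-coded for the example rather than computed from a bounding box.

    ```python
    # Illustrative sketch: cadastral parcel corners (LandXML-like fragment,
    # simplified element names) rendered as an SVG polygon for the browser.
    import xml.etree.ElementTree as ET

    parcel_xml = """<Parcel name="Lot 1">
      <Coord x="0" y="0"/><Coord x="40" y="0"/>
      <Coord x="40" y="25"/><Coord x="0" y="25"/>
    </Parcel>"""

    def parcel_to_svg(xml_text):
        parcel = ET.fromstring(xml_text)
        points = " ".join(f'{c.get("x")},{c.get("y")}'
                          for c in parcel.findall("Coord"))
        return (f'<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 40 25">'
                f'<polygon points="{points}" fill="none" stroke="black"/></svg>')

    print(parcel_to_svg(parcel_xml))
    ```

    An automated version of the pilot's centre-and-maximise step would compute the viewBox from the minimum and maximum coordinates instead of hard-coding it.
    
    
    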

    From Model Specification to Simulation of Biologically Constrained Networks of Spiking Neurons.

    A declarative extensible markup language (SpineML) for describing the dynamics, networks, and experiments of large-scale spiking neural network simulations is described, which builds upon the NineML standard. It utilises a level of abstraction that targets point-neuron representation but addresses the limitations of existing tools by allowing arbitrary dynamics to be expressed. The use of XML promotes model sharing, is human-readable, and allows collaborative working. The syntax uses a high-level, self-explanatory format which allows straightforward code generation or translation of a model description to a native simulator format. This paper demonstrates the use of code generation to translate, simulate, and reproduce the results of a benchmark model across a range of simulators. The flexibility of the SpineML syntax is highlighted by reproducing a pre-existing, biologically constrained model of a neural microcircuit (the striatum). The SpineML code is open source and is available at http://bimpa.group.shef.ac.uk/SpineML.
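    Code generation from a declarative XML description can be sketched minimally: read a neuron's declared derivative from a SpineML-like fragment and emit simulator source for it. The element names and the forward-Euler update are simplifying assumptions for illustration, not SpineML's actual schema or the toolchain's actual output.

    ```python
    # Hedged sketch of code generation from a declarative model description.
    # Element names are simplified stand-ins for a SpineML-like component.
    import xml.etree.ElementTree as ET

    component = """<ComponentClass name="LIF">
      <TimeDerivative variable="v" expr="(I - v) / tau"/>
    </ComponentClass>"""

    def generate_update(xml_text):
        """Emit Python source for one forward-Euler step of the declared ODE."""
        cc = ET.fromstring(xml_text)
        td = cc.find("TimeDerivative")
        var = td.get("variable")
        return (f'def step_{cc.get("name")}({var}, I, tau, dt):\n'
                f'    return {var} + dt * ({td.get("expr")})')

    code = generate_update(component)
    exec(code)  # the generated function becomes callable in this module
    print(step_LIF(0.0, 1.0, 10.0, 0.1))
    ```

    A real generator would target each simulator's native format instead of Python, but the principle is the same: the dynamics live in data, and the executable code is derived from them.
    
    
    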

    ImMApp: An immersive database of sound art

    The ImMApp (Immersive Mapping Application) thesis addresses contemporary and historical sound art from a position informed, on one hand, by post-structural critical theory and, on the other, by a practice-based exploration of contemporary digital technologies (MySQL, XML, XSLT, X3D). It proposes a critical ontological schema derived from Michel Foucault's Archaeology of Knowledge (1972) and applies this to pre-existing information resources dealing with sound art. First, an analysis of print-based discourses (Sound by Artists, Lander and Lexier (1990); Noise, Water, Meat, Kahn (2001); and Background Noise: Perspectives on Sound Art, LaBelle (2006)) is carried out according to Foucauldian notions of genealogy, subject positions, the statement, institutional affordances, and the productive nature of discursive formation. The discursive field (the archive) presented by these major canonical texts is then contrasted with a formulation derived from Gilles Deleuze and Félix Guattari: that of a 'minor' history of sound art practices. This is then extended through media theory (McLuhan, Kittler, Manovich) into a critique of two digital sound art resources: the Australian Sound Design Project (Bandt and Paine, 2005) and soundtoys.net (Stanza, 1998). The divergences between the two forms of information technology (print vs. digital) are discussed. The means by which such digitised methodologies may enhance Foucauldian discourse analysis points onwards towards the two practice-based elements of the thesis. Surface, the first iterative part, is a web-browser-based database built on an Apache/MySQL/XML architecture. It is the most extensive mapping of sound art undertaken to date and extends the theoretical framework discussed above into the digital domain. Immersion, the second part, is a re-presentation of this material in an immersive digital environment, following the transformation of the source material via XSLT into X3D.
    Immersion is a real-time, large-format video, surround-sound (5.1) installation, and the thesis concludes with a discussion of how this outcome has articulated Foucauldian archaeological method and unframed pre-existing notions of the nature of sound art.