
    Modular System for Shelves and Coasts (MOSSCO v1.0) - a flexible and multi-component framework for coupled coastal ocean ecosystem modelling

    Shelf and coastal sea processes extend from the atmosphere through the water column and into the sea bed. These processes are driven by physical, chemical, and biological interactions at local scales, and they are influenced by transport across strong spatial gradients. The linkages between domains and many different processes are not adequately described in current model systems, whose limited level of integration partly reflects a lack of modularity and flexibility; this shortcoming hinders the exchange of data and model components and has historically privileged specific physical driver models. We here present the Modular System for Shelves and Coasts (MOSSCO, http://www.mossco.de), a novel domain and process coupling system tailored to, but not limited to, the coupling challenges of and applications in the coastal ocean. MOSSCO builds on the existing coupling technology Earth System Modeling Framework (ESMF) and on the Framework for Aquatic Biogeochemical Models (FABM), thereby creating a unique level of modularity in both domain and process coupling; the new framework adds rich metadata, flexible scheduling, configurations that allow several tens of models to be coupled, and tested setups for coupled coastal applications. In this way, MOSSCO addresses the technology needs of a growing marine coastal Earth System community that encompasses very different disciplines, numerical tools, and research questions. Comment: 30 pages, 6 figures, submitted to Geoscientific Model Development Discussions.
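    The coupling concept described above lends itself to a small illustration. The following Python sketch shows, under stated assumptions, how independently developed components might exchange named fields through a coupler with a simple schedule; the class names and the hydrodynamics/biogeochemistry pairing are hypothetical and do not represent MOSSCO's actual ESMF/FABM-based (Fortran) implementation.

```python
# Minimal, illustrative sketch of domain/process coupling with a simple
# scheduler. Names (Component, Coupler) are hypothetical and do not
# reflect MOSSCO's actual ESMF/FABM implementation.

class Component:
    """A model component that exports and imports named fields."""
    def __init__(self, name, exports):
        self.name = name
        self.exports = dict(exports)   # fields this component provides
        self.imports = {}              # fields received from other components

    def run(self, t):
        # A real component would advance its state here; this stub only
        # records the time it was last run.
        self.exports["last_run"] = t


class Coupler:
    """Exchanges fields between components and schedules their execution."""
    def __init__(self, couplings):
        # couplings: list of (source, field, target) triples
        self.couplings = couplings

    def exchange(self):
        for source, field, target in self.couplings:
            if field in source.exports:
                target.imports[field] = source.exports[field]

    def run(self, components, t_end, dt):
        t = 0.0
        while t < t_end:
            for comp in components:
                comp.run(t)
            self.exchange()
            t += dt


# Hypothetical setup: a hydrodynamic driver feeding temperature to a
# biogeochemistry component.
ocean = Component("hydrodynamics", {"temperature": 10.0})
bgc = Component("biogeochemistry", {})
coupler = Coupler([(ocean, "temperature", bgc)])
coupler.run([ocean, bgc], t_end=3600.0, dt=360.0)
print(bgc.imports)   # {'temperature': 10.0, ...}
```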

    Interpretation at the controller's edge: designing graphical user interfaces for the digital publication of the excavations at Gabii (Italy)

    This paper discusses the authors’ approach to designing an interface for the Gabii Project’s digital volumes that attempts to fuse elements of traditional synthetic publications and site reports with rich digital datasets. Archaeology, and classical archaeology in particular, has long engaged with questions of the formation and lived experience of towns and cities. Such studies might draw on evidence of local topography, the arrangement of the built environment, and the placement of architectural details, monuments and inscriptions (e.g. Johnson and Millett 2012). Fundamental to the continued development of these studies is the growing body of evidence emerging from new excavations. Digital techniques for recording evidence “on the ground”, notably SfM (structure from motion, also known as close-range photogrammetry) for the creation of detailed 3D models, and for scene-level modeling in 3D, have advanced rapidly in recent years. These parallel developments have opened the door for approaches to the study of the creation and experience of urban space driven by a combination of scene-level reconstruction models (van Roode et al. 2012, Paliou et al. 2011, Paliou 2013) explicitly combined with detailed SfM- or scanning-based 3D models representing stratigraphic evidence. It is essential to understand the subtle but crucial impact that the design of the user interface has on the interpretation of these models. In this paper we focus on the impact of design choices for the user interface and make connections between those choices and the broader discourse in archaeological theory surrounding the practice of creating and consuming archaeological knowledge. As a case in point, we take the prototype interface being developed within the Gabii Project for the publication of the Tincu House. In discussing our own evolving practices of engagement with the archaeological record created at Gabii, we highlight some of the challenges of undertaking theoretically situated user interface design and their implications for the publication and study of archaeological materials.

    Mapping Big Data into Knowledge Space with Cognitive Cyber-Infrastructure

    Big data research has attracted great attention in science, technology, industry and society. It is developing alongside the evolving scientific paradigm, the fourth industrial revolution, and the transformational innovation of technologies. However, its nature and fundamental challenges have not yet been recognized, and its own methodology has not yet been formed. This paper explores and answers the following questions: What is big data? What are the basic methods for representing, managing and analyzing big data? What is the relationship between big data and knowledge? Can we find a mapping from big data into knowledge space? What kind of infrastructure is required to support not only big data management and analysis but also knowledge discovery, sharing and management? What is the relationship between big data and the science paradigm? What is the nature and fundamental challenge of big data computing? A multi-dimensional perspective is presented toward a methodology of big data computing. Comment: 59 pages.

    Software Infrastructure for Natural Language Processing

    We classify and review current approaches to software infrastructure for research, development and delivery of NLP systems. The task is motivated by a discussion of current trends in the field of NLP and Language Engineering. We describe a system called GATE (a General Architecture for Text Engineering) that provides a software infrastructure on top of which heterogeneous NLP processing modules may be evaluated and refined individually, or may be combined into larger application systems. GATE aims to support both researchers and developers working on component technologies (e.g. parsing, tagging, morphological analysis) and those working on developing end-user applications (e.g. information extraction, text summarisation, document generation, machine translation, and second language learning). GATE promotes reuse of component technology, permits specialisation and collaboration in large-scale projects, and allows for the comparison and evaluation of alternative technologies. The first release of GATE is now available; see http://www.dcs.shef.ac.uk/research/groups/nlp/gate/ Comment: LaTeX, uses aclap.sty, 8 pages.
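    As an illustration of the component-based pipeline idea that infrastructures such as GATE embody, the hedged Python sketch below chains independent processing resources over a shared document representation. It is a hypothetical analogue, not GATE's actual Java API, and the tokenizer and tagger functions are invented for the example.

```python
# Generic illustration of the modular-pipeline idea: independent
# processing components share a common document representation and can
# be chained or swapped. Hypothetical sketch, not GATE's Java API.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Document:
    text: str
    annotations: dict = field(default_factory=dict)   # e.g. {"tokens": [...]}


# A processing resource is simply a function Document -> Document.
ProcessingResource = Callable[[Document], Document]


def tokenizer(doc: Document) -> Document:
    doc.annotations["tokens"] = doc.text.split()
    return doc


def uppercase_tagger(doc: Document) -> Document:
    # Toy "tagger": mark tokens that start with a capital letter.
    tokens = doc.annotations.get("tokens", [])
    doc.annotations["capitalised"] = [t for t in tokens if t[:1].isupper()]
    return doc


def run_pipeline(doc: Document, resources: List[ProcessingResource]) -> Document:
    for resource in resources:
        doc = resource(doc)
    return doc


doc = run_pipeline(Document("GATE supports modular NLP pipelines"),
                   [tokenizer, uppercase_tagger])
print(doc.annotations["capitalised"])   # ['GATE', 'NLP']
```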

    Fourteenth Biennial Status Report: March 2017 - February 2019


    Integration of an adaptive infotainment system in a vehicle and validation in real driving scenarios

    An increasing number of services, functionalities, and interfaces are being incorporated into current vehicles, and they may overload the driver’s capacity to perform the primary driving task adequately. For this reason, a strategy for easing driver interaction with the infotainment system must be defined, and a good balance between road safety and driver experience must also be achieved. An adaptive Human Machine Interface (HMI) that manages the presentation of information and restricts drivers’ interaction in accordance with driving complexity was designed and evaluated. For this purpose, the driving complexity value employed as a reference was computed by a predictive model, and the adaptive interface was designed following a set of proposed HMI principles. The system was validated by performing acceptance and usability tests in real driving scenarios. Results showed that the system performs well in real driving scenarios, and positive feedback from participants endorsed the benefits of integrating this kind of system with regard to driving experience and road safety.
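    To make the adaptation logic concrete, the following hedged Python sketch maps a predicted driving-complexity value to interaction restrictions. The toy predictive model, feature names, and thresholds are hypothetical placeholders, not the ones used in the paper.

```python
# Hedged sketch of the adaptive-HMI idea: a predicted driving-complexity
# value selects how much infotainment interaction is allowed. All
# features, coefficients, and thresholds below are invented for
# illustration; the paper's actual model and HMI rules are not shown.

def predict_complexity(speed_kmh: float, traffic_density: float,
                       curvature: float) -> float:
    """Toy stand-in for a predictive model; returns a value in [0, 1]."""
    score = 0.004 * speed_kmh + 0.5 * traffic_density + 0.3 * curvature
    return max(0.0, min(1.0, score))


def hmi_policy(complexity: float) -> dict:
    """Map driving complexity to presentation/interaction restrictions."""
    if complexity < 0.3:      # low complexity: full interaction allowed
        return {"menu_depth": "full", "notifications": "all", "touch_input": True}
    elif complexity < 0.7:    # medium complexity: reduce distraction
        return {"menu_depth": "shallow", "notifications": "important", "touch_input": True}
    else:                     # high complexity: defer notifications, lock menus
        return {"menu_depth": "locked", "notifications": "deferred", "touch_input": False}


complexity = predict_complexity(speed_kmh=110, traffic_density=0.6, curvature=0.2)
print(complexity, hmi_policy(complexity))
```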

    ImageJ2: ImageJ for the next generation of scientific image data

    ImageJ is an image analysis program extensively used in the biological sciences and beyond. Due to its ease of use, recordable macro language, and extensible plugin architecture, ImageJ enjoys contributions from non-programmers, amateur programmers, and professional developers alike. Enabling such a diversity of contributors has resulted in a large community that spans the biological and physical sciences. However, a rapidly growing user base, diverging plugin suites, and technical limitations have revealed a clear need for a concerted software engineering effort to support emerging imaging paradigms and to ensure the software can handle the requirements of modern science. Faced with these challenges, ImageJ is at a critical development crossroads. We present ImageJ2, a total redesign of ImageJ that offers a host of new functionality. It separates concerns, fully decoupling the data model from the user interface. It emphasizes integration with external applications to maximize interoperability. Its robust new plugin framework allows everything from image formats to scripting languages to visualization to be extended by the community. The redesigned data model supports arbitrarily large, N-dimensional datasets, which are increasingly common in modern image acquisition. Despite the scope of these changes, backwards compatibility is maintained, so the new functionality can be seamlessly integrated with the classic ImageJ interface, allowing users and developers to migrate to the new methods at their own pace. ImageJ2 provides a framework engineered for flexibility, intended to support these requirements and to accommodate future needs.
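    The extensibility claim can be illustrated with a short, hedged Python analogue of a plugin registry, where new image-format readers are contributed without modifying the core dispatch. This is a conceptual sketch only, not ImageJ2's actual SciJava plugin framework, which is Java-based; the extension point and reader below are hypothetical.

```python
# Conceptual sketch of the extensibility model: a registry where new
# capabilities (here, file-format readers) are contributed without
# touching the core. Illustrative analogue only, not ImageJ2's actual
# plugin mechanism.

from typing import Callable, Dict

_FORMAT_READERS: Dict[str, Callable[[str], list]] = {}


def register_format(extension: str):
    """Decorator contributing a reader for a file extension to the core."""
    def decorator(reader: Callable[[str], list]):
        _FORMAT_READERS[extension] = reader
        return reader
    return decorator


@register_format(".csv")
def read_csv(path: str) -> list:
    # Hypothetical reader: parse rows of comma-separated numbers.
    with open(path) as fh:
        return [[float(v) for v in line.split(",")] for line in fh if line.strip()]


def open_image(path: str) -> list:
    """Core dispatch: the data model never needs to know about formats."""
    for extension, reader in _FORMAT_READERS.items():
        if path.endswith(extension):
            return reader(path)
    raise ValueError(f"no reader registered for {path}")


# Usage (hypothetical file): open_image("counts.csv") dispatches to read_csv.
```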

    The apparatus of digital archaeology

    Digital Archaeology is predicated upon an ever-changing set of apparatuses – technological, methodological, software, hardware, material, immaterial – which in their own ways and to varying degrees shape the nature of Digital Archaeology. Our attention, however, is perhaps inevitably focussed more closely on research questions, the choice of data, and the kinds of analyses and outputs we produce. In the process we tend to overlook the effects the tools themselves have on the archaeology we do, beyond the immediate consequences of the digital. This paper introduces cognitive artefacts as a means of addressing the apparatus more directly within the context of the developing archaeological digital ecosystem. It argues that a critical appreciation of our computational cognitive artefacts is key to understanding their effects both on our own cognition and on the creation of archaeological knowledge. In the process, it defines a form of cognitive digital archaeology in terms of four distinct methods for extracting cognition from the digital apparatus, layer by layer.