
    Project Hydra: Designing & Building a Reusable Framework for Multipurpose, Multifunction, Multi-institutional Repository-Powered Solutions

    4th International Conference on Open Repositories, session: Fedora User Group Presentations, 2009-05-20, 03:30 PM – 05:00 PM. There is a clear business need in higher education for a flexible, reusable application framework that can support the rapid development of multiple systems tailored to distinct needs but powered by a common underlying repository. Recognizing this common need, Stanford University, the University of Hull and the University of Virginia are collaborating on "Project Hydra", a three-year effort to create an application and middleware framework that, in combination with an underlying Fedora repository, will provide a reusable environment for running multifunction, multipurpose repository-powered solutions. This paper details the collaborators' functional and technical design for such a framework and demonstrates the progress made to date on the initiative.

    Workflows for Quantitative Data Analysis in The Social Sciences

    Background is given on how statistical analysis is used by quantitative social scientists. Developing statistical analyses requires substantial effort, yet there are important limitations in current practice. This has motivated the authors to create a more systematic and effective methodology with supporting tools. An approach to modelling quantitative data analysis in the social sciences is presented. Analysis scripts are treated abstractly as mathematical functions and concretely as web services. This allows individual scripts to be combined into high-level workflows. A comprehensive set of tools allows workflows to be defined, automatically validated and verified, and automatically implemented. The workflows expose opportunities for parallel execution, support proper fault handling, and can be realised by non-technical users. Services, workflows and datasets can also be readily shared. The approach is illustrated with a realistic case study that analyses occupational position in relation to health.
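
    To make the function-composition idea concrete, here is a minimal Python sketch (not the authors' toolset; all function names and data are invented) in which independent analysis scripts are treated as functions and composed into a workflow that exposes an opportunity for parallel execution:

```python
# Minimal sketch: analysis scripts as composable functions in a workflow.
# All names and data are illustrative placeholders.
from concurrent.futures import ThreadPoolExecutor

def recode_occupation(raw):          # script 1: derive occupational position
    return [{"id": r["id"], "occ": r["occ"].strip().lower()} for r in raw]

def health_score(raw):               # script 2: derive a health measure
    return [{"id": r["id"], "health": float(r["health"])} for r in raw]

def merge_and_model(occ, health):    # script 3: combine and analyse
    h = {r["id"]: r["health"] for r in health}
    return [(r["occ"], h[r["id"]]) for r in occ if r["id"] in h]

def run_workflow(raw):
    # The two derivation steps are independent, so the workflow
    # exposes an opportunity for parallel execution.
    with ThreadPoolExecutor() as pool:
        occ_f = pool.submit(recode_occupation, raw)
        hlt_f = pool.submit(health_score, raw)
        return merge_and_model(occ_f.result(), hlt_f.result())

raw = [{"id": 1, "occ": " Teacher ", "health": "0.8"},
       {"id": 2, "occ": "Nurse", "health": "0.6"}]
print(run_workflow(raw))
```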

    The cyber-physical e-machine manufacturing system: virtual engineering for complete lifecycle support

    Electric machines (e-machines) will form a fundamental part of the powertrain of the future. Automotive manufacturers are keen to develop e-machine manufacturing and assembly knowledge in-house. An on-going project, which aims to deliver an e-machine pilot assembly line, is being supported by a set of virtual engineering tools developed by the Automation Systems Group at the University of Warwick. Although digital models are a useful design aid providing visualization and simulation, the opportunity being exploited in this research is to have a common model throughout the lifecycle of both the manufacturing system and the product. The vision is to have a digital twin that is consistent with the real system and not just used in the early design and deployment phases. This concept, commonly referred to as Cyber-Physical Systems (CPS), is key to realizing efficient system reconfigurability to support alternative product volumes and mixes. These tools produce modular digital models that can be rapidly modified, preventing the simulation, test, and modification processes from forming a bottleneck in the development lifecycle. In addition, they add value in more mature phases when, for example, a high-volume line based on the pilot is created, as the same models can be reused and modified as required. This paper therefore demonstrates how the application of the virtual engineering tools supports the development of a CPS, using an e-machine assembly station as a case study. The main contribution of the work is to further validate the CPS philosophy by extending the concept into practical applications in pilot production systems with prototype products.
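
    As a hedged illustration of the "common model" idea, the following Python sketch (names and numbers are invented, not from the project) shows one modular station model used first for design-phase simulation and later kept in sync with telemetry from the real station, i.e. the digital-twin phase:

```python
# Sketch: one modular digital model reused across lifecycle phases.
# All names and figures are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class StationModel:
    name: str
    cycle_time_s: float                 # design estimate, later measured
    state: dict = field(default_factory=lambda: {"mode": "design"})

    def simulate(self, parts: int) -> float:
        """Early-phase use: predict total processing time from the model."""
        return parts * self.cycle_time_s

    def sync(self, telemetry: dict) -> None:
        """Twin-phase use: the same model is updated from the real station."""
        self.state.update(telemetry)
        if "measured_cycle_time_s" in telemetry:
            self.cycle_time_s = telemetry["measured_cycle_time_s"]

winding = StationModel("stator_winding", cycle_time_s=42.0)
print(winding.simulate(parts=100))            # design-phase prediction
winding.sync({"mode": "run", "measured_cycle_time_s": 39.5})
print(winding.simulate(parts=100))            # twin-phase, refined prediction
```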

    Influence of a hybrid digital toolset on the creative behaviors of designers in early-stage design

    The purpose of this research was to investigate how diversification of the repertoire of digital design techniques affects the creative behaviors of designers in the early design phases. The principal results of practice-based pilot experiments on the subject indicate three key properties of the hybrid digital tooling strategy. The strategy features intelligent human-machine integration, facilitating three different types of synergies between the designer and the digital media: human-dominated, machine-dominated, and a balanced human-machine collaboration. This strategy also boosts the cognitive behaviors of the designer by triggering divergent, transformative and convergent design activities and allowing for work on various abstraction levels. In addition, the strategy stimulates the explorative behaviors of the designer by encouraging the production of and interaction with a wide range of design representations, including physical and digital, dynamic and static objects. Thus, working with a broader range of digital modeling techniques can positively influence the creativity of designers in the early conception stages.

    Enabling Flexibility in Process-Aware Information Systems: Challenges, Methods, Technologies

    In today’s dynamic business world, the success of a company increasingly depends on its ability to react to changes in its environment in a quick and flexible way. Companies have therefore identified process agility as a competitive advantage to address business trends like increasing product and service variability or faster time to market, and to ensure business-IT alignment. Along this trend, a new generation of information systems has emerged: so-called process-aware information systems (PAIS), like workflow management systems, case handling tools, and service orchestration engines. With this book, Reichert and Weber address these flexibility needs and provide an overview of PAIS with a strong focus on methods and technologies fostering flexibility in all phases of the process lifecycle (i.e., modeling, configuration, execution and evolution). Their presentation is divided into six parts. Part I starts with an introduction of fundamental PAIS concepts and establishes the context of process flexibility in the light of practical scenarios. Part II focuses on flexibility support for pre-specified processes, the currently predominant paradigm in the field of business process management (BPM). Part III details flexibility support for loosely specified processes, which only partially specify the process model at build-time, while decisions regarding the exact specification of certain model parts are deferred to run-time. Part IV deals with user- and data-driven processes, which aim at a tight integration of processes and data, and hence enable increased flexibility compared to traditional PAIS. Part V introduces existing technologies and systems for the realization of a flexible PAIS. Finally, Part VI summarizes the main ideas of the book and gives an outlook on advanced flexibility issues. The attached PDF file gives a preview of Chapter 3 of the book, which explains the book's overall structure.
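
    The notion of loosely specified processes can be illustrated with a small sketch. The following Python example (invented for illustration, not taken from the book) leaves one step of a process model underspecified at build-time and binds it to a concrete activity only at run-time:

```python
# Sketch of a loosely specified process: a placeholder step is refined at
# run-time. All activity names and thresholds are invented.
def review_claim(claim):
    return {**claim, "reviewed": True}

def assess_standard(claim):      # candidate run-time refinement 1
    return {**claim, "payout": claim["amount"]}

def assess_fraud_check(claim):   # candidate run-time refinement 2
    return {**claim, "payout": 0, "escalated": True}

# Build-time model: the second step is deliberately underspecified.
process = [review_claim, "ASSESS_PLACEHOLDER"]

def execute(process, claim):
    for step in process:
        if step == "ASSESS_PLACEHOLDER":
            # Late binding: choose the concrete activity from run-time data.
            step = assess_fraud_check if claim["amount"] > 10_000 else assess_standard
        claim = step(claim)
    return claim

print(execute(process, {"amount": 500}))
print(execute(process, {"amount": 50_000}))
```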

    Domain Specific Languages for Managing Feature Models: Advances and Challenges

    Managing multiple and complex feature models is a tedious and error-prone activity in software product line engineering. Despite many advances in formal methods and analysis techniques, the supporting tools and APIs are neither easily usable together nor unified. In this paper, we report on the development and evolution of the Familiar domain-specific language (DSL). Its toolset is dedicated to the large-scale management of feature models through good support for separating concerns, composing feature models and scripting manipulations. We give an overview of various applications of Familiar and discuss both its advantages and identified drawbacks. We then identify salient challenges to improve such DSL support in the near future.
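
    To give a flavour of the kind of manipulation such a DSL automates, the following Python sketch (which mimics the intent only, not Familiar's actual syntax or semantics) represents two feature models and composes them by union merge:

```python
# Sketch of feature-model composition; names and the representation are
# illustrative, not Familiar's actual notation.
def fm(root, edges):
    """A feature model as (root, {child: parent}) for illustration."""
    return {"root": root, "edges": dict(edges)}

def features(model):
    return {model["root"], *model["edges"].keys(), *model["edges"].values()}

def merge_union(a, b):
    """Union merge: keep every feature and edge from both models."""
    assert a["root"] == b["root"], "models must share a root to merge"
    return fm(a["root"], {**a["edges"], **b["edges"]})

phone_a = fm("Phone", {"Screen": "Phone", "OLED": "Screen"})
phone_b = fm("Phone", {"Screen": "Phone", "Camera": "Phone"})
merged = merge_union(phone_a, phone_b)
print(sorted(features(merged)))   # ['Camera', 'OLED', 'Phone', 'Screen']
```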

    Data-driven design for Architecture and Environment Integration

    Rapid urbanization and related land cover and land use changes are primary causes of climate change and of environmental and ecosystem degradation. Sustainability problems are becoming increasingly complex due to these developments. At the same time, vast amounts of data on urbanization, construction and resulting environmental conditions are being generated. Yet it is hardly possible to gain insights for sustainable planning and design at the same rate as data is generated. Moreover, the complexity of compound sustainability problems requires interdisciplinary approaches that address multiple knowledge fields, multiple dynamics and multiple spatial, temporal and functional scales. This raises a question regarding the methods and tools available to planners and architects for tackling these complex issues. To address this problem we are developing an interdisciplinary approach, a computational framework and related workflows for multi-domain and trans-scalar modelling that integrate planning and design scales. For this article two lines of research were selected. The first focuses on understanding environments for the purpose of discovering, recovering and adapting land knowledge to different conditions and contexts. This entails an analytical data-integrated computational workflow. The second line of research focuses on designing environments and developing an approach and computational workflow for data-integrated planning and design. These two lines converge in a combined analytical and generative data-integrated computational workflow. This combined approach aims for a deep integration of architectures and environments that we call embedded architectures. In this article we discuss the two lines of research, their convergence, and further research questions.
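
    As a rough sketch of how the analytical and generative workflows converge, the following Python example (all names and thresholds are invented placeholders) feeds indicators derived by an analytical step into a generative step that proposes design parameters:

```python
# Sketch of a combined analytical + generative data-integrated workflow.
# Names, thresholds, and parameters are invented placeholders.
def analyse(site_data):
    """Analytical workflow: reduce raw observations to indicators."""
    temps = site_data["surface_temps_c"]
    return {"heat_stress": sum(temps) / len(temps) > 30.0}

def generate(indicators):
    """Generative workflow: map indicators to design parameters."""
    if indicators["heat_stress"]:
        return {"canopy_cover": 0.6, "albedo": "high"}
    return {"canopy_cover": 0.2, "albedo": "default"}

site = {"surface_temps_c": [29.0, 33.5, 31.2]}
print(generate(analyse(site)))    # analytical output feeds the generative step
```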

    Automated Analysis and Implementation of Composed Grid Services

    Service composition allows web services to be combined into new ones. Web service composition is increasingly common in mission-critical applications. It has therefore become important to verify the correctness of web service composition using formal methods. The composition of grid services is a similar but newer goal. We have previously developed an abstract graphical notation called CRESS for describing composite grid services. We have demonstrated that it is feasible to automatically generate service implementations as well as formal specifications from CRESS descriptions. The automated service implementations use orchestration code in BPEL, along with service interfaces in WSDL and data types in XSD for all services. CRESS-generated BPEL implementations currently do not use WSRF features such as implicit endpoint references for WS-Resources and interfacing to standard WSRF port types. CRESS-generated formal models use the standardised process algebra LOTOS. Service behaviour is modelled by processes, while service data types are modelled as abstract data types. Simulation and validation of the generated LOTOS specifications can be performed. In this paper, we illustrate how CRESS can be further extended to improve its generation of service compositions, specifically for WSRF services implemented using Globus Toolkit 4. We also show how to facilitate use of the generated LOTOS specifications with the CADP toolbox.
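
    For readers unfamiliar with orchestration, the following Python analogue (service names are hypothetical, not from the paper) captures the pattern that generated orchestration code such as BPEL expresses: partner services invoked in sequence, with faults routed to a handler:

```python
# Python analogue of a service orchestration with fault handling.
# Service names and the quota rule are invented for illustration.
class ServiceFault(Exception):
    pass

def allocate_nodes(job):
    if job["nodes"] > 64:
        raise ServiceFault("quota exceeded")
    return {**job, "allocated": True}

def submit(job):
    return {**job, "state": "queued"}

def compose(job):
    """Orchestration: sequence of invocations with a fault handler."""
    try:
        return submit(allocate_nodes(job))
    except ServiceFault as fault:
        return {"state": "rejected", "reason": str(fault)}

print(compose({"nodes": 8}))
print(compose({"nodes": 128}))
```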

    Fine-Grained Workflow Interoperability in Life Sciences

    Recent decades have witnessed an exponential increase of available biological data due to advances in key technologies for life sciences. Specialized computing resources and scripting skills are now required to deliver results in a timely fashion: desktop computers or monolithic approaches can no longer keep pace with either the growth of available biological data or the complexity of analysis techniques. Workflows offer an accessible way to counter this trend by facilitating parallelization and distribution of computations. Given their structured and repeatable nature, workflows also provide a transparent process to satisfy the strict reproducibility standards required by the scientific method.
One of the goals of our work is to assist researchers in accessing computing resources without the need for programming or scripting skills. To this effect, we created a toolset able to integrate any command line tool into workflow systems. Out of the box, our toolset supports two widely-used workflow systems, but our modular design allows for seamless additions in order to support further workflow engines. Recognizing the importance of early and robust workflow design, we also extended a well-established, desktop-based analytics platform that contains more than two thousand tasks (each being a building block for a workflow), allows easy development of new tasks and is able to integrate external command line tools. We developed a converter plug-in that offers a user-friendly mechanism to execute workflows on distributed high-performance computing resources, an exercise that would otherwise require technical skills typically not associated with the average life scientist's profile. Our converter extension generates virtually identical versions of the same workflows, which can then be executed on more capable computing resources. That is, not only did we leverage the capacity of distributed high-performance resources and the conveniences of a workflow engine designed for personal computers, but we also circumvented the computing limitations of personal computers and the steep learning curve associated with creating workflows for distributed environments. Our converter extension has immediate applications for researchers and we showcase our results by means of three use cases relevant for life scientists: structural bioinformatics, immunoinformatics and metabolomics.
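
    A minimal sketch of the core mechanism described above, wrapping an arbitrary command-line tool as a workflow task and converting the same task for a batch scheduler, might look as follows in Python (the tool and the converter target are hypothetical, not the thesis's actual toolset):

```python
# Sketch: wrap a CLI tool as a workflow task; a converter emits an
# equivalent batch-scheduler script. Names are illustrative only.
import shlex
import subprocess

def make_task(command_template):
    """Turn a CLI template into a callable workflow task."""
    def task(**params):
        cmd = shlex.split(command_template.format(**params))
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return result.stdout
    return task

# Local execution on the desktop workflow engine:
echo_task = make_task("echo {message}")
print(echo_task(message="hello-workflows"))

def convert_to_cluster_job(command_template, **params):
    """Converter sketch: emit a batch script for the same task."""
    return "#!/bin/bash\n" + command_template.format(**params) + "\n"

print(convert_to_cluster_job("echo {message}", message="hello-workflows"))
```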