
    The EnMAP user interface and user request scenarios

    EnMAP (Environmental Mapping and Analysis Program) is a German hyperspectral satellite mission providing high-quality hyperspectral image data on a timely and frequent basis. Its main objective is to investigate a wide range of ecosystem parameters encompassing agriculture, forestry, soil and geological environments, coastal zones and inland waters. The EnMAP Ground Segment will be designed, implemented and operated by the German Aerospace Center (DLR). The Applied Remote Sensing Cluster (DFD) at DLR is responsible for establishing the user interface. This paper provides details on the concept, design and functionality of the EnMAP user interface and a first analysis of potential user scenarios. The user interface consists of two online portals. The EnMAP portal (www.enmap.org) provides general EnMAP mission information. It is the central entry point for all international users interested in learning about the EnMAP mission, its objectives, status, data products and processing chains. The EnMAP Data Access Portal (EDAP) is the entry point for all EnMAP data requests and comprises a set of service functions offered to every registered user. A scientific user can task the EnMAP hyperspectral imager (HSI) for Earth observations by providing tasking parameters such as the observation area, temporal aspects and the allowed tilt angle. In the second part of the paper, different user scenarios based on these tasking parameters are presented and discussed in terms of their feasibility for scientific projects. For that purpose, a prototype of the observation planning tool enabling visualization of different user request scenarios was developed. It can be shown that the number of data takes in a given period increases with the latitude of the observation area, and that the observable area varies with the tilt angle of the satellite. Such findings can be crucial for planning remote-sensing-based projects, especially those investigating ecosystem gradients in the time domain.
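
    As a rough illustration of the tasking interface described above, the following sketch models a hypothetical tasking request carrying the parameters named in the abstract (observation area, temporal aspects, allowed tilt angle). The class name, field names and value ranges are assumptions for illustration only and do not reflect the actual EDAP interface.

# Hypothetical EnMAP-style tasking request; the class, field names and value
# ranges are illustrative assumptions, not the real EDAP interface.
from dataclasses import dataclass
from datetime import date

@dataclass
class TaskingRequest:
    # Observation area as a simple lat/lon bounding box (degrees)
    min_lat: float
    max_lat: float
    min_lon: float
    max_lon: float
    # Temporal aspects: requested acquisition window
    start: date
    end: date
    # Maximum allowed across-track tilt angle (degrees); the range is assumed
    max_tilt_deg: float

    def validate(self) -> None:
        if not -90.0 <= self.min_lat <= self.max_lat <= 90.0:
            raise ValueError("invalid latitude range")
        if not -180.0 <= self.min_lon <= self.max_lon <= 180.0:
            raise ValueError("invalid longitude range")
        if self.start > self.end:
            raise ValueError("acquisition window ends before it starts")
        if not 0.0 <= self.max_tilt_deg <= 30.0:
            raise ValueError("tilt angle outside the assumed 0-30 degree range")

# Example: a mid-latitude test site observed over one growing season
request = TaskingRequest(48.0, 49.0, 11.0, 12.5,
                         date(2024, 4, 1), date(2024, 9, 30), max_tilt_deg=30.0)
request.validate()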

    European ALMA operations: the interaction with and support to the users

    The Atacama Large Millimetre/submillimetre Array (ALMA) is one of the largest and most complicated observatories ever built. Constructing and operating an observatory at high altitude (5000 m) in a cost-effective and safe manner, with minimal effect on the environment, creates interesting challenges. Since the array will have to adapt quickly to prevailing weather conditions, ALMA will be operated exclusively in service mode. By the time of full science operations, the fundamental ALMA data products will be calibrated, deconvolved data cubes and images, but raw data and data reduction software will be made available to users as well. User support is provided by the ALMA Regional Centres (ARCs) located in Europe, North America and Japan. These ARCs constitute the interface between the user community and the ALMA observatory in Chile. For European users, the European ARC is being set up as a cluster of nodes located throughout Europe, with the main centre at the ESO Headquarters in Garching. The main centre serves as the access portal, and in synergy with the distributed network of ARC nodes, the main aim of the ARC is to optimize ALMA science output and to fully exploit this unique and powerful facility. The aim of this article is to introduce the process of proposing for observing time, the subsequent execution of the observations, and the obtaining and processing of the data in the ALMA era. The complete end-to-end process of the ALMA data flow, from proposal submission to data delivery, is described.
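
    For readability, the minimal sketch below restates the end-to-end stages mentioned in the abstract as an ordered list; the stage names paraphrase the abstract and are not taken from any ALMA software.

# Minimal sketch of the end-to-end ALMA data flow described above; the stage
# names paraphrase the abstract and are not part of any ALMA software.
PIPELINE_STAGES = [
    "proposal submission",    # user proposes for observing time
    "observation execution",  # observations are carried out in service mode
    "data processing",        # calibrated, deconvolved cubes and images are produced
    "data delivery",          # products, raw data and reduction software reach the user
]

for step, stage in enumerate(PIPELINE_STAGES, start=1):
    print(f"{step}. {stage}")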

    Resource provisioning in Science Clouds: Requirements and challenges

    Cloud computing has permeated the information technology industry in the last few years and is now emerging in scientific environments. Scientific user communities demand a broad range of computing resources, such as local clusters, high-performance computing systems, and computing grids, to satisfy the needs of their high-performance applications. Different computational models impose different workloads, and the cloud is already considered a promising paradigm for accommodating them. The scheduling and allocation of resources is always a challenging matter in any form of computation, and clouds are no exception. Scientific applications have unique features that differentiate their workloads; hence, their requirements have to be taken into account when building a Science Cloud. This paper discusses the main scheduling and resource allocation challenges for any Infrastructure as a Service provider supporting scientific applications.
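
    As a concrete, if deliberately simplified, illustration of the allocation problem raised above, the sketch below places hypothetical VM requests on hosts with a first-fit heuristic; the Host and VMRequest structures and all capacity figures are assumptions for illustration, not a real Science Cloud scheduler.

# Minimal first-fit scheduling sketch for the IaaS allocation problem
# discussed above. Hosts, requests and capacities are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    free_cpus: int
    free_mem_gb: int
    placed: list = field(default_factory=list)

@dataclass
class VMRequest:
    name: str
    cpus: int
    mem_gb: int

def first_fit(hosts, requests):
    """Place each request on the first host with enough spare capacity."""
    unplaced = []
    for req in requests:
        for host in hosts:
            if host.free_cpus >= req.cpus and host.free_mem_gb >= req.mem_gb:
                host.free_cpus -= req.cpus
                host.free_mem_gb -= req.mem_gb
                host.placed.append(req.name)
                break
        else:
            unplaced.append(req.name)  # no host could accommodate the request
    return unplaced

hosts = [Host("node-a", 16, 64), Host("node-b", 8, 32)]
requests = [VMRequest("hpc-job", 12, 48), VMRequest("batch", 8, 16),
            VMRequest("big-mem", 4, 40)]
print("unplaced:", first_fit(hosts, requests))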

    Evaluation of a Telerehabilitation System for Community-Based Rehabilitation

    The use of web-based portals, while increasing in popularity in the fields of medicine and research, is rarely reported on in community-based rehabilitation programs. A program within the Pennsylvania Office of Vocational Rehabilitation’s Hiram G. Andrews Center, the Cognitive Skills Enhancement Program (CSEP), sought to enhance the organization of program and participant information and the communication between part- and full-time employees, supervisors and consultants. A telerehabilitation system was developed consisting of (1) a web-based portal to support a variety of clinical activities and (2) the Versatile Integrated System for Telerehabilitation (VISyTER) video-conferencing system to support the collaboration and delivery of rehabilitation services remotely. This descriptive evaluation examines the usability of the telerehabilitation system incorporating both the portal and VISyTER. System users include CSEP staff members located at three geographical sites and employed by two institutions. The IBM After-Scenario Questionnaire (ASQ) and Post-Study System Usability Questionnaire (PSSUQ), the Telehealth Usability Questionnaire (TUQ), and two demographic surveys were administered to gather both objective and subjective information. Results showed generally high levels of usability. Users commented that the telerehabilitation system improved communication, increased access to information, improved the speed of completing tasks, and had an appealing interface. Areas where users would like to see improvements, including the ease of accessing and editing documents and of searching for information, are discussed.

    A Taxonomy of Data Grids for Distributed Data Sharing, Management and Processing

    Data Grids have been adopted as the platform for scientific communities that need to share, access, transport, process and manage large data collections distributed worldwide. They combine high-end computing technologies with high-performance networking and wide-area storage management techniques. In this paper, we discuss the key concepts behind Data Grids and compare them with other data sharing and distribution paradigms such as content delivery networks, peer-to-peer networks and distributed databases. We then provide comprehensive taxonomies that cover various aspects of architecture, data transportation, data replication, and resource allocation and scheduling. Finally, we map the proposed taxonomy to various Data Grid systems, not only to validate the taxonomy but also to identify areas for future exploration. Through this taxonomy, we aim to categorise existing systems in order to better understand their goals and methodology, which helps evaluate their applicability to similar problems. The taxonomy also provides a "gap analysis" of the area, through which researchers can identify new issues for investigation. We also hope that the proposed taxonomy and mapping provide an easy way for new practitioners to understand this complex area of research.
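
    To illustrate how a taxonomy of this kind can drive the "gap analysis" mentioned above, the sketch below classifies a fictitious system along the four dimensions named in the abstract and flags any dimension left unclassified; the dimension labels echo the abstract, while the system and category names are made up for illustration.

# Sketch of using the taxonomy dimensions named above as a classification
# checklist; the dimensions echo the abstract, the system and category
# values are fictitious.
TAXONOMY_DIMENSIONS = ("architecture", "data transportation",
                       "data replication", "resource allocation and scheduling")

def classify(system_name, categories):
    """Attach a category to every taxonomy dimension for one system."""
    missing = set(TAXONOMY_DIMENSIONS) - set(categories)
    if missing:
        # An unclassified dimension is exactly the kind of gap the survey's
        # mapping is meant to expose.
        raise ValueError(f"{system_name}: unclassified dimensions {sorted(missing)}")
    return {"system": system_name, **categories}

example = classify("ExampleDataGrid", {
    "architecture": "hierarchical organisation",
    "data transportation": "bulk file transfer",
    "data replication": "static replica catalogue",
    "resource allocation and scheduling": "data-aware batch scheduling",
})
print(example)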

    A Survey of Resource Management Challenges in Multi-cloud Environment: Taxonomy and Empirical Analysis

    Cloud computing has attracted a great deal of interest from researchers and industrial firms since the term was first coined. Different perspectives and research problems, such as energy efficiency and security threats, to name but a few, have been addressed from a cloud computing perspective. However, the cloud computing environment still faces the major challenge of how to allocate and manage computational resources efficiently. Furthermore, due to the different architectures, networks and models used (e.g., federated clouds, VM migration, cloud brokerage), the complexity of resource management in the cloud has increased dramatically. Cloud brokers act as intermediaries between cloud providers and service consumers, and confusion among the cloud computing parties (consumers, brokers, data centres and service providers) over who is responsible for managing requests for cloud resources is a key issue. In a traditional scenario, upon renting various cloud resources from the providers, the cloud brokers sublet and manage these resources for the service consumers. However, providers usually deal with many brokers, and vice versa, and any dispute between providers and brokers leads to service unavailability, with the consumer as the only victim. Therefore, managing cloud resources and services still needs a great deal of attention and effort. This paper surveys cloud brokerage systems and the resource management issues they raise in multi-cloud environments.
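
    To make the provider-broker-consumer chain described above more tangible, the sketch below shows how a dispute at the provider-broker link surfaces to the consumer as service unavailability; all class and provider names are hypothetical and do not model any specific brokerage system.

# Minimal sketch of the provider -> broker -> consumer chain described above,
# showing how a provider-broker dispute propagates to the consumer as service
# unavailability. All names are hypothetical.
class Provider:
    def __init__(self, name):
        self.name = name
        self.serving = True          # flips to False when a dispute arises

    def lease(self, amount):
        if not self.serving:
            raise RuntimeError(f"{self.name}: lease suspended (dispute)")
        return f"{amount} VMs from {self.name}"

class Broker:
    """Rents resources from providers and sublets them to consumers."""
    def __init__(self, providers):
        self.providers = providers

    def fulfil(self, amount):
        for provider in self.providers:
            try:
                return provider.lease(amount)
            except RuntimeError:
                continue             # try the next provider, if any
        raise RuntimeError("no provider available: service unavailable to consumer")

providers = [Provider("cloud-a"), Provider("cloud-b")]
broker = Broker(providers)
print(broker.fulfil(4))              # served by cloud-a

providers[0].serving = False         # dispute with cloud-a
providers[1].serving = False         # dispute with cloud-b as well
try:
    broker.fulfil(4)
except RuntimeError as err:
    print(err)                       # the consumer is the one left without service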

    A Process Improvement Project: Demonstrating a Patient Portal to Increase Enrollment and Use in an Underserved Population with Chronic Illness

    High-risk, high-cost chronic conditions such as diabetes, asthma, and congestive heart failure are prevalent in the United States. Nearly half of all Americans have at least one chronic condition (Centers for Disease Control and Prevention, 2009). Almost four-fifths of total health care spending in the U.S. is related to high-risk chronic conditions (Baker, Johnson, Macaulay, & Birnbaum, 2011). The use of patient portals in ambulatory care may be an avenue toward improving chronic disease management. Portals can be used by patients to schedule appointments, send secure messages to their providers, request medication refills, review lab and test results, make payments, and perform other activities. The purpose of this quality improvement project was to evaluate whether combining a portal demonstration for patients during clinic visits with immediate enrollment would increase the use of a portal in a safety-net primary care clinic. Most of the participants (N = 51) were Caucasian, aged 38 to 47 years, high school graduates, and diabetic with no comorbid conditions. Over half were daily internet users. Participants’ use of the portal was recorded over three months. The use rate improved from none prior to the portal demonstration to 39.2%. The demonstration was timed and a cost analysis was performed to present a sustainability plan for adopting the demonstration in the primary care clinic. Increased portal use rates may over time translate into improved patient-provider communication and increased patient self-care, leading to improved chronic condition management.

    A software approach to enhancing quality of service in internet commerce

