
    Community Development Evaluation Storymap and Legend

    Community-based organizations, funders, and intermediary organizations working in the community development field have a shared interest in building stronger organizations and stronger communities. Through evaluation, these organizations can learn how their programs and activities contribute to these goals, and how to improve their effectiveness and the well-being of their communities. Yet evaluation is rarely seen as part of a non-judgmental organizational learning process; instead, the term "evaluation" has often generated anxiety and confusion. The Community Development Storymap project is a response to those concerns. Illustrations found in this document were produced by Grove Consultants.

    Contributing to VRPN with a new server for haptic devices (ext. version)

    This article is an extended version of the poster paper: Cuevas-Rodriguez, M., Gonzalez-Toledo, D., Molina-Tanco, L., Reyes-Lecuona, A., 2015, November. "Contributing to VRPN with a new server for haptic devices". In Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM. http://dx.doi.org/10.1145/2821592.2821639 VRPN is a middleware for accessing Virtual Reality peripherals. The VRPN standard distribution supports Geomagic® (formerly Phantom) haptic devices through the now-superseded GHOST library. This paper presents VRPN OpenHaptics Server, a contribution to the VRPN library that fully reimplements VRPN support for Geomagic haptic devices. The implementation is based on the OpenHaptics v3.0 HLAPI layer, which supports all Geomagic haptic devices. We present the architecture of the contributed server, a detailed description of the offered API, and an analysis of its performance in a set of example scenarios. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
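    The server pattern the abstract describes — a middleware server that polls a haptic device and publishes its state to remote clients — can be sketched as follows. This is a schematic illustration only: the class and method names below are hypothetical and do not reproduce the actual VRPN or OpenHaptics APIs.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Pose:
    """Stylus position (illustrative; a real haptic server also reports
    orientation, button state, and force)."""
    x: float
    y: float
    z: float

class HapticServer:
    """Minimal sketch of the device-server pattern: each mainloop()
    iteration polls the device and pushes the new state to every
    registered client handler. Names are hypothetical, not the actual
    VRPN OpenHaptics Server API."""

    def __init__(self, read_device: Callable[[], Pose]):
        self.read_device = read_device
        self.handlers: List[Callable[[Pose], None]] = []

    def register_change_handler(self, handler: Callable[[Pose], None]) -> None:
        self.handlers.append(handler)

    def mainloop(self) -> None:
        pose = self.read_device()      # query the device (HLAPI in the real server)
        for handler in self.handlers:  # notify each connected client
            handler(pose)

# Usage: a fake device and a client callback that records poses.
received: List[Pose] = []
server = HapticServer(read_device=lambda: Pose(0.1, 0.2, 0.3))
server.register_change_handler(received.append)
server.mainloop()
```

    In the real system the polling side talks to the OpenHaptics HLAPI and the handlers are remote VRPN clients; the callback-registration shape of the loop is the part this sketch preserves.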

    Grid Infrastructure for Domain Decomposition Methods in Computational ElectroMagnetics

    The accurate and efficient solution of Maxwell's equations is the problem addressed by the scientific discipline called Computational ElectroMagnetics (CEM). Many macroscopic phenomena in a great number of fields are governed by this set of differential equations: electronics, geophysics, medical and biomedical technologies, and virtual EM prototyping, besides the traditional antenna and propagation applications. Therefore, many efforts are focused on developing new and more efficient approaches to solving Maxwell's equations, and interest in CEM applications keeps growing. Several problems that were hard to tackle a few years ago can now be addressed easily thanks to the reliability and flexibility of new technologies, together with increased computational power. This technological evolution opens the possibility of addressing large and complex tasks. Many of these applications aim to simulate electromagnetic behavior, for example the input impedance and radiation pattern in antenna problems, or the Radar Cross Section in scattering applications. Problems whose solution requires high accuracy, by contrast, need full-wave analysis techniques, e.g., in the virtual prototyping context, where the objective is to obtain reliable simulations in order to minimize the number of measurements and, as a consequence, their cost. Other tasks require the analysis of complete structures (including a high number of details) by directly simulating a CAD model. This approach relieves researchers of the burden of removing useless details, while maintaining the original complexity and taking all details into account. Unfortunately, it implies (a) high computational effort, due to the increased number of degrees of freedom, and (b) a worsening of the spectral properties of the linear system during complex analyses.
The above considerations underline the need to identify information technologies that ease the computation of solutions and speed up the required processing. The authors' analysis and experience suggest that Grid Computing techniques are very useful for these purposes. Grids appear mainly in high-performance computing environments, where hundreds of off-the-shelf nodes are linked together and work in parallel to solve problems that previously could only be addressed sequentially or on supercomputers. Grid Computing was developed to process enormous amounts of data, and it enables large-scale resource sharing to solve problems in distributed scenarios. The main advantage of the Grid comes from parallel computing: if a problem can be split into smaller tasks that can be executed independently, its solution can be computed considerably faster. To exploit this advantage, it is necessary to identify a technique able to split the original electromagnetic task into a set of smaller subproblems. The Domain Decomposition (DD) technique, based on the block-generation algorithm introduced in Matekovits et al. (2007) and Francavilla et al. (2011), perfectly addresses our requirements (see Section 3.4 for details). In this chapter, a Grid Computing infrastructure is presented. This architecture allows parallel block execution by distributing tasks to the nodes that belong to the Grid. The set of nodes is composed of both physical and virtualized machines, which provides great flexibility and increases the available computational power. Furthermore, the presence of virtual nodes allows full and efficient use of the Grid: the presented architecture can be shared by different users running different applications
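    The split-and-distribute idea can be illustrated with a minimal sketch: a problem is partitioned into independent blocks, each block is solved on a separate worker, and the partial results are combined. Here a Python thread pool stands in for the grid nodes and a toy sum-of-squares stands in for the electromagnetic subproblem; the partitioning itself is a placeholder, not the block-generation algorithm of Matekovits et al.

```python
from concurrent.futures import ThreadPoolExecutor

def solve_block(block):
    """Stand-in for solving one independent subproblem; in the real
    infrastructure each block is an electromagnetic subdomain."""
    return sum(x * x for x in block)

def solve_distributed(blocks, max_workers=4):
    """Dispatch independent blocks to workers in parallel (as the grid
    dispatches them to physical or virtual nodes), then combine the
    partial results."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        partial_results = list(pool.map(solve_block, blocks))
    return sum(partial_results)  # trivial combination step

total = solve_distributed([[1, 2], [3, 4], [5, 6]])  # 5 + 25 + 61 = 91
```

    The key property exploited is the one stated above: because the blocks are independent, the per-block solves can run concurrently, and the wall-clock time shrinks roughly with the number of available nodes.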

    Past and Future Operations Concepts of NASA's Earth Science Data and Information System

    NASA committed to supporting the collection and distribution of Earth science data to study global change in the 1990s. A series of Earth science remote sensing satellites, the Earth Observing System (EOS), was to be the centerpiece. The concept for the science data system, the EOS Data and Information System (EOSDIS), created new challenges in processing observations from multiple satellite instruments for climate research and in distributing global-coverage remote sensing products to a large and growing science research community. EOSDIS was conceived to facilitate easy access to EOS science data for a wide, heterogeneous national and international community of users. It was to provide a spectrum of services designed for research scientists working on NASA focus areas but open to the general public and the international science community. EOSDIS would give researchers tools and assistance in searching, selecting, and acquiring data, allowing them to focus on Earth science climate research rather than complex product generation. The goals were to promote the exchange of data and research results and to expedite the development of new geophysical algorithms. The system architecture had to accommodate a diversity of data types, data acquisition and product generation operations, data access requirements, and different centers of science discipline expertise. Steps were taken early to make EOSDIS flexible by distributing responsibility for basic services. Many of the system operations concept decisions made in the 1990s continue to this day. Once implemented, concepts such as the EOSDIS data model played a critical role in developing effective data services, now a hallmark of EOSDIS. In other cases, the EOSDIS architecture has evolved to enable more efficient operations, taking advantage of new technology and thereby shifting resources toward data services and away from operating and maintaining infrastructure. 
In looking to the future, EOSDIS may be able to take advantage of commercial compute environments for infrastructure and further enable large-scale climate research. In this presentation, we discuss key EOSDIS operations concepts from the 1990s, how they were implemented and evolved in the architecture, and examine concepts and architectural challenges for EOSDIS operations utilizing commercial cloud services.
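    The collection/granule hierarchy at the heart of a data model like the one the abstract credits can be illustrated with a small sketch: a collection groups the files of one instrument product, each granule is one observation file, and temporal search is the basic discovery service. The field names below are illustrative, not the actual EOSDIS metadata schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Granule:
    """One data file: a single observation over a time range."""
    granule_id: str
    start_time: str  # ISO 8601 timestamps, so string comparison orders them
    end_time: str

@dataclass
class Collection:
    """A dataset: all granules sharing one instrument product and version."""
    short_name: str
    version: str
    granules: List[Granule] = field(default_factory=list)

    def search(self, start: str, end: str) -> List[Granule]:
        """Temporal search: granules whose time range overlaps [start, end)."""
        return [g for g in self.granules
                if g.start_time < end and g.end_time > start]

# Usage with a hypothetical two-granule collection.
coll = Collection("MOD021KM", "6.1", [
    Granule("G1", "2024-01-01T00:00:00Z", "2024-01-01T00:05:00Z"),
    Granule("G2", "2024-01-02T00:00:00Z", "2024-01-02T00:05:00Z"),
])
hits = coll.search("2024-01-01T00:00:00Z", "2024-01-01T12:00:00Z")
```

    Separating collection-level from granule-level metadata is what lets a user search once across a product and then acquire only the files covering the time and place of interest.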

    IMPLEMENTATION OF A LOCALIZATION-ORIENTED HRI FOR WALKING ROBOTS IN THE ROBOCUP ENVIRONMENT

    This paper presents the design and implementation of a human–robot interface capable of evaluating robot localization performance and maintaining full control of robot behaviors in the RoboCup domain. The system consists of legged robots, behavior modules, an overhead visual tracking system, and a graphical user interface. A human–robot communication framework is designed for executing cooperative and competitive processing tasks between users and robots, using an object-oriented and modularized software architecture for operability and functionality. Experimental results are presented to show the performance of the proposed system based on simulated and real-time information.
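    The evaluation idea — comparing the robot's own localization estimate against ground truth from the overhead tracking camera — can be sketched minimally as follows. The class and method names are hypothetical, not the paper's actual software.

```python
import math

class LocalizationEvaluator:
    """Sketch of localization evaluation: log the distance between the
    robot's self-localization estimate and the overhead tracker's
    ground-truth position, then report the mean error."""

    def __init__(self):
        self.errors = []

    def record(self, estimate, ground_truth):
        ex, ey = estimate       # robot's believed (x, y) on the field
        gx, gy = ground_truth   # overhead camera's measured (x, y)
        self.errors.append(math.hypot(ex - gx, ey - gy))

    def mean_error(self):
        return sum(self.errors) / len(self.errors)

# Usage: two samples, one accurate and one badly off.
ev = LocalizationEvaluator()
ev.record((1.0, 2.0), (1.0, 2.5))  # 0.5 m off
ev.record((0.0, 0.0), (3.0, 4.0))  # 5.0 m off
```

    An interface built around such a metric lets the operator watch localization quality degrade in real time and intervene in robot behavior when the estimate drifts.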

    ISIS and META projects

    The ISIS project has developed a new methodology, virtual synchrony, for writing robust distributed software. High-performance multicast, large-scale applications, and wide-area networks are the focus of interest. Several interesting applications that exploit the strengths of ISIS, including an NFS-compatible replicated file system, are being developed. The META project addresses distributed control in a soft real-time environment incorporating feedback. This domain encompasses examples as diverse as monitoring inventory and consumption on a factory floor and performing load balancing on a distributed computing system. One of the first uses of META is distributed application management: the tasks of configuring a distributed program, dynamically adapting to failures, and monitoring its performance. Recent progress and current plans are reported.
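    The core guarantee of virtual synchrony — every group member delivers the same multicasts in the same order — can be illustrated with a toy sequencer-based ordered multicast. This sketches the ordering property a replicated file system relies on, not the ISIS protocol itself.

```python
class Group:
    """Toy process group: a central sequence counter totally orders
    multicasts, so every member's delivery log is identical. This is an
    illustration of the virtual-synchrony ordering guarantee only, not
    the ISIS implementation."""

    def __init__(self, member_ids):
        self.next_seq = 0
        self.logs = {m: [] for m in member_ids}  # per-member delivery log

    def multicast(self, sender, msg):
        seq = self.next_seq       # sequencer assigns one global position
        self.next_seq += 1
        for log in self.logs.values():
            log.append((seq, sender, msg))  # delivered to every member

# Usage: two replicas of a file plus a witness all see the same updates.
g = Group(["A", "B", "C"])
g.multicast("A", "write block 7")
g.multicast("B", "write block 9")
```

    Because all logs agree, each replica can apply the updates deterministically and stay consistent — the property an NFS-compatible replicated file system needs from its multicast layer.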