
    An integrated dexterous robotic testbed for space applications

    An integrated dexterous robotic system was developed as a testbed to evaluate various robotics technologies for advanced space applications. The system configuration consisted of a Utah/MIT Dexterous Hand, a PUMA 562 arm, a stereo vision system, and a multiprocessing computer control system. In addition to these major subsystems, a proximity sensing system was integrated with the Utah/MIT Hand to provide the capability for non-contact sensing of a nearby object. A high-speed fiber-optic link was used to transmit digitized proximity sensor signals back to the multiprocessing control system. The hardware system was designed to satisfy the requirements for both teleoperated and autonomous operations. The software system was designed to exploit parallel processing capability, pursue functional modularity, incorporate artificial intelligence for robot control, allow high-level symbolic robot commands, maximize reusable code, minimize compilation requirements, and provide an interactive application development and debugging environment for the end users. An overview of the system hardware and software configurations is presented, and the implementation of subsystem functions is discussed.
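    The abstract describes a layered design in which high-level symbolic robot commands are mapped onto modular subsystem controllers. The sketch below illustrates that idea only in spirit: all class names, commands, and values are hypothetical and are not taken from the actual testbed software.

```python
# Hypothetical sketch of a high-level symbolic command layer dispatching to
# modular subsystem controllers (arm, hand, proximity sensing). Names and
# values are illustrative assumptions, not the testbed's real interfaces.

class ArmController:
    def move_to(self, target):
        print(f"PUMA arm: moving to {target}")

class HandController:
    def grasp(self, preshape):
        print(f"Utah/MIT hand: closing with preshape '{preshape}'")

class ProximitySensor:
    def distance_to_object(self):
        # In the real system, digitized sensor data arrives over a fiber-optic link.
        return 0.12  # metres, placeholder value

class SymbolicCommandInterpreter:
    """Maps high-level symbolic commands onto subsystem calls."""

    def __init__(self):
        self.arm = ArmController()
        self.hand = HandController()
        self.proximity = ProximitySensor()

    def execute(self, command, **kwargs):
        if command == "APPROACH":
            # Use non-contact proximity sensing to stop short of the object.
            if self.proximity.distance_to_object() > kwargs.get("standoff", 0.05):
                self.arm.move_to(kwargs["target"])
        elif command == "GRASP":
            self.hand.grasp(kwargs.get("preshape", "cylindrical"))
        else:
            raise ValueError(f"Unknown symbolic command: {command}")

if __name__ == "__main__":
    interpreter = SymbolicCommandInterpreter()
    interpreter.execute("APPROACH", target=(0.4, 0.1, 0.3))
    interpreter.execute("GRASP", preshape="cylindrical")
```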

    Throughput analysis of the IEEE 802.4 token bus standard under heavy load

    It has become clear in the last few years that there is a trend towards integrated digital services. Parallel to the development of the public Integrated Services Digital Network (ISDN) is service integration in the local area (e.g., a campus, a building, an aircraft). The types of services to be integrated depend very much on the specific local environment. However, applications tend to generate data traffic belonging to one of two classes. According to IEEE 802.4 terminology, the first major class of traffic is termed synchronous, such as packetized voice and data generated from other applications with real-time constraints; the second class is called asynchronous and includes most computer data traffic, such as file transfer or facsimile. The IEEE 802.4 token bus protocol, which was designed to support both synchronous and asynchronous traffic, is examined. The protocol is basically a timer-controlled token bus access scheme. By a suitable choice of the design parameters, it can be shown that access delay is bounded for synchronous traffic and that the bandwidth allocated to asynchronous traffic can be controlled. A throughput analysis of the protocol under heavy load, with constant channel occupation of synchronous traffic and constant token-passing times, is presented.
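    The timer-controlled access scheme described above can be illustrated with a small simulation: each station always transmits its synchronous traffic, but transmits asynchronous traffic only for whatever remains of a target token rotation time, so synchronous access delay stays bounded while asynchronous bandwidth is capped. This is a minimal sketch under heavy-load assumptions (backlogged asynchronous queues, constant synchronous bursts, constant token-passing times); the parameter names are illustrative rather than the standard's exact timer definitions.

```python
# Minimal discrete-event sketch of a timer-controlled token bus under heavy
# load. Parameters (ttrt, sync_time, token_pass_time) are illustrative.

def simulate_token_bus(n_stations=5, ttrt=20.0, sync_time=2.0,
                       token_pass_time=0.5, n_rotations=1000):
    last_token_arrival = [0.0] * n_stations  # when each station last held the token
    clock = 0.0
    async_sent = 0.0

    for _ in range(n_rotations):
        for station in range(n_stations):
            rotation_time = clock - last_token_arrival[station]
            last_token_arrival[station] = clock

            # Synchronous traffic is always served (bounded access delay).
            clock += sync_time

            # Asynchronous traffic only uses whatever is left of the target
            # rotation time; under heavy load it is assumed to be backlogged.
            residual = ttrt - rotation_time
            if residual > 0:
                clock += residual
                async_sent += residual

            clock += token_pass_time  # constant token-passing overhead

    return async_sent / clock  # fraction of time carrying asynchronous data

if __name__ == "__main__":
    print(f"asynchronous throughput share: {simulate_token_bus():.3f}")
```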

    Incremental compilation and deployment for OutSystems Platform

    OutSystems Platform is used to develop, deploy, and maintain enterprise web and mobile web applications. Applications are developed through a visual domain-specific language, in an integrated development environment, and compiled to a standard stack of web technologies. At the platform’s core, a compiler and a deployment service transform the visual model into a running web application. As applications grow, compilation and deployment times increase as well, impacting the developer’s productivity. In the previous model, a full application was the only compilation and deployment unit: when the developer published an application, even if only a very small aspect of it had changed, the application would be fully compiled and deployed. Our goal is to reduce compilation and deployment times for the most common use case, in which the developer performs small changes to an application before compiling and deploying it. We modified the OutSystems Platform to support a new incremental compilation and deployment model that reuses previous computations as much as possible in order to improve performance. In our approach, the full application is broken down into smaller compilation and deployment units, increasing what can be cached and reused. We also observed that this finer-grained model would benefit from a parallel execution model, so we created a task-driven scheduler that executes compilation and deployment tasks in parallel. Our benchmarks show a substantial improvement in compilation and deployment times for the aforementioned development scenario.
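    As a rough illustration of the incremental model described above, the following sketch splits an application into units, caches compiled units by a content hash, and recompiles only the changed units using a parallel pool of workers. It is a hedged sketch of the general approach; the unit granularity, cache keys, and function names are assumptions, not OutSystems Platform internals.

```python
# Sketch: content-hash caching of compilation units plus parallel execution of
# the remaining compilation tasks. Not OutSystems Platform code.

import hashlib
from concurrent.futures import ThreadPoolExecutor

compiled_cache = {}  # unit name -> (source hash, compiled artifact)

def compile_unit(name, source):
    digest = hashlib.sha256(source.encode()).hexdigest()
    cached = compiled_cache.get(name)
    if cached and cached[0] == digest:
        return cached[1]                      # reuse previous computation
    artifact = f"compiled({name})"            # placeholder for real code generation
    compiled_cache[name] = (digest, artifact)
    return artifact

def publish(application):
    """Compile all units of an application, reusing unchanged ones."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(compile_unit, name, src)
                   for name, src in application.items()}
    return {name: f.result() for name, f in futures.items()}

if __name__ == "__main__":
    app = {"ScreenA": "v1", "ScreenB": "v1", "Logic": "v1"}
    publish(app)                 # full compile on first publish
    app["ScreenA"] = "v2"        # developer makes a small change
    publish(app)                 # only ScreenA is recompiled
```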

    Towards Intelligent Runtime Framework for Distributed Heterogeneous Systems

    Scientific applications strive for increased memory and computing performance, requiring massive amounts of data and time to produce results. Applications utilize large-scale, parallel computing platforms with advanced architectures to accommodate their needs. However, developing performance-portable applications for modern, heterogeneous platforms requires considerable effort and expertise in both the application and systems domains. This is especially relevant for unstructured applications whose workflow is not statically predictable due to their heavily data-dependent nature. One possible solution for this problem is the introduction of an intelligent Domain-Specific Language (iDSL) that transparently helps to maintain correctness, hides the idiosyncrasies of low-level hardware, and scales applications. An iDSL includes domain-specific language constructs, a compilation toolchain, and a runtime providing task scheduling, data placement, and workload balancing across and within heterogeneous nodes. In this work, we focus on the runtime framework. We introduce a novel design and extension of a runtime framework, the Parallel Runtime Environment for Multicore Applications. In response to the ever-increasing intra- and inter-node concurrency, the runtime system supports efficient task scheduling and workload balancing at both levels while allowing the development of custom policies. Moreover, the new framework provides abstractions supporting the utilization of heterogeneous distributed nodes consisting of CPUs and GPUs, and is extensible to other devices. We demonstrate that by utilizing this work, an application (or the iDSL) can scale its performance on heterogeneous exascale-era supercomputers with minimal effort. A future goal for this framework (out of the scope of this thesis) is to be integrated with machine learning to further improve its decision-making and performance. As a bridge to this goal, since the framework is under development, we experiment with data from nuclear physics particle accelerators and demonstrate the significant improvements achieved by utilizing machine learning in the hit-based track reconstruction process.
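    To make the scheduling and load-balancing idea concrete, the sketch below greedily assigns tasks to the least-loaded compatible worker across a mix of CPUs and GPUs. It is only an illustration of the concept; the runtime described in the thesis uses its own task, policy, and device abstractions, and none of the names below are taken from it.

```python
# Sketch: greedy load balancing of tasks across heterogeneous workers.
# All names and cost values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    kind: str                 # "cpu" or "gpu"
    load: float = 0.0         # accumulated estimated cost

@dataclass
class Task:
    name: str
    cost: float
    device: str = "any"       # "cpu", "gpu", or "any"

def schedule(tasks, workers):
    """Assign each task to the least-loaded worker that can run it."""
    plan = []
    for task in sorted(tasks, key=lambda t: -t.cost):   # largest tasks first
        candidates = [w for w in workers if task.device in ("any", w.kind)]
        target = min(candidates, key=lambda w: w.load)
        target.load += task.cost
        plan.append((task.name, target.name))
    return plan

if __name__ == "__main__":
    workers = [Worker("cpu0", "cpu"), Worker("cpu1", "cpu"), Worker("gpu0", "gpu")]
    tasks = [Task("fft", 4.0, "gpu"), Task("io", 1.0, "cpu"),
             Task("reduce", 2.0), Task("stencil", 3.0, "gpu")]
    for assignment in schedule(tasks, workers):
        print(assignment)
```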

    Parallel Processing of Image Segmentation Data Using Hadoop

    Sequential programming is gradually being replaced by distributed and parallel computing, which is widely used in the computing industry to handle big-data tasks and high-end computing applications involving huge image and video data banks. Image processing using parallel computation is also gaining momentum. Researchers are proposing various methodologies that apply parallel computing to large-scale image processing tasks while comparing their performance against sequential programming. At the same time, there are constraints on how far task performance can be pushed, even with high-end multi-core CPUs and advanced buses and interconnects that offer high bandwidth and low system latency. Notably, no standardized image processing task is available for evaluating a single-node system. In this paper, we propose an efficient parallel processing algorithm for image segmentation, with the foremost aim of analyzing the threshold data size at which the proposed method outperforms the sequential programming method in terms of task execution time, by analyzing the distribution of average CPU core usage and its threads over the execution time. The proposed methodology could be useful for researchers, as it can perform multiple image segmentations in parallel, saving the user a lot of time. For the purpose of comparison, we also implemented the same image segmentation task using a sequential method of programming in an integrated development environment.
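    The core experiment, finding the data size at which parallel segmentation starts to beat the sequential version, can be sketched as follows. The sketch uses Python multiprocessing on a single node purely as a stand-in for the Hadoop setup in the paper, and the thresholding step, image sizes, and batch sizes are placeholder assumptions.

```python
# Sketch: compare sequential vs. parallel threshold segmentation over growing
# batch sizes to locate the crossover point. A local stand-in for Hadoop.

import time
import numpy as np
from multiprocessing import Pool

def segment(image, threshold=128):
    """Binary threshold segmentation of one grayscale image."""
    return (image > threshold).astype(np.uint8)

def run_sequential(images):
    return [segment(img) for img in images]

def run_parallel(images, workers=4):
    with Pool(workers) as pool:
        return pool.map(segment, images)

if __name__ == "__main__":
    for n_images in (8, 64, 256):
        batch = [np.random.randint(0, 256, (1024, 1024), dtype=np.uint8)
                 for _ in range(n_images)]
        t0 = time.perf_counter(); run_sequential(batch)
        t1 = time.perf_counter(); run_parallel(batch)
        t2 = time.perf_counter()
        print(f"{n_images:4d} images: sequential {t1 - t0:.2f}s, "
              f"parallel {t2 - t1:.2f}s")
```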

    Soleil-X: turbulence, particles, and radiation in the Regent programming language

    The Predictive Science Academic Alliance Program (PSAAP) II at Stanford University is developing an exascale-ready multi-physics solver to investigate particle-laden turbulent flows in a radiation environment for solar energy receiver applications. In order to simulate the proposed concentrated particle-based receiver design, three distinct but coupled physical phenomena must be modeled: fluid flow, Lagrangian particle dynamics, and the transport of thermal radiation. Therefore, three different physics solvers (fluid, particles, and radiation) must run concurrently, with significant cross-communication, in an integrated multi-physics simulation. However, each solver uses substantially different algorithms and data access patterns. Coordinating the overall data communication, balancing the computational load, and scaling these different physics solvers together on modern massively parallel, heterogeneous high performance computing systems presents several major challenges. We have adopted the Legion programming system, via the Regent programming language, and its task parallel programming model to address these challenges. Our multi-physics solver, Soleil-X, is written entirely in the high-level Regent programming language and is one of the largest and most complex applications written in Regent to date. At this workshop we will give an overview of the software architecture of Soleil-X and discuss how our multi-physics solver was designed to use the task parallel programming model provided by Legion. We will also discuss the development experience, scaling, performance, portability, and multi-physics simulation results.
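    The coupling pattern described above, three solvers advancing concurrently each timestep and then exchanging data, can be sketched as follows. The sketch is written in Python with a thread pool purely for brevity; in Soleil-X the concurrency is expressed through Legion/Regent tasks, and the solver bodies and state fields below are placeholders.

```python
# Sketch of task-parallel coupling of three physics solvers per timestep.
# Solver bodies and state fields are placeholders, not Soleil-X code.

from concurrent.futures import ThreadPoolExecutor

def fluid_step(state):
    return {"velocity_field": "updated"}

def particle_step(state):
    return {"particle_temps": "updated"}

def radiation_step(state):
    return {"radiative_flux": "updated"}

def timestep(state):
    with ThreadPoolExecutor(max_workers=3) as pool:
        # The three physics tasks run concurrently; a task-parallel runtime
        # such as Legion would extract this concurrency from data dependencies.
        futures = [pool.submit(f, state)
                   for f in (fluid_step, particle_step, radiation_step)]
        results = [f.result() for f in futures]
    # Cross-communication: merge coupling quantities back into a shared state.
    for partial in results:
        state.update(partial)
    return state

if __name__ == "__main__":
    state = {"time": 0.0}
    for _ in range(3):
        state = timestep(state)
    print(sorted(state.keys()))
```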

    Eclipse Parallel Tools Platform (PTP)

    The Eclipse Parallel Tools Platform (PTP) combines tools for coding, static analysis, revision control, refactoring, debugging, job submission, and more into a best-practice integrated environment for increasing developer productivity. Leveraging the successful open-source Eclipse platform, PTP helps manage the complexity of scientific code development and optimization on diverse platforms, including systems managed by LoadLeveler, Torque, or LSF, and provides tools to gain insights into complex code that are otherwise difficult to attain. This talk gives a brief overview of PTP's functionality for supporting the development of parallel applications. Its main components are presented through a demonstration of how to apply PTP for coding parallel applications, submitting jobs, and monitoring supercomputer status. The system monitoring is based on the monitoring application LLview developed by the Jülich Supercomputing Centre (JSC). To illustrate how PTP's system monitoring was developed as a set of Eclipse plug-ins, an introduction to the development of Eclipse plug-ins is given, which allows for extending any part of Eclipse with new functionality.

    Energy-Aware Development and Labeling for Mobile Applications

    Today, mobile devices such as smartphones and tablets have become ubiquitous and are used everywhere. Millions of software applications can be purchased and installed on these devices, customizing them to personal interests and needs. However, the frequent use of mobile devices has made a new problem omnipresent: their limited operation time, due to their limited energy capacities. Although energy consumption can be considered a hardware problem, the amount of energy required by today’s mobile devices depends heavily on their current workloads and is thus highly influenced by the software running on them. Hence, although only hardware modules consume energy, operating systems, middleware services, and mobile applications strongly influence the energy consumption of mobile devices, depending on how efficiently they use and control hardware modules. Nevertheless, most of today’s mobile applications totally ignore their influence on the devices’ energy consumption, leading to wasted energy, shorter operation times, and thus frustrated application users. A major reason for this energy-unawareness is the lack of appropriate tooling for the development of energy-aware mobile applications. Because many of today’s mobile applications behave energy-unaware and various applications providing similar services exist, users aim to optimize their devices by installing applications known to be energy-saving or energy-aware, meaning that they consume less energy while providing the same services as their competitors. However, scarce information on the applications’ energy usage is available and, thus, users are forced to install and try many applications manually before finding those that fulfill their personal functional, non-functional, and energy requirements. This thesis addresses the lack of tooling for the development of energy-aware mobile applications and the lack of comparability of mobile applications in terms of energy-awareness with the following two contributions. First, it proposes JouleUnit, an energy profiling and testing framework that uses unit tests to execute application workloads while profiling their energy consumption in parallel. By extending a well-known testing concept and providing tooling integrated into the Eclipse development environment, JouleUnit has a low learning curve and integrates easily into existing development and testing processes. Second, for the comparability of mobile applications in terms of energy efficiency, this thesis proposes an energy benchmarking and labeling service. Mobile applications belonging to the same usage domain are energy-profiled while executing a usage-domain-specific benchmark in parallel; thus, their energy consumption for specific use cases can be evaluated and compared afterwards. To abstract and summarize the profiling results, energy labels are derived that summarize each application’s energy consumption over all evaluated use cases as a simple energy grade, ranging from A to G. In addition, users can decide how to weight specific use cases for the computation of energy grades, as different users are likely to use the same applications differently. The energy labeling service has been implemented for Android applications and evaluated for three different usage domains (web browsers, email clients, and live wallpapers), showing that different mobile applications indeed differ in their energy consumption for the same services and, thus, that their comparison is both possible and sensible. To the best of my knowledge, this is the first approach that provides mobile application users with comparable energy consumption information on mobile applications without requiring them to install and test the applications on their own devices.
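    The grading step can be sketched as follows, assuming each use case yields an energy measurement in joules, the user supplies per-use-case weights, and the grade is assigned relative to the best and worst applications observed in the same usage domain. The thresholds, names, and values below are illustrative assumptions rather than the thesis' actual labeling rules.

```python
# Sketch: weighted per-use-case energy aggregation mapped onto grades A-G.
# Measurements, weights, and domain bounds are hypothetical example values.

GRADES = "ABCDEFG"

def weighted_energy(measurements, weights):
    """Weighted average energy over use cases (measurements in joules)."""
    total_weight = sum(weights.values())
    return sum(measurements[uc] * w for uc, w in weights.items()) / total_weight

def energy_grade(app_energy, domain_min, domain_max):
    """Map an energy value onto grades A (best) to G (worst)."""
    span = (domain_max - domain_min) / len(GRADES)
    index = max(0, min(int((app_energy - domain_min) / span), len(GRADES) - 1))
    return GRADES[index]

if __name__ == "__main__":
    # Hypothetical measurements for one web browser over three use cases.
    measurements = {"load_page": 12.0, "scroll": 4.5, "video": 30.0}
    weights = {"load_page": 0.5, "scroll": 0.3, "video": 0.2}
    energy = weighted_energy(measurements, weights)
    print(energy_grade(energy, domain_min=8.0, domain_max=40.0))
```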