24 research outputs found

    Autogen: The Mars 2001 Odyssey and the Autogen Process

    In many deep space and interplanetary missions, it is widely recognized that the scheduling of the commands that operate a spacecraft can follow very regular patterns. In these instances, it is highly desirable to encode this scheduling knowledge in algorithms so that the development of command sequences can be automated. Doing so can dramatically reduce the number of people and work-hours required to develop a sequence. The autogen process developed for the Mars 2001 Odyssey spacecraft is one implementation of this concept. It combines robust scheduling algorithms with software compatible with the preexisting uplink software, and it reduced the duration of some sequence generation processes from weeks to minutes. This paper outlines the autogen tools and processes and describes how they have been implemented for the various phases of the Mars 2001 Odyssey mission.
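The core idea above, turning a recurring scheduling pattern into an algorithm that expands predicted events into commands, can be sketched as follows. This is an illustrative toy, not the actual autogen software; the event types, rules, and command mnemonics are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Command:
    time: float      # seconds from sequence start
    mnemonic: str    # command identifier (invented for illustration)

def autogen_sequence(events, rules):
    """Expand each predicted event into commands using per-event-type rules.

    events: list of (event_type, event_time) tuples
    rules:  dict mapping event_type -> list of (offset_s, mnemonic)
    """
    seq = []
    for ev_type, ev_time in events:
        for offset, mnemonic in rules.get(ev_type, []):
            seq.append(Command(ev_time + offset, mnemonic))
    return sorted(seq, key=lambda c: c.time)

# Example rule: power a transponder on 60 s before each ground-station pass
# and off 300 s after it starts.
rules = {"pass": [(-60.0, "PWR_ON_TXPDR"), (300.0, "PWR_OFF_TXPDR")]}
events = [("pass", 1000.0), ("pass", 7000.0)]
for cmd in autogen_sequence(events, rules):
    print(cmd.time, cmd.mnemonic)
```

Once rules like these are validated, regenerating a sequence for a new set of predicted events is mechanical, which is where the weeks-to-minutes reduction comes from.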

    The Demand Absorption Coefficient of a Production Line

    In this article, the demand absorption coefficient is proposed as a measure that quantifies the flexibility of a process against variations in its environment, in the context of robust planning. The coefficient is defined as the slope of the function relating throughput and demand rates; it measures how demand disturbances translate into output production rates, depending on the capacity and inventory buffers of the production system. Models of serial production lines with different numbers of machines, capacities, and buffer sizes are solved by means of a decomposition method using phase-type distributions to study the behavior of this coefficient.
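Since the coefficient is defined as a slope of throughput against demand rate, a minimal sketch (not the paper's phase-type decomposition method; the data below are invented) is a least-squares slope estimate over observed operating points:

```python
# Illustrative sketch: estimating the demand absorption coefficient as the
# least-squares slope of observed throughput against demand rate.

def absorption_coefficient(demand, throughput):
    """Slope of the throughput-vs-demand relation, fitted by least squares."""
    n = len(demand)
    mean_d = sum(demand) / n
    mean_t = sum(throughput) / n
    cov = sum((d - mean_d) * (t - mean_t) for d, t in zip(demand, throughput))
    var = sum((d - mean_d) ** 2 for d in demand)
    return cov / var

# A line with ample capacity absorbs demand almost one-for-one (slope near 1);
# a saturated line absorbs little (slope near 0).
demand = [0.5, 0.6, 0.7, 0.8, 0.9]
throughput = [0.5, 0.6, 0.69, 0.76, 0.80]   # flattens as capacity binds
print(round(absorption_coefficient(demand, throughput), 3))   # → 0.76
```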

    Risk Intelligence: Making Profit from Uncertainty in Data Processing System

    In extreme-scale data processing systems, fault tolerance is an essential and indispensable part. Proactive fault tolerance schemes (such as speculative execution in the MapReduce framework) are introduced to dramatically improve the response time of job executions when failure becomes the norm rather than the exception. Efficient proactive fault tolerance schemes require precise knowledge of task executions, which has been an open challenge for decades. To address this issue, in this paper we design and implement RiskI, a profile-based prediction algorithm combined with a risk-aware task assignment algorithm, to accelerate task executions while taking the uncertain nature of tasks into account. Our design demonstrates that this uncertainty brings not only great challenges but also new opportunities: with careful design, we can benefit from it. We implement the idea in Hadoop 0.21.0, and the experimental results show that, compared with the traditional LATE algorithm, response time can be improved by 46% with the same system throughput.
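For context, the LATE baseline mentioned above speculatively re-executes the running task expected to finish last, judged by observed progress rates. A simplified sketch of that heuristic (not RiskI itself; the task fields and threshold are illustrative) might look like:

```python
# Simplified LATE-style speculation: estimate each running task's time to
# completion from its progress rate and speculatively re-execute the one
# expected to finish last, restricted to the slowest fraction of tasks.

def pick_speculative_task(tasks, now, slow_fraction=0.25):
    """tasks: list of dicts with 'id', 'progress' (0..1), 'start' (timestamp).

    Returns the id of the running task with the largest estimated time left
    among tasks whose progress rate falls in the slowest quartile, or None.
    """
    running = [t for t in tasks if 0 < t["progress"] < 1]
    if not running:
        return None
    rates = [(t, t["progress"] / (now - t["start"])) for t in running]
    rates.sort(key=lambda tr: tr[1])                 # slowest first
    k = max(1, int(len(rates) * slow_fraction))
    slow = rates[:k]
    # Estimated time left = remaining work / observed progress rate.
    worst = max(slow, key=lambda tr: (1 - tr[0]["progress"]) / tr[1])
    return worst[0]["id"]

tasks = [
    {"id": "t1", "progress": 0.9, "start": 0.0},
    {"id": "t2", "progress": 0.2, "start": 0.0},   # straggler
    {"id": "t3", "progress": 0.8, "start": 0.0},
]
print(pick_speculative_task(tasks, now=100.0))     # → t2
```

RiskI's contribution, per the abstract, is to replace this purely progress-based estimate with profile-based predictions and to make the assignment risk-aware.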

    Master of Science

    Advances in silicon photonics are enabling hybrid integration of optoelectronic circuits alongside current complementary metal-oxide-semiconductor (CMOS) technologies. To fully exploit this integration, it is important to explore the effects of thermal gradients on optoelectronic devices. The sensitivity of optical components to temperature variation gives rise to design issues in silicon-on-insulator (SOI) optoelectronic technology. The thermo-electric effect becomes problematic with the integration of hybrid optoelectronic systems, where heat is generated by electrical components. Through the thermo-optic effect, the optical signals are in turn affected, and compensation is necessary. To improve the capability of optical SOI designs, optical-wave-simulation models and the characteristic thermal operating environment need to be integrated to ensure proper operation. To exploit the potential for compensation through resynthesis, temperature characterization at the system level is required. Thermal characterization within the flow of physical design automation tools for hybrid optoelectronic technology enables device resynthesis and validation at the system level; additionally, thermally aware placement and routing would be possible. A simplified abstraction helps the active design process within the contemporary computer-aided design (CAD) flow when designing optoelectronic features. This thesis investigates an abstraction model to characterize the effect of a temperature gradient on optoelectronic circuit operation. To make the approach scalable, reduced-order computations are desired that effectively model the effect of temperature on an optoelectronic layout; this is achieved using an electrical analogy to heat flow. Given an optoelectronic circuit, we abstract thermal flow as a thermal resistance network and compute the temperature distribution throughout the layout. Subsequently, we show how this thermal distribution across the optoelectronic system layout can be integrated within optoelectronic device- and system-level analysis tools.
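A minimal sketch of the electrical analogy described above: temperature plays the role of voltage, heat flow of current, and thermal resistance of electrical resistance, so node temperatures solve G·T = P, where G is the thermal conductance matrix and P the injected power. The two-node network and all resistance/power values below are invented for illustration, not taken from the thesis.

```python
def solve_linear(a, b):
    """Solve a*x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

# Two on-chip nodes: node 0 carries a 0.5 W driver, node 1 an unpowered ring.
# R01 couples the nodes; each node also has a resistance to ambient (reference).
R01, R0a, R1a = 10.0, 20.0, 40.0        # K/W, assumed values
G = [[1/R01 + 1/R0a, -1/R01],
     [-1/R01,        1/R01 + 1/R1a]]
P = [0.5, 0.0]                          # W injected at each node
T = solve_linear(G, P)                  # temperature rise above ambient, K
print([round(t, 2) for t in T])         # → [7.14, 5.71]
```

Even the unpowered node heats up through the coupling resistance, which is exactly the kind of layout-level interaction the thermal resistance network abstraction is meant to expose to CAD tools.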

    Survey on job scheduling mechanisms in grid environment

    Grid systems provide geographically distributed resources for both computation-intensive and data-intensive applications. These applications generate large data sets. However, the high latency imposed by the underlying technologies upon which a grid system is built (such as the Internet and the WWW) impedes effective access to such huge and widely distributed data. To minimize this impediment, jobs need to be scheduled across grid environments to achieve efficient data access. Scheduling multiple data requests submitted by grid users onto a grid environment is NP-hard, so there is no single best scheduling algorithm that cuts across all grid computing environments. Job scheduling is one of the key research areas in grid computing, and in the recent past many researchers have proposed mechanisms to help schedule user jobs in grid systems. Characteristic features of the grid components, such as machine types and the nature of the jobs at hand, mean that an appropriate scheduling algorithm must be chosen to match a given grid environment. The aim of scheduling is to achieve the maximum possible system throughput and to match application needs with the available computing resources. This paper is motivated by the need to explore the various job scheduling techniques alongside their areas of implementation. The paper systematically analyzes the strengths and weaknesses of selected approaches to grid job scheduling. This helps researchers better understand the concept of scheduling, can contribute to the development of more efficient and practical scheduling algorithms, and will also help interested researchers carry out further work in this dynamic research area.
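One classic heuristic in this space (commonly covered in grid scheduling surveys, though the abstract does not name specific algorithms) is Min-Min: repeatedly pick the job whose earliest possible completion time is smallest and assign it to the machine achieving that time. A sketch, with an invented expected-time-to-compute (ETC) matrix:

```python
def min_min(etc):
    """Min-Min scheduling heuristic.

    etc[j][m]: expected time to compute job j on machine m.
    Returns (assignment, ready) where assignment maps job -> machine and
    ready[m] is each machine's finish time (the makespan is max(ready)).
    """
    n_jobs, n_machines = len(etc), len(etc[0])
    ready = [0.0] * n_machines
    unassigned = set(range(n_jobs))
    assignment = {}
    while unassigned:
        # Among all (job, machine) pairs, pick the minimum completion time.
        j, m = min(
            ((j, m) for j in unassigned for m in range(n_machines)),
            key=lambda jm: ready[jm[1]] + etc[jm[0]][jm[1]],
        )
        ready[m] += etc[j][m]
        assignment[j] = m
        unassigned.remove(j)
    return assignment, ready

etc = [[3.0, 5.0],    # job 0 on machines 0, 1
       [4.0, 1.0],    # job 1
       [6.0, 8.0]]    # job 2
assignment, ready = min_min(etc)
print(assignment, max(ready))
```

Min-Min favors short jobs, which can starve long ones; counterpart heuristics such as Max-Min invert the selection to address exactly that trade-off, illustrating why no single algorithm wins across all grid environments.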

    Modeling Irregular Kernels of Task-based codes: Illustration with the Fast Multipole Method

    The significant increase in hardware complexity that has occurred in the last few years has led the high performance computing community to design many scientific libraries according to a task-based parallelization. Modeling the performance of the individual tasks (or kernels) they are composed of is crucial for facing challenges as diverse as performing accurate performance predictions, designing robust scheduling algorithms, and tuning the applications. Fine-grain modeling such as emulation and cycle-accurate simulation may lead to very accurate results. However, not only may their high cost be prohibitive, but they also require a high-fidelity model of the processor, which makes them hard to deploy in practice. In this paper, we propose an alternative coarse-grain, empirical methodology, oblivious to both the target code and the hardware architecture, which leads to robust and accurate timing predictions. We illustrate our approach with a task-based Fast Multipole Method (FMM) algorithm, whose kernels are highly irregular, implemented in the ScalFMM library on top of the StarPU task-based runtime system and the SimGrid simulator.
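A coarse-grain, empirical sketch of the general idea (not the paper's exact methodology): for each kernel, fit duration ≈ a + b·size from timing samples measured on the target machine, then use the fitted model to predict task durations inside a simulator. The kernel names echo FMM operators, but all sample values are invented.

```python
def fit_affine(sizes, times):
    """Least-squares fit of times ~ a + b * sizes; returns (a, b)."""
    n = len(sizes)
    ms, mt = sum(sizes) / n, sum(times) / n
    b = sum((s - ms) * (t - mt) for s, t in zip(sizes, times)) \
        / sum((s - ms) ** 2 for s in sizes)
    return mt - b * ms, b

def build_models(samples):
    """samples: dict kernel -> list of (size, measured_seconds) observations."""
    return {k: fit_affine(*zip(*obs)) for k, obs in samples.items()}

samples = {
    "P2P": [(100, 0.011), (200, 0.021), (400, 0.041)],   # near-linear kernel
    "M2L": [(100, 0.004), (200, 0.009), (400, 0.017)],
}
models = build_models(samples)
a, b = models["P2P"]
print(round(a + b * 300, 4))    # predicted P2P duration at size 300 → 0.031
```

Because the model only sees (parameter, measured time) pairs, it is oblivious to the kernel's internals and to the processor, which is the property the methodology relies on.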