
    Design and analysis of target-sensitive real-time systems

    A significant number of real-time control applications include computational activities whose results must be delivered at precise instants, rather than within a deadline. The performance of such systems degrades significantly if outputs are generated before or after the desired target time. This work presents a general methodology for designing and analyzing target-sensitive applications in which the timing parameters of the computational activities are tightly coupled with the physical characteristics of the system to be controlled. For the sake of clarity, the proposed methodology is illustrated through a sample case study that shows how to derive and verify real-time constraints from the mission requirements. Software implementation issues involved in mapping the computational activities into tasks running on a real-time kernel are also discussed, to identify the kernel mechanisms needed to enforce timing constraints and analyze the feasibility of the application. Finally, a set of experiments is presented to validate the proposed methodology.
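    As a rough user-space sketch of the target-sensitive idea (deliver at an instant, not by a deadline), the Python fragment below waits for an absolute instant, fires the output, and reports how early or late it actually was. The 1 ms spin threshold is an arbitrary assumption; a real system would rely on the kernel mechanisms the paper discusses.

        import time

        def run_at_target(target, action):
            """Wait until the absolute instant `target` (on the
            time.monotonic clock), invoke `action`, and return the signed
            timing error in seconds (negative = early, positive = late)."""
            while True:
                remaining = target - time.monotonic()
                if remaining <= 0:
                    break
                if remaining > 0.001:
                    time.sleep(remaining - 0.001)  # coarse sleep
                # otherwise: busy-wait the final millisecond to cut jitter
            action()
            return time.monotonic() - target

        # Fire an output 50 ms from now and report the achieved error.
        t0 = time.monotonic()
        err = run_at_target(t0 + 0.050, lambda: print("output delivered"))
        print(f"timing error: {err * 1e3:+.3f} ms")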

    Utility-Aware Scheduling of Stochastic Real-Time Systems

    Time utility functions offer a reasonably general way to describe the complex timing constraints of real-time and cyber-physical systems. However, utility-aware scheduling policy design is an open research problem. In particular, scheduling policies that optimize expected utility accrual are needed for real-time and cyber-physical domains. This dissertation addresses the problem of utility-aware scheduling for systems with periodic real-time task sets and stochastic non-preemptive execution intervals. We model these systems as Markov Decision Processes. This model provides an evaluation framework by which different scheduling policies can be compared. By solving the Markov Decision Process we can derive value-optimal scheduling policies for moderate-sized problems. However, the time and memory complexity of computing and storing value-optimal scheduling policies also necessitates the exploration of more scalable solutions. We consider heuristic schedulers, including a generalization we have developed of the existing Utility Accrual Packet Scheduling Algorithm. We compare several heuristics under soft and hard real-time conditions, different load conditions, and different classes of time utility functions. Based on these evaluations we present guidelines for which heuristics are best suited to particular scheduling criteria. Finally, we address the memory complexity of value-optimal scheduling and examine trade-offs between optimality and memory complexity. We show that it is possible to derive good low-complexity scheduling decision functions based on a synthesis of heuristics and reduced-memory approximations of the value-optimal scheduling policy.
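    To make the evaluation framework concrete, here is a minimal textbook value-iteration sketch over a toy discounted MDP; the states, actions, transition probabilities and utilities are invented stand-ins, not the dissertation's task model.

        import numpy as np

        # Toy MDP standing in for "which task to dispatch next":
        # P[a, s, s'] are transition probabilities, U[a, s] the expected
        # utility accrued by taking action a in state s (all made up).
        P = np.array([
            [[0.9, 0.1, 0.0],
             [0.0, 0.6, 0.4],
             [0.2, 0.0, 0.8]],
            [[0.5, 0.5, 0.0],
             [0.3, 0.3, 0.4],
             [0.0, 0.1, 0.9]],
        ])
        U = np.array([[1.0, 0.0, 2.0],
                      [0.5, 1.5, 0.0]])

        def value_iteration(P, U, gamma=0.95, eps=1e-8):
            """Iterate the Bellman optimality update until convergence;
            return state values and the greedy (value-optimal) policy."""
            V = np.zeros(P.shape[1])
            while True:
                Q = U + gamma * (P @ V)      # Q[a, s]
                V_new = Q.max(axis=0)
                if np.max(np.abs(V_new - V)) < eps:
                    return V_new, Q.argmax(axis=0)
                V = V_new

        V, policy = value_iteration(P, U)
        print("state values:", V, "greedy policy:", policy)

    Storing such a policy is trivial here, but the table grows with the state space, which is exactly the memory-complexity trade-off the dissertation examines.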

    Space programs summary no. 37-60, volume 2, for the period 1 September to 31 October 1969. The Deep Space Network

    Telemetry and ground support equipment design and developments for the Deep Space Network.

    Manoeuvre Planning Architecture for the Optimisation of Spacecraft Formation Flying Reconfiguration Manoeuvres

    Formation flying of multiple spacecraft collaborating toward the same goal is fast becoming a reality for space mission designers. Often the missions require the spacecraft to perform translational manoeuvres relative to each other to achieve some mission objective. These manoeuvres need to be planned to ensure the safety of the spacecraft in the formation and to optimise fuel management throughout the fleet. In addition to these requirements, it is desirable for this manoeuvre planning to occur autonomously within the fleet, to reduce operations cost and provide greater planning flexibility for the mission. One such mission that would benefit from this type of manoeuvre planning is the European Space Agency's DARWIN mission, designed to search for extra-solar Earth-like planets using separated-spacecraft interferometry. This thesis presents a Manoeuvre Planning Architecture for the DARWIN mission. The design of the Architecture involves identifying and conceptualising all factors affecting the execution of formation flying manoeuvres at the Sun-Earth libration point L2. A systematic trade-off analysis of these factors results in a modularised Manoeuvre Planning Architecture for the optimisation of formation flying reconfiguration manoeuvres. The Architecture provides a means for DARWIN to autonomously plan manoeuvres during the reconfiguration mode of the mission. It consists of a Science Operations Module, a Position Assignment Module, a Trajectory Design Module and a Station-keeping Module, which together represent a multiple multi-variable optimisation approach to the formation flying manoeuvre planning problem. Manoeuvres are planned to incorporate target selection for maximum science returns, collision avoidance, thruster plume avoidance, manoeuvre duration minimisation and manoeuvre fuel management (including fuel consumption minimisation and formation fuel balancing). With many customisable variables, the Architecture can be tuned to give the best performance throughout the mission duration. The implementation of the Architecture highlights the importance of planning formation flying reconfiguration manoeuvres: compared with a benchmark manoeuvre planning strategy, the Architecture demonstrates a performance increase of 27% for manoeuvre scheduling and fuel savings of 40% over a fifty-target observation tour. The Architecture designed in this thesis contributes to the field of spacecraft formation flying analysis on various levels. First, the manoeuvre planning is designed at the mission level, with considerations for mission operations and station-keeping included in the design. Secondly, the requirements analysis and implementation of the Science Operations Module offer a unique insight into the complexity of observation scheduling for exo-planet analysis missions and present a robust method for autonomously optimising that scheduling. Thirdly, in-depth analyses are performed on DARWIN-based modifications of existing manoeuvre optimisation strategies, identifying their strengths, their weaknesses and ways to improve them. Finally, though not implemented in this thesis, the design of a Station-keeping Module is provided to add station-keeping optimisation functionality to the Architecture.
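    One way a position-assignment step like the one above might trade total fuel against formation fuel balancing is sketched below. The cost matrix, the balance weight and the brute-force search are all assumptions for illustration, not the thesis's method; brute force is only viable for small fleets like DARWIN's.

        from itertools import permutations
        import numpy as np

        # Hypothetical costs: cost[i, j] is an estimated fuel cost (e.g.
        # delta-v, in m/s) for spacecraft i to reach formation slot j.
        cost = np.array([[4.0, 2.5, 3.0],
                         [3.5, 1.0, 2.0],
                         [5.0, 2.0, 1.5]])

        def assign_slots(cost, balance_weight=0.5):
            """Search all slot assignments, trading total fuel against
            fuel balancing (the spread of consumption across the fleet)."""
            best_score, best_perm = float("inf"), None
            rows = np.arange(cost.shape[0])
            for perm in permutations(range(cost.shape[1])):
                fuel = cost[rows, list(perm)]
                score = fuel.sum() + balance_weight * (fuel.max() - fuel.min())
                if score < best_score:
                    best_score, best_perm = score, perm
            return best_perm, best_score

        perm, score = assign_slots(cost)
        print(f"slot assignment: {perm}, combined score: {score:.2f}")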

    Providing robustness to workflow schedulers sensitive to uncertainties in the available bandwidth

    Advisors: Edmundo Roberto Mauro Madeira and Luiz Fernando Bittencourt. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação. To derive efficient schedules for scientific applications modelled as workflows in hybrid clouds, schedulers need, besides a description of the applications' computational demands, information on the computing power of the available resources, especially the available bandwidth. However, the imprecision of measurement and monitoring tools means that the bandwidth information supplied to schedulers differs from the real values that should be considered to obtain near-optimal schedules; such tools also cannot anticipate the bandwidth that will actually be available to the application at execution time. Schedulers specially designed for hybrid clouds simply ignore these inaccuracies and end up producing misleading, low-performance schedules, which makes them sensitive to the uncertain information. This thesis introduces a proactive procedure that provides a certain level of robustness to schedules derived from schedulers not designed to cope with the uncertainties stemming from imprecise network measurements. To turn uncertainty-sensitive schedules into schedules robust to these imprecisions, the procedure refines (deflates) the bandwidth estimates before they are used by the non-robust scheduler. With these refined estimates of the available bandwidth, schedulers that were initially sensitive to the uncertainties start producing schedules with a certain level of robustness to them. The effectiveness and efficiency of the proposed procedure are evaluated through simulation, comparing the schedules generated by schedulers augmented with the procedure against those produced by the same schedulers without it. The simulation results show that the procedure is able to provide robustness to bandwidth uncertainty for schedules derived from non-robust schedulers. Additionally, this thesis proposes a scheduler for scientific applications composed of sets of workflows grouped into ensembles. The novelty of this scheduler is its flexibility: it allows different categories of objective functions to be used. Although this flexibility is novel in the state of the art, the scheduler is still sensitive to bandwidth imprecision; the procedure, however, proved capable of providing it with robustness to these uncertainties. This thesis shows that the proposed procedure increased the effectiveness and efficiency of non-robust workflow schedulers designed for hybrid clouds, since they started producing schedules with a certain level of robustness in the presence of uncertain estimates of the available bandwidth. The procedure is therefore an important tool for improving schedulers that are sensitive to uncertain bandwidth estimates and that are designed for computational environments where these values are imprecise by nature, thus improving the execution of scientific applications in hybrid clouds. Doctorate in Computer Science. Grant 2012/02778-6, FAPESP.
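    A minimal sketch of the deflation step described above, assuming a fixed deflation factor and a toy transfer-time model (the thesis derives its refinement differently; names and numbers here are illustrative):

        def deflate(bandwidth_estimates, factor=0.8):
            """Pessimistically refine measured available bandwidth (Mb/s)
            before scheduling; factor < 1 hedges against overestimation."""
            return {link: bw * factor for link, bw in bandwidth_estimates.items()}

        def transfer_time(data_mb, bw_mbps):
            """Seconds to move data_mb megabytes over a bw_mbps link."""
            return 8.0 * data_mb / bw_mbps

        # Toy example: a measured 100 Mb/s inter-cloud link may not hold.
        measured = {"vm1->vm2": 100.0}
        planned = deflate(measured, factor=0.8)
        # A non-robust scheduler now plans with 80 Mb/s instead of 100 Mb/s,
        # so its schedule still holds if the bandwidth dips by up to ~20%.
        print(transfer_time(500, measured["vm1->vm2"]),   # 40 s, optimistic
              transfer_time(500, planned["vm1->vm2"]))    # 50 s, planned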

    Towards integrating mobile devices into dew computing: A model for hour-wise prediction of energy availability

    With self-provisioning of resources as its premise, dew computing aims at providing computing services while minimizing dependency on the existing internetwork back-haul. Mobile devices have huge potential to contribute to this emerging paradigm, not only due to their proximity to the end user, ever-growing computing/storage features and pervasiveness, but also due to their capability to render services for several hours, even days, without being plugged into the electricity grid. Nonetheless, misusing the energy of their batteries can discourage owners from offering their devices as resource providers in dew computing environments. Arguably, accurate estimations of remaining battery would help to take better advantage of a device's computing capabilities. In this paper, we propose a model to estimate mobile device battery availability by inspecting traces of real device owners' activity and relevant device state variables. The model includes a feature extraction approach to obtain representative features/variables, and a prediction approach based on regression models and machine learning classifiers. On average, the accuracy of our approach, measured with the mean squared error metric, surpasses that obtained by related work. Prediction experiments at five hours ahead are performed over activity logs of 23 mobile users across several months.
    Authors: Longo, Mathias (University of Southern California, United States); Hirsch Jofré, Matías Eberardo; Mateos Diaz, Cristian Maximiliano; Zunino Suarez, Alejandro Octavio (CONICET, Instituto Superior de Ingeniería del Software, Universidad Nacional del Centro de la Provincia de Buenos Aires, Argentina).
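    A hedged sketch of the prediction side, using synthetic stand-ins for the trace-derived features (the paper's actual feature set, horizon handling and model selection differ):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.metrics import mean_squared_error
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)

        # Synthetic stand-ins for trace-derived features: hour of day,
        # battery level (%), screen-on ratio, charger events per hour.
        X = rng.random((1000, 4)) * [24, 100, 1, 5]
        # Synthetic target: battery level five hours ahead, driven mostly
        # by the current level and screen usage, plus noise.
        y = np.clip(X[:, 1] - 5 * (2 + 10 * X[:, 2]) + rng.normal(0, 3, 1000),
                    0, 100)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = RandomForestRegressor(n_estimators=100, random_state=0)
        model.fit(X_tr, y_tr)
        print("test MSE:", mean_squared_error(y_te, model.predict(X_te)))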

    Aerospace medicine and biology: A continuing bibliography with indexes (supplement 335)

    This bibliography lists 143 reports, articles and other documents introduced into the NASA Scientific and Technical Information System during March 1990. Subject coverage includes: aerospace medicine and psychology, life support systems and controlled environments, safety equipment, exobiology and extraterrestrial life, and flight crew behavior and performance.

    A telescope control and scheduling system for the Gravitational-wave Optical Transient Observer

    The detection of the first electromagnetic counterpart to a gravitational-wave signal in August 2017 marked the start of a new era of multi-messenger astrophysics. An unprecedented number of telescopes around the world were involved in hunting for the source of the signal, and although more gravitational-wave signals have since been detected, no further electromagnetic counterparts have been found. In this thesis, I present my work to help build a telescope dedicated to the hunt for these elusive sources: the Gravitational-wave Optical Transient Observer (GOTO). I detail the creation of the GOTO Telescope Control System, G-TeCS, which includes the software required to control multiple wide-field telescopes on a single robotic mount. G-TeCS also includes software that enables the telescope to complete a sky survey and transient-alert follow-up observations completely autonomously, whilst monitoring the weather conditions and automatically fixing any hardware issues that arise. I go on to describe the routines used to determine target priorities, as well as how the all-sky survey grid is defined, how gravitational-wave and other transient alerts are received and processed, and how the optimum follow-up strategies for these events were determined. The first GOTO telescope, situated on La Palma in the Canary Islands, saw first light in June 2017. I detail the work I carried out on site to help commission the prototype, and how the control software was developed during the commissioning phase. I also analyse the GOTO CCD cameras and optics, building a complete theoretical model of the system to confirm the performance of the prototype. Finally, I describe the results of simulations I carried out predicting the future of the GOTO project, with multiple robotic telescopes on La Palma and in Australia, and how the G-TeCS software might be modified to operate these telescopes as a single, global observatory.
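    The target-prioritisation idea lends itself to a small sketch. This is not G-TeCS's actual ranking code; the tiles, weights and minimum-altitude cut below are invented for illustration.

        import math

        # Invented survey tiles: gravitational-wave skymap probability,
        # current altitude, and hours since the tile was last observed.
        tiles = [
            {"name": "T0001", "prob": 0.12, "alt_deg": 55, "last_obs_h": 30},
            {"name": "T0002", "prob": 0.03, "alt_deg": 70, "last_obs_h": 2},
            {"name": "T0003", "prob": 0.20, "alt_deg": 25, "last_obs_h": 48},
        ]

        def airmass(alt_deg):
            return 1.0 / math.sin(math.radians(alt_deg))  # plane-parallel

        def score(tile, min_alt=30):
            """Favour high skymap probability, low airmass and stale
            coverage; reject tiles below the altitude limit."""
            if tile["alt_deg"] < min_alt:
                return -math.inf  # not currently observable
            staleness = 1 + tile["last_obs_h"] / 24
            return tile["prob"] / airmass(tile["alt_deg"]) * staleness

        queue = sorted(tiles, key=score, reverse=True)
        print([t["name"] for t in queue])  # ['T0001', 'T0002', 'T0003']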

    The Universe at Extreme Scale: Multi-Petaflop Sky Simulation on the BG/Q

    Remarkable observational advances have established a compelling, cross-validated model of the Universe. Yet two key pillars of this model -- dark matter and dark energy -- remain mysterious. Sky surveys that map billions of galaxies to explore the 'Dark Universe' demand a corresponding extreme-scale simulation capability; the HACC (Hardware/Hybrid Accelerated Cosmology Code) framework has been designed to deliver this level of performance now, and into the future. With its novel algorithmic structure, HACC allows flexible tuning across diverse architectures, including accelerated and multi-core systems. On the IBM BG/Q, HACC attains unprecedented scalable performance -- currently 13.94 PFlops at 69.2% of peak and 90% parallel efficiency on 1,572,864 cores with an equal number of MPI ranks, and a concurrency of 6.3 million. This level of performance was achieved at extreme problem sizes, including a benchmark run with more than 3.6 trillion particles, significantly larger than any cosmological simulation yet performed.
    Comment: 11 pages, 11 figures; final version of paper for talk presented at SC12.
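    A quick arithmetic check of the quoted figures, assuming the BG/Q's nominal 12.8 GFlops per core (1.6 GHz with a 4-wide double-precision FMA unit, i.e. 8 flops per cycle):

        cores = 1_572_864
        peak_per_core = 1.6e9 * 8      # 12.8 GFlops per BG/Q core
        peak = cores * peak_per_core   # ~20.13 PFlops machine peak
        sustained = 13.94e15           # quoted sustained performance
        print(f"fraction of peak: {sustained / peak:.1%}")  # -> 69.2%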