
    Civil aircraft advanced avionics architectures - an insight into SARAS avionics, present and future perspective

    Traditionally, the avionics architectures implemented have been federated in nature, meaning that each avionics function has its own independent, dedicated fault-tolerant computing resources. The federated architecture has the great advantage of inherent fault containment, but it also carries the risk of heavy resource usage, resulting in increased weight, wiring looms, cost and maintenance. With the rapid advancement of computer and software technologies, the aviation industry is gradually moving towards Integrated Modular Avionics (IMA) for civil transport aircraft, in which multiple avionics functions can be hosted on each hardware platform. Integrated Modular Avionics is the key avionics architecture concept for next-generation aircraft. The SARAS avionics suite is purely federated, with a largely glass-cockpit architecture complying with FAR 25. Avionics activities from inception to execution are governed by regulations and procedures under the review of the Directorate General of Civil Aviation (DGCA). Every phase of avionics activity involves its own technical work to perfect the system. In addition, flight data handling, monitoring and analysis is a thrust area in the civil aviation industry, contributing to the safety and reliability of both the aircraft and the personnel involved. NAL has been active in this area for more than two decades and continues to excel in these technologies.
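
    To make the contrast concrete, here is a minimal, hypothetical C sketch (not from the SARAS programme; all names and numbers are invented for illustration) of the two allocation styles: federated units binding one function to one dedicated computer, versus an IMA module hosting several functions as partitions with fixed time and memory budgets, in the ARINC 653 style.

        /* Illustrative sketch only: federated vs. IMA resource allocation.
         * Function, LRU and platform names are invented for this example. */
        #include <stdio.h>

        /* Federated: each avionics function owns a dedicated computing unit (LRU). */
        typedef struct { const char *function; const char *dedicated_lru; } FederatedUnit;

        /* IMA: several functions share one hardware platform, separated into
         * partitions with fixed time/memory budgets (ARINC 653-style). */
        typedef struct { const char *function; int time_budget_ms; int mem_budget_kb; } Partition;
        typedef struct { const char *platform; Partition partitions[4]; int n; } ImaModule;

        int main(void) {
            FederatedUnit federated[] = {
                { "Flight Management",  "FMS LRU" },
                { "Display Processing", "Display LRU" },
            };
            ImaModule ima = { "Core Processing Module",
                { { "Flight Management",  20, 4096 },
                  { "Display Processing", 10, 2048 } }, 2 };

            printf("Federated: %s runs on its own %s\n",
                   federated[0].function, federated[0].dedicated_lru);
            printf("IMA: %s hosts %d functions on shared hardware\n", ima.platform, ima.n);
            return 0;
        }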

    Instrumenting self-modifying code

    Adding small code snippets at key points in existing code fragments is called instrumentation. It is an established technique for debugging certain otherwise hard-to-solve faults, such as memory management issues and data races. Dynamic instrumentation can already be used to analyse code which is loaded or even generated at run time. With the advent of environments such as the Java Virtual Machine with optimizing Just-In-Time compilers, a new obstacle arises: self-modifying code. In order to instrument this kind of code correctly, one must be able to detect modifications and adapt the instrumentation code accordingly, preferably without incurring a high speed penalty. In this paper we propose an innovative technique that uses the hardware page protection mechanism of modern processors to detect such modifications. We also show how an instrumentor can adapt the instrumented version depending on the kind of modifications, and we present an experimental evaluation of these techniques. Comment: In M. Ronsse, K. De Bosschere (eds), proceedings of the Fifth International Workshop on Automated Debugging (AADEBUG 2003), September 2003, Ghent. cs.SE/030902
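
    As a rough illustration of the page-protection idea (a minimal sketch under POSIX assumptions, not the paper's implementation), instrumented code pages can be write-protected so that any self-modification raises a fault; the handler re-enables writing and flags the page for re-instrumentation before it executes again.

        /* Minimal sketch: detect writes to already-instrumented code via
         * mprotect + SIGSEGV. A real instrumentor would also track which
         * pages belong to translated code and re-instrument them lazily. */
        #define _GNU_SOURCE
        #include <signal.h>
        #include <stdint.h>
        #include <sys/mman.h>
        #include <unistd.h>

        static volatile sig_atomic_t code_was_modified = 0;

        static void on_write_to_code(int sig, siginfo_t *info, void *ctx) {
            (void)sig; (void)ctx;
            long page = sysconf(_SC_PAGESIZE);
            uintptr_t base = (uintptr_t)info->si_addr & ~(uintptr_t)(page - 1);
            /* Let the faulting write complete; the instrumentor later sees the
             * flag and re-instruments the affected page before executing it. */
            mprotect((void *)base, (size_t)page, PROT_READ | PROT_WRITE | PROT_EXEC);
            code_was_modified = 1;
        }

        /* Call once after instrumenting a code region (start must be page-aligned
         * and len a multiple of the page size): remove write permission so any
         * self-modification triggers the handler above. */
        void watch_code_region(void *start, size_t len) {
            struct sigaction sa = {0};
            sa.sa_flags = SA_SIGINFO;
            sa.sa_sigaction = on_write_to_code;
            sigaction(SIGSEGV, &sa, NULL);
            mprotect(start, len, PROT_READ | PROT_EXEC);
        }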

    The 10-ft. space simulator at the Jet Propulsion Laboratory

    Pumping, vacuum, cryogenic, solar simulation, and supporting systems for the space simulator facility.

    Coz: Finding Code that Counts with Causal Profiling

    Improving performance is a central concern for software developers. To locate optimization opportunities, developers rely on software profilers. However, these profilers only report where programs spent their time: optimizing that code may have no impact on performance. Past profilers thus both waste developer time and make it difficult for them to uncover significant optimization opportunities. This paper introduces causal profiling. Unlike past profiling approaches, causal profiling indicates exactly where programmers should focus their optimization efforts, and quantifies their potential impact. Causal profiling works by running performance experiments during program execution. Each experiment calculates the impact of any potential optimization by virtually speeding up code: inserting pauses that slow down all other code running concurrently. The key insight is that this slowdown has the same relative effect as running that line faster, thus "virtually" speeding it up. We present Coz, a causal profiler, which we evaluate on a range of highly-tuned applications: Memcached, SQLite, and the PARSEC benchmark suite. Coz identifies previously unknown optimization opportunities that are both significant and targeted. Guided by Coz, we improve the performance of Memcached by 9%, SQLite by 25%, and accelerate six PARSEC applications by as much as 68%; in most cases, these optimizations involve modifying under 10 lines of code. Comment: Published at SOSP 2015 (Best Paper Award).
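
    The virtual speedup mechanism can be sketched roughly as follows (a simplified conceptual sketch in C, not the Coz implementation; Coz additionally credits each thread for delays it caused itself and handles blocking, sampling, and per-line selection, all omitted here).

        /* Conceptual sketch of "virtual speedup": each run of the selected code
         * charges a global delay, proportional to its runtime, that all *other*
         * threads must pay by pausing. SPEEDUP and names are illustrative. */
        #include <stdatomic.h>
        #include <stdint.h>
        #include <time.h>

        #define SPEEDUP 0.20  /* pretend the selected code runs 20% faster */

        static _Atomic int64_t global_delay_ns = 0;           /* delay owed by others */
        static _Thread_local int64_t local_delay_paid_ns = 0; /* delay already served */

        static int64_t now_ns(void) {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
        }

        /* Wrap the code selected for the experiment. */
        void run_selected(void (*fn)(void)) {
            int64_t start = now_ns();
            fn();
            int64_t elapsed = now_ns() - start;
            /* Slowing everyone else by SPEEDUP*elapsed has the same *relative*
             * effect as this code having run SPEEDUP*elapsed faster. */
            atomic_fetch_add(&global_delay_ns, (int64_t)(SPEEDUP * (double)elapsed));
        }

        /* Called periodically by every other thread (e.g., at sampling points). */
        void pay_pending_delay(void) {
            int64_t owed = atomic_load(&global_delay_ns) - local_delay_paid_ns;
            if (owed > 0) {
                struct timespec ts = { (time_t)(owed / 1000000000LL),
                                       (long)(owed % 1000000000LL) };
                nanosleep(&ts, NULL);
                local_delay_paid_ns += owed;
            }
        }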

    The PLC: a logical development

    Programmable Logic Controllers (PLCs) have been used to control industrial processes and equipment for over 40 years, having their first commercially recognised application in 1969. Since then there have been enormous changes in the design and application of PLCs, yet developments were evolutionary rather than radical. The flexibility of the PLC does not confine it to industrial use and it has been used for disparate non-industrial control applications. This article reviews the history, development and industrial applications of the PLC.

    Context-aware adaptation in DySCAS

    DySCAS is a dynamically self-configuring middleware for automotive control systems. The addition of autonomic, context-aware dynamic configuration to automotive control systems brings the potential for a wide range of benefits in terms of robustness, flexibility, upgradability, etc. However, automotive systems represent a particularly challenging domain for the deployment of autonomics concepts, having a combination of real-time performance constraints, severe resource limitations, safety-critical aspects and cost pressures. For these reasons, current systems are statically configured. This paper describes the dynamic run-time configuration aspects of DySCAS and focuses on the extent to which context-aware adaptation has been achieved in DySCAS, and the ways in which the various design and implementation challenges are met.
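
    Purely as an illustration of the kind of context-aware adaptation rule discussed (a hypothetical C sketch, not DySCAS code; the context fields, actions and thresholds are invented), a middleware policy might map run-time context to a reconfiguration action.

        /* Hypothetical context-to-action policy for illustration only. */
        #include <stdbool.h>
        #include <stdio.h>

        typedef struct { double cpu_load; bool node_degraded; } Context;
        typedef enum { RUN_LOCAL, MIGRATE_TO_BACKUP, REDUCE_QUALITY } Action;

        Action decide(const Context *ctx) {
            if (ctx->node_degraded)   return MIGRATE_TO_BACKUP;  /* robustness */
            if (ctx->cpu_load > 0.85) return REDUCE_QUALITY;     /* resource limits */
            return RUN_LOCAL;
        }

        int main(void) {
            Context ctx = { .cpu_load = 0.9, .node_degraded = false };
            printf("decision: %d\n", decide(&ctx));
            return 0;
        }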

    Adaptive Real Time Imaging Synthesis Telescopes

    The digital revolution is transforming astronomy from a data-starved to a data-submerged science. Instruments such as the Atacama Large Millimeter Array (ALMA), the Large Synoptic Survey Telescope (LSST), and the Square Kilometer Array (SKA) will measure their accumulated data in petabytes. The capacity to produce enormous volumes of data must be matched with the computing power to process that data and produce meaningful results. In addition to handling huge data rates, we need adaptive calibration and beamforming to handle atmospheric fluctuations and radio frequency interference, and to provide a user environment which makes the full power of large telescope arrays accessible to both expert and non-expert users. Delayed calibration and analysis limit the science which can be done. To make the best use of both telescope and human resources, we must reduce the burden of data reduction. Our instrumentation comprises a flexible correlator, beamformer and imager with digital signal processing closely coupled with a computing cluster. This instrumentation will be highly accessible to scientists, engineers, and students for research and development of real-time processing algorithms, and will tap into the pool of talented and innovative students and visiting scientists from engineering, computing, and astronomy backgrounds. Adaptive real-time imaging will transform radio astronomy by providing real-time feedback to observers. Calibration of the data is performed in close to real time using a model of the sky brightness distribution. The derived calibration parameters are fed back into the imagers and beamformers. The regions imaged are used to update and improve the a priori model, which becomes the final calibrated image by the time the observations are complete.
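
    A minimal sketch of the calibration feedback loop described above (illustrative C with invented names and a made-up smoothing factor, not the instrument's software): per-baseline gains are estimated by comparing measured visibilities to predictions from the current sky model, then divided out before imaging; the imaged result refines the model for the next cycle.

        /* One feedback step for a single baseline. ALPHA controls how quickly the
         * running gain estimate tracks changes (the value is an assumption). */
        #include <complex.h>
        #include <stdio.h>

        #define ALPHA 0.1

        double complex update_gain(double complex gain, double complex measured,
                                   double complex model_prediction) {
            /* measured ~= gain * model_prediction, so the instantaneous estimate
             * is measured / model_prediction; smooth it to reject noise. */
            double complex instant = measured / model_prediction;
            return (1.0 - ALPHA) * gain + ALPHA * instant;
        }

        double complex apply_calibration(double complex measured, double complex gain) {
            return measured / gain;   /* calibrated visibility fed to the imager */
        }

        int main(void) {
            double complex gain = 1.0 + 0.0 * I;
            double complex model = 2.0 + 1.0 * I;              /* from the sky model */
            double complex measured = (1.1 + 0.2 * I) * model; /* drifted instrument gain */
            gain = update_gain(gain, measured, model);
            double complex cal = apply_calibration(measured, gain);
            printf("calibrated: %.3f%+.3fi\n", creal(cal), cimag(cal));
            return 0;
        }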

    Stellar intensity interferometry over kilometer baselines: Laboratory simulation of observations with the Cherenkov Telescope Array

    A long-held astronomical vision is to realize diffraction-limited optical aperture synthesis over kilometer baselines. This will enable imaging of stellar surfaces and their environments, show their evolution over time, and reveal interactions of stellar winds and gas flows in binary star systems. An opportunity is now opening up with the large telescope arrays primarily erected for measuring Cherenkov light in air induced by gamma rays. With suitable software, such telescopes could be electronically connected and used also for intensity interferometry. With no optical connection between the telescopes, the error budget is set by the electronic time resolution of a few nanoseconds. Corresponding light-travel distances are on the order of one meter, making the method practically insensitive to atmospheric turbulence or optical imperfections, permitting both very long baselines and observing at short optical wavelengths. Theoretical modeling has shown how stellar surface images can be retrieved from such observations, and here we report on experimental simulations. In an optical laboratory, artificial stars (single and double, round and elliptic) are observed by an array of telescopes. Using high-speed photon-counting solid-state detectors and real-time electronics, intensity fluctuations are cross-correlated across up to a hundred baselines between pairs of telescopes, producing maps of the second-order spatial coherence across the interferometric Fourier-transform plane. These experiments serve to verify the concepts and to optimize the instrumentation and observing procedures for future observations with (in particular) CTA, the Cherenkov Telescope Array, aiming at order-of-magnitude improvements of the angular resolution in optical astronomy. Comment: 18 pages, 11 figures; Presented at the SPIE conference on Astronomical Telescopes + Instrumentation in Montreal, Quebec, Canada, June 2014. To appear in SPIE Proc. 9146, Optical and Infrared Interferometry IV (J. K. Rajagopal, M. J. Creech-Eakman, F. Malbet, eds.), 2014.
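
    The quantity mapped across the Fourier plane is the normalized second-order coherence for each baseline; below is a minimal sketch (illustrative C, not the experiment's code) of estimating it at zero time lag from two synchronized intensity streams.

        /* Estimate g2(0) = <I1*I2> / (<I1><I2>) for one baseline from two
         * synchronized intensity (photon-count) streams; values above 1
         * indicate correlated intensity fluctuations. */
        #include <stddef.h>
        #include <stdio.h>

        double g2_zero_lag(const double *i1, const double *i2, size_t n) {
            double sum1 = 0.0, sum2 = 0.0, sum12 = 0.0;
            for (size_t k = 0; k < n; ++k) {
                sum1  += i1[k];
                sum2  += i2[k];
                sum12 += i1[k] * i2[k];
            }
            double mean1 = sum1 / n, mean2 = sum2 / n, mean12 = sum12 / n;
            return mean12 / (mean1 * mean2);
        }

        int main(void) {
            /* Toy data: perfectly correlated intensity fluctuations. */
            double i1[] = {1.0, 2.0, 1.0, 2.0};
            double i2[] = {1.0, 2.0, 1.0, 2.0};
            printf("g2(0) = %.3f\n", g2_zero_lag(i1, i2, 4));  /* prints 1.111 */
            return 0;
        }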