31 research outputs found

    Fault tolerant design implementation on radiation hardened by design SRAM-Based FPGAs

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2013. This electronic version was submitted and approved by the author's academic department as part of an electronic thesis pilot project. The certified thesis is available in the Institute Archives and Special Collections. "June 2013." Cataloged from the department-submitted PDF version of the thesis. Includes bibliographical references (p. 197-204). SRAM-based FPGAs are highly attractive for space applications due to their in-flight reconfigurability, decreased development time and cost, and increased design and testing flexibility. The Xilinx Virtex-5QV is the first commercially available Radiation Hardened By Design (RHBD) SRAM-based FPGA; however, not all of its internal components are hardened against radiation-induced errors. This thesis examines and quantifies the additional considerations and techniques designers should employ with an RHBD SRAM-based FPGA in a space-based processing system to achieve high operational reliability. Additionally, this work presents the application of some of these techniques to the embedded avionics design of the REXIS imaging payload on the OSIRIS-REx asteroid sample return mission. By Frank Hall Schmidt, Jr. S.M.
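
    The abstract above refers to design techniques for tolerating radiation-induced upsets in the non-hardened parts of an SRAM-based FPGA design. As a minimal sketch (not taken from the thesis), the Python below illustrates two mitigations commonly used in this area: bitwise majority voting across triplicated copies of a value (triple modular redundancy) and readback scrubbing of configuration frames against a golden image. All names and data are illustrative.

    ```python
    # Illustrative sketch (not from the thesis): bitwise majority voting over three
    # redundant copies of a data word, plus a simple readback-scrubbing loop that
    # repairs configuration frames against a golden reference image.

    def majority_vote(a: int, b: int, c: int) -> int:
        """Return the bitwise majority of three redundant copies."""
        return (a & b) | (a & c) | (b & c)

    def scrub(readback_frames, golden_frames):
        """Compare readback frames to a golden image; return repaired frames
        together with the indices of frames that contained upsets."""
        repaired, upset_indices = [], []
        for i, (frame, golden) in enumerate(zip(readback_frames, golden_frames)):
            if frame != golden:
                upset_indices.append(i)
                frame = golden          # rewrite the corrupted frame
            repaired.append(frame)
        return repaired, upset_indices

    if __name__ == "__main__":
        # One copy corrupted by a single-event upset in bit 3.
        print(hex(majority_vote(0xDEAD, 0xDEAD ^ 0x8, 0xDEAD)))   # 0xdead
        frames, golden = [0b1010, 0b0111, 0b1100], [0b1010, 0b0101, 0b1100]
        print(scrub(frames, golden))   # ([10, 5, 12], [1])
    ```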

    Integration and design for the ALICE ITS readout chain

    ALICE and its Inner Tracking System (ITS) detector in the LHC at CERN will undergo a major upgrade during its second long shutdown, taking place in 2019 and 2020. New silicon sensors called ALPIDE will be utilized, which requires a new readout electronics system due to different interfaces and higher demands regarding bandwidth and latency. The readout chain consists of staves containing the sensors, a layer of readout unit boards gathering data from and controlling the sensor staves, and a layer of common readout units multiplexing and compressing data from the readout units before forwarding it to the O² data center system. As part of the ALICE collaboration, the University of Bergen is in charge of the development of several components in this readout chain. All development sites for the readout electronics should have the readout chain in place so that design and integration tasks can be done locally. As part of the work in this thesis, the ITS readout chain is integrated and tested. The working readout chain is then used to develop various control communication interfaces along the chain, such as an I²C interface for an auxiliary FPGA on the readout unit and a high-speed interface for uploading data to the flash memory on the readout unit. Master's thesis in physics. MAMN-PHYS. PHYS39
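
    The abstract mentions developing slow-control interfaces such as an I²C link to an auxiliary FPGA on the readout unit. The sketch below is purely hypothetical and not the actual ALICE ITS software: it wraps register reads and writes over an abstract I²C-style transport supplied by the caller, with the device address, register width, and byte layout chosen only for illustration.

    ```python
    # Hypothetical sketch of a slow-control register wrapper over an I2C-style bus.
    # Device address, register widths, and the transport callables are assumptions
    # for illustration; they do not reflect the actual ALICE ITS readout software.

    from typing import Callable

    class I2CRegisterClient:
        def __init__(self,
                     write_bytes: Callable[[int, bytes], None],
                     read_bytes: Callable[[int, int], bytes],
                     device_addr: int):
            self._write = write_bytes    # (device_addr, payload) -> None
            self._read = read_bytes      # (device_addr, nbytes)  -> bytes
            self._addr = device_addr

        def write_register(self, reg: int, value: int) -> None:
            # 1-byte register address followed by a 16-bit big-endian value.
            self._write(self._addr, bytes([reg & 0xFF]) + value.to_bytes(2, "big"))

        def read_register(self, reg: int) -> int:
            # Set the register pointer, then read back two bytes.
            self._write(self._addr, bytes([reg & 0xFF]))
            return int.from_bytes(self._read(self._addr, 2), "big")

    # Example with an in-memory fake bus standing in for real I2C hardware.
    if __name__ == "__main__":
        regs, pointer = {}, [0]
        def fake_write(addr, payload):
            pointer[0] = payload[0]
            if len(payload) > 1:
                regs[payload[0]] = int.from_bytes(payload[1:], "big")
        def fake_read(addr, n):
            return regs.get(pointer[0], 0).to_bytes(n, "big")

        client = I2CRegisterClient(fake_write, fake_read, device_addr=0x40)
        client.write_register(0x10, 0xBEEF)
        print(hex(client.read_register(0x10)))   # 0xbeef
    ```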

    Next Generation Data Processing for Future X-ray Observatories

    During the last two decades, space-based X-ray observatories have been used to study the most energetic sources in the Universe and to investigate the physics and characteristics of matter under extreme conditions. Detectors sensitive to X-ray photons have been significantly improved: their area has increased, their energy resolution has reached the Fano limit, and the detector deadtime has been reduced to nanoseconds. These technological advancements have enabled X-ray observations with very high time resolution. Concepts of instruments capable of producing spectra or single events with high time resolution are the High Time Resolution Spectrometer (HTRS) aboard the International X-Ray Observatory (IXO) and the Large Area Detector (LAD) aboard the Large Observatory for X-ray Timing (LOFT). While the detectors of these instruments can detect every single photon coming from an X-ray source, the subsequent data processing electronics have to deal with unprecedented bandwidths. Depending on the brightness of the source and the available telemetry bandwidth from the satellite to the ground station, the need for data compression and even reduction arises. This talk presents the work that was done in the context of the development of two instruments. One part is the development of the Data Processing Unit (DPU) for the HTRS aboard IXO, which includes a LEON3 microprocessor model in a Spartan-3 FPGA used for data compression. The second part is the definition of several components for the data handling chain of the LAD instrument aboard LOFT, including a specialized interface and, in particular, the design and construction of a prototype of the Panel Back-End Electronics (PBEE).
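
    Since the abstract notes that the DPU compresses high-rate photon event data before telemetry, here is a minimal sketch of one generic approach, under the assumption (not stated in the abstract) that events are time-tagged: delta-encode the arrival times and pack the deltas with a variable-length integer code. This only illustrates the idea, not the HTRS or LAD flight algorithm.

    ```python
    # Minimal sketch (not the flight algorithm): delta-encode photon arrival times,
    # then pack the deltas with a simple variable-length (LEB128-style) code, which
    # shrinks the data stream when events arrive closely spaced in time.

    def delta_encode(timestamps):
        prev, deltas = 0, []
        for t in timestamps:
            deltas.append(t - prev)
            prev = t
        return deltas

    def varint_pack(values) -> bytes:
        out = bytearray()
        for v in values:
            while True:
                byte = v & 0x7F
                v >>= 7
                out.append(byte | (0x80 if v else 0x00))
                if not v:
                    break
        return bytes(out)

    if __name__ == "__main__":
        times = [1000, 1003, 1011, 1012, 1100]            # clock ticks
        packed = varint_pack(delta_encode(times))
        print(len(packed), "bytes instead of", len(times) * 4)   # 6 bytes instead of 20
    ```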

    Program Annual Technology Report: Physics of the Cosmos Program Office

    From ancient times, humans have looked up at the night sky and wondered: Are we alone? How did the universe come to be? How does the universe work? PCOS focuses on that last question. Scientists investigating this broad theme use the universe as their laboratory, investigating its fundamental laws and properties. They test Einstein's General Theory of Relativity to see if our current understanding of space-time is borne out by observations. They examine the behavior of the most extreme environments (supermassive black holes, active galactic nuclei, and others) and the farthest reaches of the universe, to expand our understanding. With instruments sensitive across the spectrum, from radio, through infrared (IR), visible light, and ultraviolet (UV), to X rays and gamma rays, as well as gravitational waves (GWs), they peer across billions of light-years, observing echoes of events that occurred instants after the Big Bang. Last year, the LISA Pathfinder (LPF) mission exceeded expectations in proving the maturity of technologies needed for the Laser Interferometer Space Antenna (LISA) mission, and the Laser Interferometer Gravitational-Wave Observatory (LIGO) recorded the first direct measurements of long-theorized GWs. Another surprising recent discovery is that the universe is expanding at an ever-accelerating rate, the first hint of so-called dark energy, estimated to account for 75% of mass-energy in the universe. Dark matter, so called because we can only observe its effects on regular matter, is thought to account for another 20%, leaving only 5% for regular matter and energy. Scientists now also search for special polarization in the cosmic microwave background to support the notion that in the split-second after the Big Bang, the universe inflated faster than the speed of light! The most exciting aspect of this grand enterprise today is the extraordinary rate at which we can harness technologies to enable these key discoveries

    Risk-Informed Safety Assurance and Probabilistic Assessment of Mission-Critical Software-Intensive Systems

    This report validates and documents the detailed features and practical application of the framework for software-intensive digital systems risk assessment and risk-informed safety assurance presented in the NASA PRA Procedures Guide for Managers and Practitioners. This framework, called herein the "Context-based Software Risk Model" (CSRM), enables the assessment of the contribution of software and software-intensive digital systems to overall system risk, in a manner which is entirely compatible and integrated with the format of a "standard" Probabilistic Risk Assessment (PRA), as currently documented and applied for NASA missions and applications. The CSRM also provides a risk-informed path and criteria for conducting organized and systematic digital system and software testing so that, within this risk-informed paradigm, the achievement of a quantitatively defined level of safety and mission success assurance may be targeted and demonstrated. The framework is based on the concept of context-dependent software risk scenarios and on the modeling of such scenarios via the use of traditional PRA techniques - i.e., event trees and fault trees - in combination with more advanced modeling devices such as the Dynamic Flowgraph Methodology (DFM) or other dynamic logic-modeling representations. The scenarios can be synthesized and quantified in a conditional logic and probabilistic formulation. The application of the CSRM method documented in this report refers to the MiniAERCam system designed and developed by the NASA Johnson Space Center
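
    The abstract describes quantifying context-dependent risk scenarios with event-tree and fault-tree logic in a conditional probabilistic formulation. The short sketch below illustrates that kind of calculation in spirit only: the gate formulas assume independent basic events, and every probability is a made-up placeholder rather than a value from the CSRM report.

    ```python
    # Illustrative sketch only: quantifying a context-dependent scenario as an
    # event-tree branch whose pivotal event is developed with a small fault tree.
    # All probabilities are placeholders, and basic events are assumed independent;
    # the CSRM itself combines event trees/fault trees with DFM-style models.

    def and_gate(probs):
        """Probability that all independent basic events occur."""
        p = 1.0
        for q in probs:
            p *= q
        return p

    def or_gate(probs):
        """Probability that at least one independent basic event occurs."""
        p_none = 1.0
        for q in probs:
            p_none *= (1.0 - q)
        return 1.0 - p_none

    # Context probability * conditional software failure * failed mitigation.
    p_context = 0.05                              # system enters the demanding context
    p_sw_fail = or_gate([1e-3, 5e-4])             # software fails given that context
    p_mitigation_fails = and_gate([0.1, 0.2])     # both backup barriers fail

    p_scenario = p_context * p_sw_fail * p_mitigation_fails
    print(f"end-state probability ~ {p_scenario:.2e}")
    ```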

    Physics of the Cosmos (PCOS) Program Technology Development 2018

    We present a final report on our program to raise the Technology Readiness Level (TRL) of enhanced charge-coupled device (CCD) detectors capable of meeting the requirements of X-ray grating spectrometers (XGS) and wide-field X-ray imaging instruments for small, medium, and large missions. Because they are made of silicon, all X-ray CCDs require blocking filters to prevent corruption of the X-ray signal by out-of-band, mainly optical and near-infrared (near-IR), radiation. Our primary objective is to demonstrate technology that can replace the fragile, extremely thin, free-standing blocking filter that has been standard practice with a much more robust filter deposited directly on the detector surface. High-performance, back-illuminated CCDs have flown with free-standing filters (e.g., one of our detectors on Suzaku), and other relatively low-performance CCDs with directly deposited filters have flown (e.g., on the X-ray Multi-Mirror Mission-Newton, XMM-Newton, Reflection Grating Spectrometer, RGS). At the inception of our program, a high-performance, back-illuminated CCD with a directly deposited filter had not been demonstrated. Our effort is the first to show that such a filter can be deposited on an X-ray CCD that meets the requirements of a variety of contemplated future instruments. Our principal results are as follows: i) we have demonstrated a process for direct deposition of aluminum optical blocking filters on back-illuminated MIT Lincoln Laboratory CCDs. Filters ranging in thickness from 70 nm to 220 nm exhibit the expected bulk visible-band and X-ray transmission properties, except in a small number (affecting 1% of detector area) of isolated detector pixels ("pinholes"), which show higher-than-expected visible-band transmission; ii) these filters produce no measurable degradation in soft X-ray spectral resolution, demonstrating that direct filter deposition is compatible with the MIT Lincoln Laboratory back-illumination process; iii) we have shown that under sufficiently intense visible and near-IR illumination, out-of-band light can enter the detector through its sidewalls and mounting surfaces, compromising detector performance. This 'sidewall leakage' has been observed, for example, by a previous experiment on the International Space Station during its orbit-day operations. We have developed effective countermeasures for this sidewall leakage; iv) we developed an exceptionally productive collaboration with the Regolith X-ray Imaging Spectrometer (REXIS) team. REXIS is a student instrument now flying on the Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer (OSIRIS-REx) mission. REXIS students participated in our filter development program, adopted our technology for their flight instrument, and raised the TRL of this technology beyond our initial goals. This Strategic Astrophysics Technology (SAT) project, a collaboration between the MKI and MIT Lincoln Laboratory, began July 1, 2012, and ended on June 30, 2018
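
    The filter-transmission statements above can be framed with the standard Beer-Lambert relation T = exp(-t/λ), where t is the film thickness and λ the attenuation length at the wavelength or photon energy of interest. The sketch below only illustrates that relation; the attenuation length it uses is a placeholder, not a tabulated value for aluminum, and should be replaced with real material data.

    ```python
    # Back-of-envelope sketch: Beer-Lambert transmission T = exp(-t / lambda_att)
    # for a directly deposited aluminum filter. The attenuation length used below
    # is a placeholder, NOT a tabulated value; substitute real optical/X-ray data
    # for the wavelength or photon energy of interest.

    import math

    def transmission(thickness_nm: float, attenuation_length_nm: float) -> float:
        """Fraction of incident intensity transmitted through a uniform film."""
        return math.exp(-thickness_nm / attenuation_length_nm)

    if __name__ == "__main__":
        placeholder_lambda_nm = 15.0      # illustrative value only
        for t_nm in (70, 120, 220):       # filter thicknesses in the range studied
            print(t_nm, "nm ->", f"{transmission(t_nm, placeholder_lambda_nm):.2e}")
    ```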

    Doctor of Philosophy

    Dissertation. A fundamental characteristic of wireless communications is their broadcast nature, which allows accessibility of information without placing restrictions on a user's location. However, the ease of accessibility also makes them vulnerable to eavesdropping. This dissertation considers the security issues of spread spectrum systems and, in this context, presents a secure information transmission system composed of two main parts. The first component makes use of the principle of reciprocity in frequency-selective wireless channels to derive a pair of keys for two legitimate parties. The proposed key generation algorithm allows two asynchronous transceivers to derive a pair of similar keys. Moreover, a unique augmentation - called strongest path cancellation (SPC) - is applied to the keys and has been validated through simulation and real-world measurements to significantly boost the security level of the design. In the second part of the secure information transmission system, the concept of artificial noise is introduced to multicarrier spread spectrum systems. The keys generated in the first part of the protocol are used as spreading code sequences for the spread spectrum system. Artificial noise is added to further enhance the security of the communication setup. Two different attacks on the proposed security system are evaluated. First, a passive adversary following the same steps as the legitimate users to detect confidential information is considered. The second attack studies a more sophisticated adversary with significant blind detection capabilities
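
    To make the reciprocity-based key generation idea concrete, the sketch below shows a generic baseline (explicitly not the dissertation's SPC-augmented algorithm): two nodes independently observe the same reciprocal channel with their own measurement noise, quantize each tap against the median, and compare how many key bits agree before any reconciliation. Channel statistics and noise levels are arbitrary illustrative choices.

    ```python
    # Generic illustration (not the dissertation's SPC-augmented scheme): two nodes
    # quantize their own noisy observations of a reciprocal channel into key bits
    # by thresholding each tap's magnitude at the median, then compare agreement.

    import random, statistics

    def observe_channel(true_gains, noise_std, rng):
        """Each node sees the reciprocal channel plus its own measurement noise."""
        return [g + rng.gauss(0.0, noise_std) for g in true_gains]

    def quantize_to_bits(observations):
        """Threshold each observation against the median of the whole snapshot."""
        thr = statistics.median(observations)
        return [1 if x > thr else 0 for x in observations]

    if __name__ == "__main__":
        rng = random.Random(7)
        channel = [rng.gauss(0.0, 1.0) for _ in range(64)]   # reciprocal tap gains
        key_a = quantize_to_bits(observe_channel(channel, 0.1, rng))
        key_b = quantize_to_bits(observe_channel(channel, 0.1, rng))
        agreement = sum(a == b for a, b in zip(key_a, key_b)) / len(key_a)
        print(f"bit agreement before reconciliation: {agreement:.2%}")
    ```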

    Multiphysics Simulation and Model-based System Testing of Automotive E-Powertrains

    Programa Oficial de Doutoramento en Enxeñaría Naval e Industrial. 5015V01. [Abstract] Model-Based System Testing emerges as a new paradigm for the development cycle that is currently gaining momentum, especially in the automotive industry. This novel approach is focused on combining computer simulation and real experimentation to shift the bulk of problem detection and redesign tasks towards the early stages of development. Along these lines, Model-Based System Testing is aimed at decreasing the amount of resources invested in these tasks and enabling the early identification of design flaws and operation problems before a full-vehicle prototype is available. The use of Model-Based System Testing, however, requires the implementation of several critical technologies, three of which are discussed in this thesis. The first task addressed in this thesis is the design of a multiplatform framework to assess the description and resolution of the equations of motion of virtual models used in simulation. This framework enables the evaluation of the efficiency of different modelling and solution methods and implementations. In Model-Based System Testing contexts, virtual models interact with physical components; it is therefore mandatory to guarantee their real-time capabilities, regardless of the software or hardware implementation. Second, estimation techniques based on Kalman filters are of interest in Model-Based System Testing applications to evaluate parameters, inputs, or states of a virtual model of a given system. These procedures can be combined with the use of digital twins, virtual counterparts of real systems, with which they exchange information in two-way communication. The available measurements from the sensors located at a physical system can be fused with the results obtained from the simulation of the virtual model. Thus, this avenue improves the knowledge of the magnitudes that cannot be measured directly by these sensors. In turn, the outcomes obtained from the simulation of the virtual model could serve to make decisions and apply corrective actions to the physical system. Third, co-simulation techniques are necessary when a system is split into several subsystems that are coordinated through the exchange of a reduced set of variables at discrete points in time. This is the case with a majority of Model-Based System Testing applications, in which physical and virtual components are coupled through a discrete-time communication gateway. The resulting cyber-physical applications are essentially an example of real-time co-simulation, in which all the subsystems need to achieve real-time performance. Due to the presence of physical components, which cannot iterate over their integration steps, explicit schemes are often mandatory. These, however, introduce errors associated with the inherent delays of a discrete communication interface. These errors can render co-simulation results inaccurate and even unstable unless they are eliminated. This thesis addresses this correction by means of an energy-based procedure that considers the power exchange between subsystems. This research work concludes with an example of a cyber-physical application, in which real components are interfaced to a virtual environment, requiring the application of all the Model-Based System Testing (MBST) technologies addressed in this thesis.
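
    Among the three technologies discussed above, the Kalman-filter-based estimation is the easiest to show in a few lines. The sketch below is a generic linear Kalman filter, not the thesis's formulation: it fuses noisy position measurements from a notional physical system with a constant-velocity virtual model to recover the unmeasured velocity, in the spirit of the digital-twin data fusion described in the abstract. All matrices and noise levels are illustrative.

    ```python
    # Generic linear Kalman filter sketch (illustrative only): fuse noisy position
    # measurements from a "physical" system with a constant-velocity virtual model
    # to estimate the velocity, which is not measured directly.

    import numpy as np

    dt = 0.01
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])                 # only position is measured
    Q = 1e-4 * np.eye(2)                       # process noise covariance (assumed)
    R = np.array([[1e-2]])                     # measurement noise covariance (assumed)

    x = np.zeros((2, 1))                       # state estimate
    P = np.eye(2)                              # estimate covariance

    rng = np.random.default_rng(0)
    true_velocity = 1.5
    for k in range(500):
        z = true_velocity * k * dt + rng.normal(0.0, 0.1)   # noisy position sensor
        # Predict with the virtual model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Correct with the physical measurement.
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P

    print(f"estimated velocity ~ {x[1, 0]:.2f} (true {true_velocity})")
    ```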

    Artificial general intelligence: Proceedings of the Second Conference on Artificial General Intelligence, AGI 2009, Arlington, Virginia, USA, March 6-9, 2009

    Artificial General Intelligence (AGI) research focuses on the original and ultimate goal of AI – to create broad human-like and transhuman intelligence – by exploring all available paths, including theoretical and experimental computer science, cognitive science, neuroscience, and innovative interdisciplinary methodologies. Due to the difficulty of this task, for the last few decades the majority of AI researchers have focused on what has been called narrow AI – the production of AI systems displaying intelligence regarding specific, highly constrained tasks. In recent years, however, more and more researchers have recognized the necessity – and feasibility – of returning to the original goals of the field. Increasingly, there is a call for a transition back to confronting the more difficult issues of human-level intelligence and, more broadly, artificial general intelligence