
    Single Photon Interferometry and Quantum Astrophysics

    This thesis presents an overview and history of optical interferometers. A new approach to interferometric measurements of stars is proposed and explored. Modern updates to the classic techniques are described, along with theoretical derivations showing why the method of single-photon counting holds significant promise relative to the currently used amplitude interferometry. A modular intensity-interferometer system built from commercially available single-photon detectors is described, and calculations of the sensitivity and uv-plane coverage achievable with these modules mounted on existing telescopes at Kitt Peak, Arizona are presented. Determining fundamental stellar properties is essential for testing models of stellar evolution as well as for deriving the physical properties of transiting exoplanets. The proposed method shows great promise for measuring the angular size of stars: simulations indicate that the diameters of bright stars can be measured to within 5% in a single night of observation.

    Additionally, a custom time-to-digital converter (TDC) is described, designed to time-tag individual photons from multiple single-photon detectors with high count rates, continuous data logging, and low systematics. The instrument uses a tapped-delay-line approach on an FPGA chip, which allows sub-clock resolution of <100 ps. The TDC is implemented on a Reconfigurable Open Architecture Computing Hardware Revision 2 (ROACH2) board, which allows continuous data streaming and time tagging of up to 20 million events per second. The functioning prototype currently operates with up to ten independent channels. Laboratory characterization of the system, including RF pickup and its mitigation, as well as measurements of in-lab photon correlations from an incoherent light source (an artificial star), is presented. Further improvements to the TDC are also discussed, such as increasing the data transfer rate by a factor of 10 via an SFP+ mezzanine card and PCIe 2SFP+ 10 Gb card, as well as scaling to 64 independent channels.

    Furthermore, a modified nulling interferometer with image inversion is proposed for direct imaging of exoplanets below the canonical Rayleigh resolution limit. Image-inversion interferometry relies on splitting the incoming radiation from a source, spatially rotating or reflecting the electric field in one arm of the interferometer, and then recombining the signals and detecting the resulting images at the two output ports with an array of high-speed single-photon detectors. Radiation from sources that are cylindrically symmetric and centered on the rotation axis cancels in one output port and adds in the other. The ability to suppress light from a host star, together with the ability to resolve beyond the Rayleigh limit, enables sensitive detection of exoplanets from a stable environment without the need for a coronagraph. The expected number of photons and the corresponding variance of the measurement for different initial contrast ratios are shown, along with first-order theoretical instrumental errors.

    Lastly, preliminary results from a sizeable photometric survey are presented. The survey is used to derive bolometric fluxes which, combined with the angular size measurements, yield effective stellar temperatures.
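    The abstract leaves the correlation analysis unspecified; below is a minimal sketch of how photon correlations could be estimated offline from two channels of TDC time tags. The function name, the integer-picosecond convention, and the window parameters are assumptions, not the thesis's actual pipeline.

```python
import numpy as np

def coincidence_histogram(tags_a, tags_b, max_lag_ps=5000, bin_ps=100):
    """Histogram arrival-time differences (b - a) within +/-max_lag_ps.

    tags_a, tags_b: sorted 1-D arrays of integer photon time tags in ps.
    Normalizing the counts by the accidental (uncorrelated) level yields
    an estimate of the intensity correlation g2(tau).
    """
    n_bins = 2 * max_lag_ps // bin_ps
    hist = np.zeros(n_bins, dtype=np.int64)
    j_lo = 0
    for ta in tags_a:
        # slide the lower edge of the coincidence window forward
        while j_lo < len(tags_b) and tags_b[j_lo] <= ta - max_lag_ps:
            j_lo += 1
        j = j_lo
        while j < len(tags_b) and tags_b[j] - ta < max_lag_ps:
            hist[int((tags_b[j] - ta + max_lag_ps) // bin_ps)] += 1
            j += 1
    return hist
```

    Because both tag streams are sorted, the window pointer only moves forward, so the cost is linear in the number of tags plus the number of coincidences, which is what makes continuous streaming at tens of millions of events per second tractable.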

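    The port-cancellation argument in the image-inversion proposal can be written compactly. A sketch in assumed notation, with E the input field and the inversion acting as a point reflection r → −r before a balanced recombination:

```latex
E_{\pm}(\mathbf{r}) \;=\; \tfrac{1}{2}\bigl[\,E(\mathbf{r}) \pm E(-\mathbf{r})\,\bigr],
\qquad
I_{\pm} \;=\; \lvert E_{\pm}\rvert^{2}.
```

    For a centered, cylindrically symmetric star, E(−r) = E(r), so I₋ = 0 and the starlight exits entirely through the bright "+" port; the asymmetric field of an off-axis planet leaks into the dark "−" port, which is what enables detection without a coronagraph.
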
    The COMPASS Experiment at CERN

    The COMPASS experiment makes use of the CERN SPS high-intensity muon and hadron beams to investigate the nucleon spin structure and the spectroscopy of hadrons. One or more outgoing particles are detected in coincidence with the incoming muon or hadron. A large polarized target inside a superconducting solenoid is used for the measurements with the muon beam. Outgoing particles are detected by a two-stage spectrometer covering a large angular and momentum range. The setup is built from several types of tracking detectors, chosen according to the expected incident rate, the required spatial resolution, and the solid angle to be covered. Particle identification is achieved using a RICH counter together with hadron and electromagnetic calorimeters. The setup has been operated successfully from 2002 onwards using a muon beam; data with a hadron beam were also collected in 2004. This article describes the main features and performance of the spectrometer in 2004; a short summary of the 2006 upgrade is also given.
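    As a rough illustration of how a RICH counter identifies particles (not COMPASS's actual reconstruction software), the Cherenkov relation cos θ_c = 1/(nβ) converts a measured ring angle and a tracked momentum into a mass hypothesis. The refractive index below assumes a C4F10-like gas radiator; all names are illustrative.

```python
import math

def mass_from_cherenkov(p_gev: float, theta_c_rad: float, n: float = 1.0015) -> float:
    """Mass hypothesis (GeV/c^2) from momentum and Cherenkov ring angle.

    cos(theta_c) = 1/(n*beta)  =>  beta = 1/(n*cos(theta_c))
    m = p * sqrt(1/beta^2 - 1)   (natural units, c = 1)
    """
    beta = 1.0 / (n * math.cos(theta_c_rad))
    if beta >= 1.0:
        raise ValueError("unphysical angle for this refractive index")
    return p_gev * math.sqrt(1.0 / beta**2 - 1.0)
```

    Comparing the returned mass against the pion, kaon, and proton masses then assigns the most likely particle hypothesis for the track.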

    Research & Technology Report Goddard Space Flight Center

    The main theme of this edition of the annual Research and Technology Report is Mission Operations and Data Systems. The shift from centralized to distributed mission operations, and from human-interactive to highly automated operations, is reported. The following aspects are addressed: mission planning and operations; TDRSS, positioning systems, and orbit determination; hardware and software associated with ground systems and networks; data processing and analysis; and the World Wide Web. Flight projects are described along with achievements in the space and earth sciences. Spacecraft subsystems, cryogenic developments, and new tools and capabilities are also discussed.

    A novel parallel algorithm for surface editing and its FPGA implementation

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy.

    Surface modelling and editing is one of the important subjects in computer graphics. Decades of computer graphics research have addressed both low-level, hardware-related algorithms and high-level, abstract software, with successes in many application areas, such as multimedia, visualisation, virtual reality and the Internet. However, the hardware realisation of an OpenGL architecture based on an FPGA (field-programmable gate array) lies beyond the scope of most computer graphics research. It is an uncultivated research area in which the OpenGL pipeline, from the hardware through the whole embedded system (ES) up to the applications, is implemented in an FPGA chip. This research proposes a hybrid approach investigating both software and hardware methods. It aims to bridge the gap between the two and to enhance the overall performance of computer graphics. It consists of four parts: the construction of an FPGA-based ES, a Mesa-based OpenGL implementation for FPGA-based ESs, parallel processing, and a novel algorithm for surface modelling and editing.

    The FPGA-based ES is built up first. In addition to the Nios II soft processor and DDR SDRAM memory, it comprises the LCD display device, frame buffers, a video pipeline, and algorithm-specific modules to support graphics processing. Since no OpenGL implementation is available for FPGA-based ESs, a specific implementation based on Mesa is carried out. Because of the limited FPGA resources, the implementation adopts fixed-point arithmetic, which offers faster computation and lower storage than floating-point arithmetic while retaining accuracy sufficient for 3D rendering. The implementation also includes Bézier-spline curve and surface algorithms to support surface modelling and editing. Pipelined parallelism and co-processors are used to accelerate graphics processing; these two methods extend traditional computational parallelism to fine-grained parallel tasks in FPGA-based ESs.

    The novel algorithm for surface modelling and editing, called the Progressive and Mixing Algorithm (PAMA), is proposed and implemented on the FPGA-based ES. Compared with the two main surface-editing methods, subdivision and deformation, the PAMA eliminates the large storage requirements and computational cost of intermediate processes. With four independent shape parameters, the PAMA can freely model and edit the shape of an open or closed surface while globally maintaining zero-order geometric continuity. The PAMA can be applied not only to FPGA-based ESs but also to other platforms. With its parallel processing, small size, and low computing, storage and power costs, the FPGA-based ES provides an effective hybrid solution to surface modelling and editing.
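    The PAMA itself is only sketched in the abstract, but the fixed-point and Bézier-spline choices can be illustrated together. Below is a minimal sketch of cubic Bézier evaluation in Q16.16 fixed point; the 16.16 split is an assumption chosen to fit 32-bit FPGA arithmetic, not the thesis's documented format.

```python
FRAC_BITS = 16              # Q16.16: 16 integer bits, 16 fractional bits
ONE = 1 << FRAC_BITS

def to_fx(x: float) -> int:
    """Convert a float to Q16.16 fixed point."""
    return int(round(x * ONE))

def fx_mul(a: int, b: int) -> int:
    """Fixed-point multiply: full-width product, then drop extra fraction bits."""
    return (a * b) >> FRAC_BITS

def bezier3_fx(p0: int, p1: int, p2: int, p3: int, t: int) -> int:
    """Evaluate one coordinate of a cubic Bezier at t (all values Q16.16)
    using de Casteljau's recursion, which needs only adds and multiplies."""
    def lerp(a: int, b: int) -> int:
        return a + fx_mul(b - a, t)
    a, b, c = lerp(p0, p1), lerp(p1, p2), lerp(p2, p3)
    d, e = lerp(a, b), lerp(b, c)
    return lerp(d, e)

# Example: the midpoint of the curve through 0, 1, 3, 4 is (0+3+9+4)/8 = 2.0
assert bezier3_fx(to_fx(0), to_fx(1), to_fx(3), to_fx(4), to_fx(0.5)) == to_fx(2.0)
```

    De Casteljau's scheme maps naturally onto a pipelined FPGA datapath: each lerp stage is an add and a shift-multiply, which is one reason fixed-point Bézier evaluation is attractive on resource-limited hardware.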

    Architectures for a space-based information network with shared on-orbit processing

    Thesis (Ph.D.), Massachusetts Institute of Technology, Engineering Systems Division, 2005. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 335-343).

    This dissertation provides a top-level assessment of technology design choices for the architecture of a space-based information network with shared on-orbit processing. Networking is an efficient method of sharing communications resources and lowering the cost of communications, providing better interoperability and data integration for multiple satellites. The current space communications architecture sets a critical limitation on the amount of raw data that can be sent to the ground. By introducing powerful space-borne processing, compression of raw data can alleviate the need for expensive and expansive downlinks; moreover, distribution of processed data directly from space sensors to end users may be more easily realized. A space-based information network backbone can act as the transport network for mission satellites and enables the concept of decoupled, shared, and perhaps distributed space-borne processing for space-based assets. Optical crosslinks are the enabling technology for creating a cost-effective network capable of supporting high data rates. In this dissertation, the space-based network backbone is designed to meet a number of mission requirements by optimizing over constellation topologies under different traffic models. With high network capacity available, space-borne processing can be accessed by any mission satellite attached to the network. Space-borne processing capabilities can be enhanced with commercial processors that are tolerant of radiation and replenished periodically (as frequently as every two years).

    Additionally, innovative uses of a space-based information network can revolutionize satellite communications and space missions. Applications include distributed computing in space, interoperable space communications, multiplatform distributed satellite communications, coherent distributed space sensing, multisensor data fusion, and restoration of disconnected global terrestrial networks after a disaster. Lastly, the consolidation of all the different communications assets into a horizontally integrated space-based network infrastructure calls for a space-based network backbone of a generic nature. A coherent infrastructure can satisfy the goals of interoperability, flexibility, and scalability, and allows the system to be evolutionary. This transformational vision of a generic space-based information network allows for growth to accommodate civilian demands, lowers the price of entry for the commercial sector, and makes way for innovation to enhance and provide additional value to military systems.

    by Serena Chan, Ph.D.
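    The claim that on-board compression relaxes downlink requirements is easy to make concrete with a back-of-envelope sizing; the figures below are illustrative assumptions, not numbers from the dissertation.

```python
def required_downlink_mbps(raw_gbit_per_day: float,
                           compression_ratio: float,
                           contact_minutes_per_day: float) -> float:
    """Downlink rate (Mbit/s) needed to drain one day's sensor data
    within the available ground-contact windows."""
    compressed_gbit = raw_gbit_per_day / compression_ratio
    contact_seconds = contact_minutes_per_day * 60.0
    return compressed_gbit * 1_000.0 / contact_seconds

# A sensor producing 1 Tbit/day with 10x on-board compression and
# 40 minutes of daily ground contact needs ~42 Mbit/s instead of ~417 Mbit/s.
print(required_downlink_mbps(1000.0, 10.0, 40.0))   # ~41.7
print(required_downlink_mbps(1000.0, 1.0, 40.0))    # ~416.7
```

    The same arithmetic motivates crosslinking: routing data through a network backbone with persistent ground visibility effectively increases the contact time in the denominator.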

    Technology 2003: The Fourth National Technology Transfer Conference and Exposition, volume 2

    Proceedings from symposia of the Technology 2003 Conference and Exposition, Dec. 7-9, 1993, Anaheim, CA, are presented. Volume 2 features papers on artificial intelligence, CAD&E, computer hardware, computer software, information management, photonics, robotics, test and measurement, video and imaging, and virtual reality/simulation.

    Hardware/software architectures for iris biometrics

    Nowadays, the need to identify the users of facilities and services has become quite important, not only to determine who accesses a system or service but also to determine which privileges should be granted to each user. For achieving such identification, biometrics is emerging as a technology that provides a high level of security while being convenient and comfortable for the citizen. Most biometric systems are based on computer solutions in which the identification process is performed by servers or workstations, whose cost and processing time make them infeasible in some situations. However, microelectronics can provide a suitable solution without the need for complex and expensive computer systems. Microelectronics is a subfield of electronics and, as the name suggests, is concerned with the study, development and/or manufacture of electronic components, i.e. integrated circuits (ICs). We have focused our research on a particular field of microelectronics: hardware/software co-design. This technique is widely used for developing specialized, computationally demanding devices. It relies on using hardware and software together in an effective way, thereby obtaining devices faster than pure software solutions, or smaller than devices that use dedicated hardware for all processes. The question of how to obtain an effective solution for biometrics is addressed here by considering all the different aspects of these systems. In this thesis, we have made two important contributions, both related to iris biometrics: the first is a verification system based on ID tokens, and the second a search engine for massive recognition systems.

    The first contribution is a proposal for a biometric system architecture based on ID tokens in a distributed system. We specify the considerations to be made in such a system and describe the functionality of the elements that form it, such as the central servers and the terminals. The terminal's role is limited to acquiring the initial raw biometric data, which is transmitted under cryptographic protection to the token, where the entire biometric process is performed. The ID token architecture is based on hardware/software co-design. The proposed architecture, independent of the biometric modality, divides the biometric process between hardware and software in order to achieve capabilities beyond those of existing tokens. This partition considers not only the reduction in computation time that hardware can provide, but also the reduction of area and power consumption, the increase in security levels, and the effects on recognition performance across the whole design. To validate the proposal, we have implemented an ID token based on iris biometrics following these premises. We have developed the modules of an iris algorithm on both hardware and software platforms to obtain the results needed for an effective combination of the two. We have also studied different alternatives for solving the partitioning problem in hardware/software co-design, with results pointing to tabu search as the fastest algorithm for this purpose. Finally, with all the data obtained, we have derived different architectures according to different constraints. Where time is the major requirement, our architecture achieves 30% less processing time than an all-software solution.
    Likewise, another solution has been proposed that requires less area and power. When recognition performance is the most important constraint, two architectures are presented: one that also tries to minimize processing time, and another that reduces hardware area and power consumption. Regarding security, we likewise present two architectures, treating time and hardware area as secondary requirements. Finally, we present an architecture in which all of these factors are balanced. These architectures have allowed us to study how hardware improves security against authentication attacks, how recognition performance is affected by the lack of floating-point operations in hardware modules, and how hardware reduces processing time while software reduces hardware area and power consumption.

    The other singular contribution is the development of a search engine for massive identification schemes, where time is the major constraint because the comparison must be performed against millions of users. We initially propose two implementations: a centralized architecture, in which the memories are connected to the microprocessor while the comparison itself is performed by a dedicated hardware co-processor, and a second approach, in which the memory driver is connected directly to the hardware co-processor. The latter architecture demonstrates the importance of correct connections between the elements when time is the major requirement.

    A graphical representation of the different aspects covered in this thesis is presented in Fig. 1, which shows the relations between the topics studied. The two main topics, biometrics and hardware/software co-design, are examined in several aspects, such as the different biometric modalities, with a focus on iris biometrics, and the security of these systems. Hardware/software co-design is studied by presenting different design alternatives and by identifying the most suitable configuration for ID tokens. The data obtained from this analysis has allowed us to offer two main proposals: the first focuses on the development of a fast search-engine device, and the second combines all the factors related to both fields with regard to ID tokens, where the different aspects are combined in the hardware/software design. Both approaches have been implemented to show the feasibility of the proposal. Finally, further work and conclusions are presented as a consequence of the investigation performed.

    -----------------------------------------------------------------------------------------

    Nowadays, identifying the users of facilities or services is gaining importance, not only to control access but also to assign the appropriate privileges to each user. Biometrics is an emerging technology that, besides performing these identification functions, provides higher levels of security than other methods, and is more convenient for the user. Most biometric systems are based on personal computers or servers; however, microelectronics can provide suitable solutions for these systems at lower cost and complexity.
    Microelectronics is a field of electronics that, as its name suggests, is based on the study, development and/or manufacture of electronic components, also called integrated circuits. We have focused our research on a specific field of microelectronics called hardware/software co-design. This technique is used in the development of specialized devices that require high computational effort. It is based on dividing the tasks to be performed between hardware and software, yielding devices that are faster than those based on only one of the two platforms and smaller than those based purely on hardware. The questions of how we can create solutions applicable to biometrics are those this thesis attempts to answer. We have made two important contributions: one for verification systems that rely on identification devices, and a second proposing the development of a massive search system.

    The first contribution is a methodology for the development of a distributed system based on identification devices. In our proposal, the identification system consists of a central service provider, terminals, and the identification devices themselves. The proposed terminals only acquire the sample needed for identification, since the devices themselves carry out the identification process. The devices rely on an architecture based on hardware/software co-design, in which each biometric process is performed on one of the two platforms, regardless of the biometric modality. The division of tasks is arranged so that the designer can choose which parameters to emphasize, and thus different architectures can be obtained depending on whether one wishes to optimize processing time, area or power consumption, minimize identification errors, or even increase system security by implementing in hardware those modules most susceptible to attack by intruders. To demonstrate this proposal, we have implemented one of these devices based on an iris recognition algorithm. We have developed all the modules of this algorithm in both hardware and software, and subsequently combined them in search of architectures satisfying given requirements. We have likewise studied different alternatives for solving the proposed partitioning problem, based on genetic algorithms, simulated annealing and tabu search. With the data obtained from this study and from the implemented modules, we have obtained an architecture that reduces execution time by 30%; another that reduces the area and power consumption of the device; two architectures that avoid loss of precision and thus minimize identification errors, one seeking to reduce area as much as possible and the other to minimize processing time; two architectures that aim to increase security while minimizing either time or area; and, finally, an architecture in which all of the above factors are weighted equally. The second contribution of the thesis concerns the development of a search engine for massive identification. The premise of this proposal is to minimize time as far as possible so that users do not have to wait long to be identified.
    To this end, we have proposed two alternatives: a classical architecture in which the memories are connected to a central microprocessor, which in turn communicates with a co-processor performing the comparison functions, and a second alternative in which the memories are connected directly to that co-processor, avoiding the use of the microprocessor in the comparison process. Both proposals are compared and analysed, showing the importance of a correct and appropriate connection of the different elements that form a system. Fig. 2 shows the different topics addressed in this thesis and the relations between them. The main topics studied are biometrics and hardware/software co-design, of which several aspects are described, such as the different biometric modalities, focusing on iris biometrics, and the security of these systems. For hardware/software co-design, a survey of the state of the art is presented, discussing various alternatives for the development of embedded systems, the work of other authors in the field of co-design and, finally, the characteristics that identification devices must fulfil as embedded systems. With all this information, we proceed to develop the proposals described above. Finally, conclusions and future work are presented based on the research carried out.
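    The abstract does not spell out the comparison the co-processor performs. A common choice for iris codes, and a plausible sketch of the operation such a hardware comparator would parallelize with XOR and population-count logic, is the masked fractional Hamming distance; the names and array layout below are assumptions.

```python
import numpy as np

def masked_hamming_distance(code_a: np.ndarray, code_b: np.ndarray,
                            mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Fractional Hamming distance between two binary iris codes,
    counting only bits marked valid (e.g., not occluded by eyelids)
    in both masks. Lower means more similar; ~0.5 means unrelated."""
    valid = mask_a & mask_b
    n_valid = int(np.count_nonzero(valid))
    if n_valid == 0:
        return 1.0  # no usable bits: report maximal distance
    disagreements = int(np.count_nonzero((code_a ^ code_b) & valid))
    return disagreements / n_valid
```

    In a massive-identification setting this comparison runs once per enrolled user, which is why connecting the template memory directly to the comparator, as in the second architecture above, dominates the total search time.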