4 research outputs found

    Design and implementation of a multichannel analyzer for nuclear spectrometry with Zynq using Vivado

    The many applications of ionizing radiation make it a very significant and useful tool, yet it can also be dangerous for living beings exposed to uncontrolled doses. Because of its nature it cannot be perceived by the five human senses, so detecting its presence requires radiation detectors and additional devices that allow it to be quantified and classified. This is the role of the multichannel analyzer, which separates the different pulse heights generated in the detectors into a given number of channels, determined by the number of bits of the analog-to-digital converter. The development and adaptation of nuclear technology has grown considerably with the demand from applications, which makes it possible to develop systems tailored to the needs of the user, with reduced device cost and volume. The objective of this work was to design and implement an IP core that functions as a multichannel analyzer for nuclear spectrometry. The components of the IP core were written in the hardware description language VHDL and packaged in the Vivado design suite, making use of resources such as the ARM processing cores contained in the Zynq chip. In the first phase of the implementation, the hardware architecture was embedded in the FPGA and the application for the ARM processor was programmed in C. In the second phase, a virtual instrument was developed in the LabVIEW graphical programming platform for managing, controlling, and visualizing the results. The data obtained from the development and implementation of the IP core were displayed graphically in a histogram that forms part of this virtual instrument. In addition, the results obtained with the multichannel analyzer embedded in the FPGA closely match those of commercial multichannel analyzers.
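    To make the channel-binning idea concrete, the sketch below shows in plain Python, rather than the VHDL IP core described in the abstract, how digitized pulse heights are accumulated into 2^N channels for an N-bit ADC. The 12-bit ADC and the sample pulse values are assumptions for illustration only.

```python
# Minimal sketch of the pulse-height histogramming a multichannel analyzer performs.
# This is an illustration in Python, not the VHDL IP core described above;
# adc_bits and the sample values are assumed for the example.

def build_histogram(pulse_heights, adc_bits=12):
    """Accumulate digitized detector pulse heights into 2**adc_bits channels."""
    n_channels = 2 ** adc_bits          # e.g. 4096 channels for a 12-bit ADC
    histogram = [0] * n_channels
    for height in pulse_heights:
        # Clamp to the ADC range, then use the digitized height as the channel index.
        channel = min(max(int(height), 0), n_channels - 1)
        histogram[channel] += 1
    return histogram

# Example: three pulses falling in channels 100, 100 and 2047.
counts = build_histogram([100, 100.4, 2047], adc_bits=12)
print(counts[100], counts[2047])   # -> 2 1
```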

    Hardware evolution of a digital circuit using a custom VLSI architecture

    This research investigates three solutions to overcoming portability and scalability concerns in the Evolutionary Hardware (EHW) field. Firstly, the study explores whether the V-FPGA (a new, portable virtual-reconfigurable-circuit architecture) is a practical and viable evolution platform. Secondly, the research looks into two possible ways of making EHW systems more scalable: by optimising the system's genetic algorithm (GA), and by decomposing the solution circuit into smaller, evolvable sub-circuits or modules. GA optimisation is done by omitting a canonical GA's crossover operator (i.e. by using a mutation-only algorithm), applying evolution constraints, and optimising the fitness function. The circuit decomposition is done in order to demonstrate modular evolution. Three two-bit multiplier circuits and two sub-circuits of a simple but real-world control circuit are evolved. The results show that the evolved multiplier circuits, when compared to a conventional multiplier, are equally or more efficient. All the evolved circuits improve two of the four critical paths, and all are unique. It is thus experimentally shown that the V-FPGA is a viable hardware platform on which hardware evolution can be implemented, and that hardware evolution is able to synthesise novel, optimised versions of conventional circuits. By comparing the optimised and canonical GAs, the results verify that optimised GAs can find solutions quicker and with fewer attempts. Part of the optimisation also includes a comprehensive critical-path analysis, where the findings show that the identification of dependent critical paths is vital in enhancing a GA's efficiency. Finally, by demonstrating the modular evolution of a finite-state machine's control circuit, it is found that although the control circuit as a whole uses more than double the available hardware resources on the V-FPGA and is therefore not evolvable, the evolution of each state's sub-circuit is possible. Modular evolution is thus shown to be a successful tool when dealing with scalability.
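    As a rough illustration of the kind of GA optimisation described above (crossover omitted, mutation only), the following Python sketch evolves a bit-string genome with elitism. The genome length, toy fitness function, and parameters are invented for the example; they do not reflect the V-FPGA's actual configuration encoding or the thesis's constrained fitness function.

```python
import random

# A minimal mutation-only GA loop (crossover omitted), sketching the style of
# optimisation described in the abstract. Genome encoding, fitness function
# and parameters here are illustrative assumptions.

GENOME_LEN = 64            # assumed length of a circuit-configuration bit string
TARGET = [1] * GENOME_LEN  # stand-in target used only by the toy fitness function

def fitness(genome):
    # Toy fitness: number of bits matching the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.02):
    # Flip each bit independently with a small probability (no crossover).
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=20, generations=200):
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        best = population[0]
        if fitness(best) == GENOME_LEN:
            break
        # Keep the best individual and refill the population with mutated copies.
        population = [best] + [mutate(best) for _ in range(pop_size - 1)]
    return population[0]

print(fitness(evolve()))
```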

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s life according to a general MR streaming pattern. We chose life because it is simple enough as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
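    For a flavour of what an MR streaming algorithm for life with strip partitioning can look like, here is a condensed Python sketch of a mapper/reducer pair for a single generation. The row-per-line grid encoding, the strip height, and the key scheme are assumptions for illustration and are not taken from the paper's implementation.

```python
# Condensed sketch of a MapReduce streaming mapper/reducer pair computing one
# generation of Conway's life with strip partitioning. The "row<TAB>cells" line
# format, the strip height and the key scheme are illustrative assumptions,
# not the algorithms evaluated in the paper.

STRIP_HEIGHT = 100  # assumed number of grid rows owned by each reducer

def mapper(lines):
    """Route each grid row to its own strip and, if it borders a neighbouring
    strip, also emit it there as a read-only ghost row."""
    for line in lines:
        row, cells = line.rstrip("\n").split("\t")
        row = int(row)
        strip = row // STRIP_HEIGHT
        yield f"{strip}\t{row}:{cells}"
        if row % STRIP_HEIGHT == 0:                  # top row: ghost for the strip above
            yield f"{strip - 1}\t{row}:{cells}"
        if row % STRIP_HEIGHT == STRIP_HEIGHT - 1:   # bottom row: ghost for the strip below
            yield f"{strip + 1}\t{row}:{cells}"

def reducer(strip, values):
    """Apply life's rules to the rows owned by one strip, using ghost rows
    from neighbouring strips for the boundary neighbourhoods."""
    rows = {}
    for value in values:
        row, cells = value.split(":")
        rows[int(row)] = cells
    lo, hi = strip * STRIP_HEIGHT, (strip + 1) * STRIP_HEIGHT
    for r in range(lo, hi):
        if r not in rows:
            continue
        width = len(rows[r])
        new_cells = []
        for c in range(width):
            live = 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == dc == 0:
                        continue
                    nr, nc = r + dr, c + dc
                    if nr in rows and 0 <= nc < width and rows[nr][nc] == "1":
                        live += 1
            alive = rows[r][c] == "1"
            new_cells.append("1" if live == 3 or (alive and live == 2) else "0")
        yield f"{r}\t{''.join(new_cells)}"
```

    In a Hadoop/EMR streaming job the two functions would live in separate scripts reading from stdin and writing to stdout, with the framework grouping mapper output by the strip key before the reduce phase.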