
    Digitizing signals - a short tutorial guide

    Converting the analogue signal, as captured from a patient, into digital format is known as digitizing, or analogue-to-digital conversion. This is a vital first step in digital signal processing. The acquisition of high-quality data requires appropriate choices of system and parameters (sampling rate, anti-alias filter, amplification, number of ‘bits’). This tutorial aims to provide a practical guide to making these choices, and explains the underlying principles (rather than the mathematical theory and proofs) and potential pitfalls. Illustrative examples from different physiological signals are provided.
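The choice of bit depth mentioned in the abstract can be made concrete with a minimal numerical sketch (not taken from the tutorial itself): quantising a full-scale test tone at several resolutions and measuring the resulting signal-to-noise ratio, which follows the well-known ~6 dB-per-bit rule of thumb. The sampling rate and tone frequency below are illustrative values.

```python
import numpy as np

# Quantise a unit-amplitude sine with B bits and estimate the SNR.
# For a full-scale sine, theory predicts SNR ≈ 6.02*B + 1.76 dB,
# which guides the choice of converter resolution.
def quantize(x, n_bits, full_scale=1.0):
    step = 2 * full_scale / (2 ** n_bits)   # quantisation step size
    return np.round(x / step) * step

fs = 1000.0                                  # sampling rate, Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)               # 10 Hz full-scale test tone

for bits in (8, 12, 16):
    xq = quantize(x, bits)
    noise = xq - x
    snr_db = 10 * np.log10(np.mean(x**2) / np.mean(noise**2))
    print(f"{bits:2d} bits: SNR ≈ {snr_db:.1f} dB")
```

Each additional bit lowers the quantisation noise floor by about 6 dB, which is why the number of ‘bits’ belongs alongside sampling rate and anti-alias filtering in the list of acquisition parameters.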

    Designer cell signal processing circuits for biotechnology

    Microorganisms are able to respond effectively to diverse signals from their environment and internal metabolism owing to their inherent sophisticated information-processing capacity. A central aim of synthetic biology is to control and reprogramme the signal processing pathways within living cells so as to realise repurposed, beneficial applications ranging from disease diagnosis and environmental sensing to chemical bioproduction. To date, most examples of synthetic biological signal processing have been built on digital information flow, though analogue computing is being developed to cope with more complex operations and larger sets of variables. Great progress has been made in expanding the categories of characterised biological components that can be used for cellular signal manipulation, thereby allowing synthetic biologists to programme increasingly complex behaviours into living cells more rationally. Here we present a current overview of the components and strategies that exist for designer cell signal processing and decision making, discuss how these have been implemented in prototype systems for therapeutic, environmental, and industrial biotechnological applications, and examine emerging challenges in this promising field.
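The "digital information flow" the abstract refers to can be illustrated with a toy steady-state model (not any published circuit): a transcriptional AND gate whose output promoter fires only when both input transcription factors are present, with Hill functions supplying the switch-like response. All parameter values below are illustrative, not fitted to any real system.

```python
# Toy model of a two-input transcriptional AND gate. Each input's
# activation follows a Hill function; a high Hill coefficient makes
# the response switch-like ("digital"). Parameters are illustrative.
def hill_act(x, K=1.0, n=4):
    # Fraction of promoter activation at input concentration x.
    return x**n / (K**n + x**n)

def and_gate(a, b):
    # Output is proportional to the product of both activations,
    # so it is high only when A AND B are both above threshold.
    return hill_act(a) * hill_act(b)

for a in (0.1, 10.0):
    for b in (0.1, 10.0):
        out = and_gate(a, b)
        print(f"A={a:5.1f}  B={b:5.1f}  ->  {out:.4f}  ({'ON' if out > 0.5 else 'OFF'})")
```

Only the (high, high) input combination drives the output above threshold, mirroring the truth table of an electronic AND gate; the analogue computing mentioned in the abstract would instead exploit the graded region of these same response curves.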

    Signal processing for a laser-Doppler blood perfusion meter

    Two signal processing methods for laser-Doppler perfusion velocimetry are presented. Both are based on the calculation of moments of the frequency power spectrum. The first uses √ω-filtering (ω is the frequency) with analogue electronics; the second uses signal autocorrelation with digital electronics. A comparison is made with a third instrument: a spectrum analyzer coupled to a computer, using Fourier transform techniques. The performance of these setups (sensitivity, sensitivity limit, and accuracy) is investigated. We propose a calibration standard for signal processors to be used for blood perfusion measurements. The analogue instrument proved to be the cheapest, but the digital instrument had the best performance.
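The spectral moments that both processors in the paper are built around can be sketched directly (a minimal illustration, not the paper's implementation): the zeroth moment M0 of the photocurrent power spectrum tracks the concentration of moving scatterers, and the first moment M1 tracks concentration times velocity, i.e. perfusion. The synthetic signal, sampling rate, and analysis band below are all assumptions for the sake of the example.

```python
import numpy as np

# Estimate the zeroth and first moments of a power spectrum.
# Band-limited white noise stands in for a Doppler photocurrent.
rng = np.random.default_rng(0)
fs = 40_000.0                         # sampling rate, Hz (illustrative)
n = 1 << 14
sig = rng.normal(size=n)              # stand-in photocurrent signal

spec = np.abs(np.fft.rfft(sig))**2 / n        # one-sided power spectrum
freqs = np.fft.rfftfreq(n, 1 / fs)

band = (freqs >= 20) & (freqs <= 15_000)      # analysis band, Hz (illustrative)
m0 = np.sum(spec[band])                        # zeroth moment: scatterer concentration
m1 = np.sum(freqs[band] * spec[band])          # first moment: ∝ perfusion
mean_freq = m1 / m0                            # mean Doppler shift
print(f"M0 = {m0:.1f}, M1 = {m1:.1f}, mean frequency ≈ {mean_freq:.0f} Hz")
```

The analogue √ω-filter and the digital autocorrelator described in the abstract are two hardware routes to the same M1 estimate computed here by brute-force FFT, which is essentially what the third (spectrum-analyzer) instrument does.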

    Digital signal processing: the impact of convergence on education, society and design flow

    The design and development of real-time, memory- and processor-hungry digital signal processing systems was for decades accomplished on general-purpose microprocessors. Increasing demands for high-performance DSP systems made these microprocessors unattractive for such implementations. Various attempts to improve performance resulted in the use of dedicated digital signal processing devices such as DSP processors and the former heavyweight champion of electronics design, the Application-Specific Integrated Circuit. The advent of RAM-based Field Programmable Gate Arrays has changed the DSP design flow. Software algorithmic designers can now take their DSP algorithms from inception right through to hardware implementation, thanks to the increasing availability of software/hardware design flows, or hardware/software co-design. This has led to industry demand for graduates with good skills in both Electrical Engineering and Computer Science. This paper evaluates the impact of technology on DSP-based designs and hardware design languages, and how graduate/undergraduate courses have changed to suit this transition.

    MPEG-1 bitstreams processing for audio content analysis

    In this paper, we present the MPEG-1 Audio bitstream processing work in which our research group is involved. This work is primarily based on processing the encoded bitstream and extracting useful audio features for the purposes of analysis and browsing. To prepare for the discussion of these features, the MPEG-1 audio bitstream format is first described. The Application Programming Interface (API) which we have been developing in C++ is then introduced, before the paper concludes with a discussion of audio feature extraction.
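Any compressed-domain analyser of the kind described must begin by reading the 4-byte MPEG-1 audio frame header. The sketch below (in Python rather than the paper's C++, and not the authors' API) parses the main header fields following the ISO/IEC 11172-3 layout; the example header bytes are constructed by hand for illustration.

```python
# Minimal MPEG-1 audio frame header parser (ISO/IEC 11172-3 layout).
MPEG1_SAMPLE_RATES = {0: 44100, 1: 48000, 2: 32000}      # Hz, MPEG-1 indices
LAYERS = {0b11: "Layer I", 0b10: "Layer II", 0b01: "Layer III"}

def parse_header(data: bytes) -> dict:
    h = int.from_bytes(data[:4], "big")
    if (h >> 21) & 0x7FF != 0x7FF:                # 11-bit frame sync, all ones
        raise ValueError("no frame sync")
    version = (h >> 19) & 0b11                    # 0b11 -> MPEG-1
    layer = (h >> 17) & 0b11
    bitrate_index = (h >> 12) & 0b1111            # index into per-layer table
    sr_index = (h >> 10) & 0b11
    return {
        "mpeg1": version == 0b11,
        "layer": LAYERS.get(layer, "reserved"),
        "bitrate_index": bitrate_index,
        "sample_rate": MPEG1_SAMPLE_RATES.get(sr_index),
    }

# 0xFF 0xFB: sync + MPEG-1 + Layer III; 0x90: bitrate index 9, 44.1 kHz.
hdr = parse_header(bytes([0xFF, 0xFB, 0x90, 0x00]))
print(hdr)
```

Once the header is decoded, a compressed-domain tool can locate the scale factors and subband samples within the frame and derive features from them without a full decode, which is the efficiency argument behind working on the encoded bitstream.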

    Mapping our Universe in 3D with MITEoR

    Mapping our universe in 3D by imaging the redshifted 21 cm line from neutral hydrogen has the potential to overtake the cosmic microwave background as our most powerful cosmological probe, because it can map a much larger volume of our Universe, shedding new light on the epoch of reionization, inflation, dark matter, dark energy, and neutrino masses. We report on MITEoR, a pathfinder low-frequency radio interferometer whose goal is to test technologies that greatly reduce the cost of such 3D mapping for a given sensitivity. MITEoR accomplishes this by using massive baseline redundancy both to enable automated precision calibration and to cut the correlator cost scaling from N^2 to N log N, where N is the number of antennas. The success of MITEoR with its 64 dual-polarization elements bodes well for the more ambitious HERA project, which would incorporate many identical or similar technologies using an order of magnitude more antennas, each with dramatically larger collecting area. Comment: To be published in the proceedings of the 2013 IEEE International Symposium on Phased Array Systems & Technology.
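The payoff from baseline redundancy can be seen with a quick count (an illustration of the scaling argument, not MITEoR's actual layout): in a regular antenna grid, the N(N-1)/2 antenna pairs share only O(N) distinct baseline vectors, so many pairs measure the same sky quantity, enabling redundancy-based calibration and cheaper correlation. An 8×8 grid stands in here for MITEoR's 64 elements.

```python
from itertools import combinations

# Count antenna pairs vs unique baseline vectors in an 8x8 grid.
side = 8
antennas = [(x, y) for x in range(side) for y in range(side)]

baselines = set()
n_pairs = 0
for a, b in combinations(antennas, 2):
    n_pairs += 1
    # combinations() yields a < b lexicographically, so each baseline
    # vector (dx, dy) is already in a canonical orientation.
    baselines.add((b[0] - a[0], b[1] - a[1]))

print(f"{n_pairs} antenna pairs, but only {len(baselines)} unique baselines")
```

For 64 antennas there are 2016 pairs but far fewer unique baselines; each unique baseline is measured many times over, which is the redundancy that both drives the automated calibration and lets an FFT-style correlator avoid forming every pair explicitly, cutting the cost from N^2 toward N log N.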