
    An Optofluidic Lens Biochip and an x-ray Readable Blood Pressure Microsensor: Versatile Tools for in vitro and in vivo Diagnostics.

    Three microfabricated devices were presented for use in in vivo and in vitro diagnostic biomedical applications: an optofluidic-lens biochip, a handheld digital imaging system, and an x-ray readable blood pressure sensor for monitoring restenosis. An optofluidic biochip, termed the 'Microfluidic-based Oil-Immersion Lens' (mOIL) biochip, was designed, fabricated, and tested for high-resolution imaging of various biological samples. The biochip consists of an array of high refractive index (n = 1.77) sapphire ball lenses sitting on top of an oil-filled microfluidic network of microchambers. The combination of the high optical quality lenses with the immersion oil results in a numerical aperture (NA) of 1.2, which is comparable to the high NA of oil-immersion microscope objectives. The biochip can be used as an add-on module to a stereoscope to improve the resolution from 10 microns down to 0.7 microns. It also has a scalable field of view (FOV), as the total FOV increases linearly with the number of lenses in the biochip (each lens has a FOV of ~200 microns). By combining the mOIL biochip with a CMOS sensor and an LED light source in a 3D-printed housing, a compact (40 grams, 4 cm x 4 cm x 4 cm), high-resolution (~0.4 microns) handheld imaging system was developed. The applicability of this system was demonstrated by counting red and white blood cells and imaging fluorescently labelled cells. In blood smear samples, blood cells, sickle cells, and malaria-infected cells were easily identified. To monitor restenosis, an x-ray readable implantable blood pressure sensor was developed. The sensor is based on an x-ray absorbing liquid contained in a microchamber. The microchamber has a flexible membrane that is exposed to blood pressure. When the membrane deflects, the liquid moves into the microfluidic gauge. The length of the microfluidic gauge can be measured, and consequently the pressure exerted on the diaphragm can be calculated.
The prototype sensor has dimensions of 1 x 0.6 x 10 mm and adequate resolution (19 mmHg) to detect restenosis in coronary artery stents from a standard chest x-ray. Further improvements of this prototype will open up the possibility of measuring the pressure drop across a coronary artery stent non-invasively.
PhD, Macromolecular Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/111384/1/toning_1.pd
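The readout principle above (membrane deflection displaces x-ray-absorbing liquid into a gauge channel whose filled length is read from the radiograph) can be sketched as a simple calibration calculation. The channel cross-section and membrane compliance values below are illustrative assumptions, not the authors' actual device parameters:

```python
# Hedged sketch: infer pressure from the filled length of a microfluidic
# gauge channel. Geometry and compliance values are assumed for illustration.

def pressure_from_gauge_mmHg(gauge_length_mm,
                             channel_area_mm2=0.01,          # assumed cross-section
                             compliance_mm3_per_mmHg=0.002):  # assumed compliance
    """Displaced volume = channel area x filled length; pressure follows from
    dividing by the (assumed linear) membrane compliance."""
    displaced_volume_mm3 = channel_area_mm2 * gauge_length_mm
    return displaced_volume_mm3 / compliance_mm3_per_mmHg

# A 2 mm advance of the liquid column maps to 10 mmHg under these assumptions.
print(round(pressure_from_gauge_mmHg(2.0), 2))  # → 10.0
```

In practice the compliance would be obtained by calibrating the fabricated sensor against a reference pressure, rather than assumed linear as here.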

    Real-Time 3-D Environment Capture Systems


    A cost-effective, mobile platform-based, photogrammetric approach for continuous structural deformation monitoring

    PhD Thesis. With the evolution of construction techniques and materials technology, the design of modern civil engineering infrastructure has become increasingly advanced and complex. In parallel, the development and application of appropriate and efficient monitoring technologies has become essential. Improving the performance of structural monitoring systems and reducing labour and total implementation costs have therefore become important issues that scientists and engineers are committed to solving. In this research, a non-intrusive structural monitoring system was developed based on close-range photogrammetric principles. This research aimed to combine the merits of photogrammetry and the latest mobile phone technology to propose a cost-effective, compact (portable), and precise solution for structural monitoring applications. By combining low-cost imaging devices (two or more mobile phone handsets) with in-house control software, a monitoring project can be undertaken on a relatively low budget compared to conventional methods. The system uses programmable smartphones (Google Android v.2.2 OS) to replace conventional in-situ photogrammetric imaging stations. The developed software suite is able to control multiple handsets to continuously capture high-quality, synchronized image sequences for short- or long-term structural monitoring purposes. The operations are fully automatic and the system can be remotely controlled, freeing the operator from attending the site and thus saving considerable labour expense in long-term monitoring tasks. To prevent the system from crashing during a long-term monitoring scheme, an automatic system state monitoring program and a system recovery module were developed to enhance stability.
Considering that the image resolution of current mobile phone cameras is relatively low (in comparison to contemporary digital SLR cameras), a target detection algorithm was developed for the mobile platform that, when combined with dedicated target patterns, was found to improve the quality of photogrammetric target measurement. Comparing the photogrammetric results against reference measurements made with a Zeiss P3 analytical plotter, the accuracy achieved was 1/67,000. The feasibility of the system was proven through an indoor simulation test and an outdoor experiment. For actual structural monitoring applications, the optimal relative accuracy of distance measurement was determined to be approximately 1/28,000 under laboratory conditions, and the outdoor experiment returned a relative accuracy of approximately 1/16,400.
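The relative accuracies quoted above (1/28,000 in the laboratory, ~1/16,400 outdoors) translate into absolute measurement precision once the working distance is fixed. A minimal sketch of that conversion, using an assumed 50 m monitoring distance that is not taken from the thesis:

```python
# Convert a photogrammetric relative accuracy (e.g. 1/16,400) into
# absolute measurement precision at a given working distance.

def absolute_precision_mm(distance_m, relative_accuracy_denominator):
    """Precision (mm) = working distance / relative-accuracy denominator."""
    return distance_m * 1000.0 / relative_accuracy_denominator

# At an assumed 50 m range, the outdoor figure (~1/16,400) implies roughly
# 3 mm precision, while the laboratory figure (1/28,000) implies under 2 mm.
print(round(absolute_precision_mm(50, 16400), 2))  # → 3.05
print(round(absolute_precision_mm(50, 28000), 2))  # → 1.79
```

This is why relative accuracy is the natural figure of merit for such systems: the achievable displacement resolution scales linearly with how far the cameras stand from the structure.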

    Machine Vision: Approaches and Limitations


    Applications of Silicon Retinas: from Neuroscience to Computer Vision

    Traditional visual sensor technology is firmly rooted in the concept of sequences of image frames. The sequence of stroboscopic images in these "frame cameras" is very different from the information running from the retina to the visual cortex. While conventional cameras have improved in the direction of smaller pixels and higher frame rates, the basics of image acquisition have remained the same. Event-based vision sensors were originally known as "silicon retinas" but are now widely called "event cameras." They are a new type of vision sensor that takes inspiration from the mechanisms developed by nature for the mammalian retina and suggests a different way of perceiving the world. As in the neural system, the sensed information is encoded in a train of spikes, or so-called events, comparable to the action potentials generated in the nerve. Event-based sensors produce sparse and asynchronous output that represents informative changes in the scene. These sensors have advantages in terms of fast response, low latency, high dynamic range, and sparse output. All these characteristics are appealing for computer vision and robotic applications, increasing the interest in this kind of sensor. However, since the sensor's output is very different, algorithms designed for frames need to be rethought and re-adapted. This thesis focuses on several applications of event cameras in scientific scenarios. It aims to identify where they can make a difference compared to frame cameras. The presented applications use the Dynamic Vision Sensor (the event camera developed by the Sensors Group of the Institute of Neuroinformatics, University of Zurich and ETH). To explore some applications in more extreme situations, the first chapters of the thesis focus on the characterization of several advanced versions of the standard DVS. Low light conditions represent a challenging situation for every vision sensor.
Taking inspiration from standard Complementary Metal Oxide Semiconductor (CMOS) technology, DVS pixel performance in low light scenarios can be improved, increasing sensitivity and quantum efficiency, by using back-side illumination. This thesis characterizes the so-called Back Side Illumination DAVIS (BSI DAVIS) camera and shows results from its application in calcium imaging of neural activity. The BSI DAVIS has shown better performance in low light scenes due to its high Quantum Efficiency (QE) of 93% and proved to be the best-suited technology for microscopy applications. The BSI DAVIS allows detecting fast dynamic changes in neural fluorescence imaging using the green fluorescent calcium indicator GCaMP6f. Event camera advances have pushed the exploration of event-based cameras in computer vision tasks. Chapters of this thesis focus on two of the most active research areas in computer vision: human pose estimation and hand gesture classification. Both chapters report the datasets collected to achieve the task, fulfilling the continuous need for data for this kind of new technology. The Dynamic Vision Sensor Human Pose dataset (DHP19) is an extensive collection of 33 whole-body human actions from 17 subjects. The chapter presents the first benchmark neural network model for 3D pose estimation using DHP19. The network achieves a mean error of less than 8 mm in 3D space, which is comparable with frame-based Human Pose Estimation (HPE) methods. The gesture classification chapter reports an application running on a mobile device and explores future developments in the direction of embedded, portable, low-power devices for online processing. The sparse output from the sensor suggests using a small model with a reduced number of parameters and low power consumption. The thesis also describes pilot results from two other scientific imaging applications, for raindrop size measurement and laser speckle analysis, presented in the appendices.
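Because the event stream described above is sparse and asynchronous, a common first step before feeding it to a conventional network (as in the DHP19 benchmark) is to accumulate events into a frame-like representation. A minimal sketch of that accumulation, where each event is an (x, y, timestamp, polarity) tuple; the sensor resolution used here is an assumed placeholder, not a figure from the thesis:

```python
import numpy as np

# Hedged sketch: accumulate DVS-style events into a signed 2D histogram
# for downstream frame-based processing. Resolution is an assumed value.
WIDTH, HEIGHT = 346, 260  # assumed sensor resolution

def events_to_frame(events):
    """events: iterable of (x, y, timestamp, polarity) tuples, polarity +1/-1.
    Returns a signed accumulation image: brightness-increase events add,
    brightness-decrease events subtract."""
    frame = np.zeros((HEIGHT, WIDTH), dtype=np.int32)
    for x, y, t, polarity in events:
        frame[y, x] += polarity
    return frame

events = [(10, 20, 0.001, +1), (10, 20, 0.002, +1), (11, 20, 0.003, -1)]
frame = events_to_frame(events)
print(frame[20, 10], frame[20, 11])  # → 2 -1
```

Real pipelines typically accumulate over a fixed time window or a fixed event count, and may keep the two polarities in separate channels; this sketch collapses both choices for brevity.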

    Advance Intelligent Video Surveillance System (AIVSS): A Future Aspect

    Over the last few decades, remarkable growth in security-related infrastructure has been seen throughout the world. With this increased demand for security, video-based surveillance has become an important research area. An intelligent video surveillance system monitors activity and changing information, usually concerning human beings, vehicles, or other objects, from a distance by means of electronic equipment (usually a digital camera). The goals of prevention, detection, and intervention have led to the development of practical and consistent video surveillance systems capable of intelligent video processing. In broad terms, advanced video-based surveillance can be described as an intelligent video processing technique designed to assist security personnel by providing reliable real-time alerts and to support efficient video analysis for forensic investigations. This chapter deals with the various requirements for designing a robust and reliable video surveillance system. It also discusses the different types of cameras required in different environmental conditions, such as indoor and outdoor surveillance, and the different modelling schemes required to design an efficient surveillance system under various illumination conditions.
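A basic building block behind the modelling schemes mentioned above is change detection against a background model. A minimal sketch using a running-average background with simple differencing (NumPy only; the threshold and learning rate are illustrative assumptions, not values from the chapter):

```python
import numpy as np

# Hedged sketch of a running-average background model for change detection,
# a basic ingredient of intelligent video surveillance pipelines.
# Threshold and learning rate are illustrative assumptions.

class BackgroundSubtractor:
    def __init__(self, shape, learning_rate=0.05, threshold=25.0):
        self.background = np.zeros(shape, dtype=np.float64)
        self.learning_rate = learning_rate
        self.threshold = threshold

    def apply(self, frame):
        """Return a boolean foreground mask, then update the background
        toward the new frame (slow adaptation to illumination changes)."""
        mask = np.abs(frame.astype(np.float64) - self.background) > self.threshold
        self.background += self.learning_rate * (frame - self.background)
        return mask

# A bright object entering an initially dark scene is flagged as foreground.
sub = BackgroundSubtractor((4, 4))
quiet = np.zeros((4, 4))
sub.apply(quiet)                       # learn the empty scene
moving = quiet.copy(); moving[1, 1] = 200
mask = sub.apply(moving)
print(mask[1, 1], mask[0, 0])  # → True False
```

The slow background update is what lets such a model tolerate gradual illumination changes (dawn, dusk) while still flagging abrupt changes as foreground; robust systems replace the single running average with per-pixel statistical models.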

    Fundamentals of Underwater Vehicle Hardware and Their Applications
