Automating the Surveillance of Mosquito Vectors from Trapped Specimens Using Computer Vision Techniques
Among all animals, mosquitoes are responsible for the most deaths worldwide.
Interestingly, not all types of mosquitoes spread diseases, but rather, a
select few alone are competent enough to do so. In the case of any disease
outbreak, an important first step is surveillance of vectors (i.e., those
mosquitoes capable of spreading diseases). To do this today, public health
workers lay several mosquito traps in the area of interest. Hundreds of
mosquitoes will get trapped. Naturally, among these hundreds, taxonomists have
to identify only the vectors to gauge their density. This process today is
manual, requires complex expertise/training, and is based on visual inspection
of each trapped specimen under a microscope. It is slow, tedious and
self-limiting. This paper presents an innovative solution to this problem. Our
technique assumes the presence of an embedded camera (similar to those in
smart-phones) that can take pictures of trapped mosquitoes. The techniques
proposed here then process these images to automatically classify the genus
and species of each specimen. Our CNN model, based on Inception-ResNet V2 and
transfer learning, yielded an overall accuracy of 80% in classifying mosquitoes when
trained on 25,867 images of 250 trapped mosquito vector specimens captured via
many smart-phone cameras. In particular, the accuracy of our model in
classifying Aedes aegypti and Anopheles stephensi mosquitoes (both of which are
deadly vectors) is amongst the highest. We present important lessons learned
and the practical impact of our techniques towards the end of the paper.
Spatially Aware Computing for Natural Interaction
Spatial information refers to the location of an object in a physical or digital world, as well as its position relative to other objects around it. In this dissertation, three systems that apply spatial information in different fields are designed and developed; the ultimate goal is to increase user friendliness and efficiency in those applications by utilizing spatial information. The first system is a novel Web page data extraction application, which takes advantage of 2D spatial information to discover structured records in a Web page. The extracted information is useful for re-organizing the layout of a Web page to fit mobile browsing. The second application utilizes the 3D spatial information of a mobile device within a large paper-based workspace to implement interactive paper that combines the merits of paper documents and mobile devices. This application can overlay digital information on top of a paper document based on the location of a mobile device within the workspace. The third application further integrates 3D spatial information with sound detection to realize an automatic camera management system. It automatically controls multiple cameras in a conference room and creates an engaging video by intelligently switching camera shots among meeting participants based on their activities. All three applications have been evaluated, and the results are promising. In summary, this dissertation comprehensively explores the use of spatial information in various applications to improve usability.
Calibration Challenges for Future Radio Telescopes
Instruments for radio astronomical observations have come a long way. While
the first telescopes were based on very large dishes and 2-antenna
interferometers, current instruments consist of dozens of steerable dishes,
whereas future instruments will be even larger distributed sensor arrays with a
hierarchy of phased array elements. For such arrays to provide meaningful
output (images), accurate calibration is of critical importance. Calibration
must solve for the unknown antenna gains and phases, as well as the unknown
atmospheric and ionospheric disturbances. Future telescopes will have a large
number of elements and a large field of view. In this case the parameters are
strongly direction dependent, resulting in a large number of unknown parameters
even if appropriately constrained physical or phenomenological descriptions are
used. This makes calibration a daunting parameter estimation task, which is
reviewed from a signal processing perspective in this article.
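The core of the gain/phase estimation problem above can be illustrated, in a much-simplified setting, with a NumPy sketch: with a single known point source and no noise, the measured covariance matrix divided element-wise by the source model is rank one, and its dominant eigenvector recovers the complex antenna gains up to an unobservable global phase. This is only a toy instance of the estimation task, not the article's algorithm; the array size and source model here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8  # hypothetical number of array elements

# True (unknown) complex antenna gains: amplitude and phase errors.
g_true = (1 + 0.2 * rng.normal(size=N)) * np.exp(1j * rng.uniform(-0.5, 0.5, N))

# Assume the source coherence matrix C is known from a sky model; here a
# single unpolarized point source at the phase centre, so C is all ones.
C = np.ones((N, N), dtype=complex)

# Measured covariance (visibility) matrix, noise-free: R = G C G^H.
R = np.outer(g_true, g_true.conj()) * C

# Divide out the model element-wise, leaving the rank-1 matrix g g^H;
# its dominant eigenvector gives the gains up to one overall phase.
M = R / C
w, V = np.linalg.eigh(M)          # eigenvalues in ascending order
g_est = np.sqrt(w[-1]) * V[:, -1]

# The global phase is unobservable; align it to the truth via antenna 0
# (in practice one simply fixes a reference antenna's phase to zero).
g_est *= np.exp(-1j * np.angle(g_est[0])) * np.exp(1j * np.angle(g_true[0]))

err = np.max(np.abs(g_est - g_true))
print(f"max gain error: {err:.2e}")
```

Real calibration is far harder than this sketch suggests, which is the article's point: with direction-dependent gains, many sources, and noise, the rank-1 structure disappears and the number of unknowns explodes.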
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists in the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper serves
simultaneously as a position paper and a tutorial for users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
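The "de-facto standard formulation" the survey refers to is maximum a posteriori estimation over a factor graph. A standard sketch, using the usual symbols rather than notation quoted from the paper:

```latex
% X: robot trajectory and map variables; Z = \{z_k\}: measurements, each
% modeled as z_k = h_k(X_k) + \epsilon_k with \epsilon_k \sim \mathcal{N}(0, \Omega_k^{-1}).
X^\star = \operatorname*{argmax}_X \; p(X \mid Z)
        = \operatorname*{argmin}_X \sum_k \left\| h_k(X_k) - z_k \right\|^2_{\Omega_k}
```

Each measurement term is a factor in the graph, and the resulting nonlinear least-squares problem is typically solved with iterative methods such as Gauss-Newton or Levenberg-Marquardt.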