MilliSonic: Pushing the Limits of Acoustic Motion Tracking
Recent years have seen interest in device tracking and localization using
acoustic signals. State-of-the-art acoustic motion tracking systems however do
not achieve millimeter accuracy and require large separation between
microphones and speakers, and as a result, do not meet the requirements for
many VR/AR applications. Further, tracking multiple concurrent acoustic
transmissions from VR devices today requires sacrificing accuracy or frame
rate. We present MilliSonic, a novel system that pushes the limits of acoustic
based motion tracking. Our core contribution is a novel localization algorithm
that can provably achieve sub-millimeter 1D tracking accuracy in the presence
of multipath, while using only a single beacon with a small 4-microphone
array. Further, MilliSonic enables concurrent tracking of up to four smartphones
without reducing frame rate or accuracy. Our evaluation shows that MilliSonic
achieves 0.7mm median 1D accuracy and a 2.6mm median 3D accuracy for
smartphones, which is 5x more accurate than state-of-the-art systems.
MilliSonic enables two previously infeasible interaction applications: a) 3D
tracking of VR headsets using the smartphone as a beacon and b) fine-grained 3D
tracking for the Google Cardboard VR system using a small microphone array.
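Acoustic ranging of this kind is commonly built on chirp (FMCW) signals, where the distance is encoded in the beat frequency between the transmitted and received chirps. The following is a minimal single-path, noiseless 1D ranging sketch for intuition only; MilliSonic's actual algorithm is a phase-based formulation designed to stay accurate under multipath, and its signal parameters differ from the illustrative ones assumed here.

```python
import numpy as np

# Illustrative parameters, not MilliSonic's actual signal design.
fs = 48_000                           # sample rate (Hz)
c = 343.0                             # speed of sound (m/s)
f0, bw, T = 17_000.0, 6_000.0, 0.04   # chirp start frequency, bandwidth, duration
slope = bw / T                        # chirp rate (Hz/s)
t = np.arange(int(fs * T)) / fs
tx = np.cos(2 * np.pi * (f0 * t + 0.5 * slope * t ** 2))

def fmcw_range(distance_m):
    """Estimate 1D range from the beat frequency of a delayed chirp echo."""
    tau = distance_m / c                                  # propagation delay
    rx = np.cos(2 * np.pi * (f0 * (t - tau) + 0.5 * slope * (t - tau) ** 2))
    beat = tx * rx                                        # mixing: beat freq = slope * tau
    n = 1 << 18                                           # zero-pad for fine peak picking
    spec = np.abs(np.fft.rfft(beat * np.hanning(len(beat)), n=n))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    band = freqs < 2_000.0                                # beat tone lives well below f0
    f_beat = freqs[band][np.argmax(spec[band])]
    return f_beat / slope * c                             # invert f_beat = slope * d / c
```

In practice, multipath reflections, noise, and clock offsets between uncoordinated devices dominate the error budget, which is exactly what limits prior systems to centimeter-level accuracy.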
Pushbroom Stereo for High-Speed Navigation in Cluttered Environments
We present a novel stereo vision algorithm that is capable of obstacle
detection on a mobile CPU at 120 frames per second. Our system
performs a subset of standard block-matching stereo processing, searching only
for obstacles at a single depth. By using an onboard IMU and state-estimator,
we can recover the position of obstacles at all other depths, building and
updating a full depth-map at framerate.
Here, we describe both the algorithm and our implementation on a high-speed,
small UAV, flying at over 20 MPH (9 m/s) close to obstacles. The system
requires no external sensing or computation and is, to the best of our
knowledge, the first high-framerate stereo detection system running onboard a
small UAV.
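The core trick, matching at only a single disparity and letting the state estimator carry detections to other depths, can be illustrated in a few lines of NumPy. This is a simplified sketch with assumed thresholds, not the paper's implementation (which adds further filtering and runs on real rectified camera frames):

```python
import numpy as np

def block_sum(a, block):
    """Sum of every block x block window of `a` (valid positions only)."""
    h, w = a.shape
    out = np.zeros((h - block + 1, w - block + 1))
    for dy in range(block):
        for dx in range(block):
            out += a[dy:dy + h - block + 1, dx:dx + w - block + 1]
    return out

def single_disparity_obstacles(left, right, d, block=5,
                               sad_thresh=0.1, tex_thresh=0.02):
    """Flag windows that match well at the single disparity d, i.e. scene
    points near the one depth that d corresponds to. Thresholds are
    illustrative. Images: rectified float arrays with values in [0, 1]."""
    h, w = left.shape
    shifted = np.zeros_like(right)
    shifted[:, d:] = right[:, :w - d]              # right image shifted by d pixels
    sad = block_sum(np.abs(left - shifted), block) / block ** 2
    mean = block_sum(left, block) / block ** 2     # reject textureless windows,
    var = block_sum(left ** 2, block) / block ** 2 - mean ** 2
    return (sad < sad_thresh) & (var > tex_thresh)
```

Searching one disparity instead of a full range is what collapses the cost of block matching enough to reach 120 fps on a mobile CPU.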
Computer-assisted polyp matching between optical colonoscopy and CT colonography: a phantom study
Potentially precancerous polyps detected with CT colonography (CTC) need to
be removed subsequently, using an optical colonoscope (OC). Due to large
colonic deformations induced by the colonoscope, even very experienced
colonoscopists find it difficult to pinpoint the exact location of the
colonoscope tip in relation to polyps reported on CTC. This can cause unduly
prolonged OC examinations that are stressful for the patient, colonoscopist and
supporting staff.
We developed a method, based on monocular 3D reconstruction from OC images,
that automatically matches polyps observed in OC with polyps reported on prior
CTC. A matching cost is computed, using rigid point-based registration between
surface point clouds extracted from both modalities. A 3D printed and painted
phantom of a 25 cm long transverse colon segment was used to validate the
method on two medium-sized polyps. Results indicate that the matching cost is
smaller at the correct corresponding polyp between OC and CTC: the cost at the
incorrect polyp is 3.9 times higher than at the correct one. Furthermore, we
evaluate the matching of the
reconstructed polyp from OC with other colonic endoluminal surface structures
such as haustral folds and show that there is a minimum at the correct polyp
from CTC.
Automated matching between polyps observed at OC and prior CTC would
facilitate the biopsy or removal of true-positive pathology or exclusion of
false-positive CTC findings, and would reduce colonoscopy false-negative
(missed) polyps. Ultimately, such a method might reduce healthcare costs,
patient inconvenience and discomfort.
Comment: This paper was presented at the SPIE Medical Imaging 2014 conference.
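Rigid point-based registration with known correspondences has a closed-form solution (the Kabsch/Procrustes SVD), and the residual after optimal alignment can serve as a matching cost of the kind the paper computes. A minimal sketch, assuming correspondences between the two point clouds are already established (which the actual pipeline must do itself, and ignoring the scale ambiguity inherent in monocular reconstruction):

```python
import numpy as np

def rigid_matching_cost(src, dst):
    """RMS residual after the best rigid (rotation + translation) alignment
    of corresponding 3D points src -> dst, via the Kabsch/Procrustes SVD."""
    src_c = src - src.mean(axis=0)                     # remove translation
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)          # covariance H = Sc^T Dc
    sign = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflection
    rot = vt.T @ np.diag([1.0, 1.0, sign]) @ u.T       # optimal rotation, dst ~ rot @ src
    residual = src_c @ rot.T - dst_c
    return float(np.sqrt((residual ** 2).sum(axis=1).mean()))
```

A low cost indicates the two surface patches can be brought into rigid agreement, which is why the cost attains its minimum at the correct corresponding polyp.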
A multi-projector CAVE system with commodity hardware and gesture-based interaction
Spatially-immersive systems such as CAVEs provide users with surrounding worlds by projecting 3D models on multiple screens around the viewer. Compared to alternative immersive systems such as HMDs, CAVE systems are a powerful tool for collaborative inspection of virtual environments due to better use of peripheral vision, less sensitivity to tracking errors, and higher communication possibilities among users. Unfortunately, traditional CAVE setups require sophisticated equipment including stereo-ready projectors and tracking systems with high acquisition and maintenance costs. In this paper we present the design and construction of a passive-stereo, four-wall CAVE system based on commodity hardware. Our system works with any mix of a wide range of projector models that can be replaced independently at any time, and achieves high resolution and brightness at a minimum cost. The key ingredients of our CAVE are a self-calibration approach that guarantees continuity across the screen, as well as a gesture-based interaction approach based on a clever
combination of skeletal data from multiple Kinect sensors.
Preprint.
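One plausible way to combine skeletal data from several Kinect sensors, sketched here under an assumed data layout rather than as the paper's actual fusion scheme: transform each sensor's joints into a shared world frame using that sensor's calibration matrix, then take a per-joint confidence-weighted average.

```python
import numpy as np

def fuse_skeletons(joint_sets, calibrations, confidences):
    """joint_sets:   list of (J, 3) joint positions, one per sensor frame
    calibrations:    list of (4, 4) rigid transforms, sensor -> world
    confidences:     list of (J,) per-joint tracking weights
    Returns (J, 3) fused joints. Names and the averaging scheme are
    illustrative assumptions, not taken from the paper."""
    num, den = 0.0, 0.0
    for joints, cal, w in zip(joint_sets, calibrations, confidences):
        homog = np.hstack([joints, np.ones((len(joints), 1))])
        world = (homog @ cal.T)[:, :3]        # into the shared world frame
        num = num + w[:, None] * world
        den = den + w[:, None]
    return num / den                          # confidence-weighted per-joint mean
```

Weighting by per-joint confidence lets one sensor compensate for joints that another sensor sees occluded, which is the main benefit of using multiple Kinects around a CAVE.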
Diluting the Scalability Boundaries: Exploring the Use of Disaggregated Architectures for High-Level Network Data Analysis
Traditional data centers are designed with a rigid architecture of
fit-for-purpose servers that provision resources beyond the average workload in
order to deal with occasional peaks of data. Heterogeneous data centers are
pushing towards more cost-efficient architectures with better resource
provisioning. In this paper we study the feasibility of using disaggregated
architectures for intensive data applications, in contrast to the monolithic
approach of server-oriented architectures. Particularly, we have tested a
proactive network analysis system in which the workload demands are highly
variable. In the context of the dReDBox disaggregated architecture, the results
show that the overhead caused by using remote memory resources is significant,
between 66% and 80%, but we have also observed that the memory usage is one
order of magnitude higher for the stress case with respect to average
workloads. Therefore, dimensioning memory for the worst case in conventional
systems will result in a notable waste of resources. Finally, we found that,
for the selected use case, parallelism is limited by memory. Therefore, using a
disaggregated architecture will allow for increased parallelism, which, at the
same time, will mitigate the overhead caused by remote memory.
Comment: 8 pages, 6 figures, 2 tables, 32 references. Pre-print. The paper
will be presented during the IEEE International Conference on High
Performance Computing and Communications in Bangkok, Thailand, 18-20
December 2017. To be published in the conference proceedings.
BTeV Level 1 Vertex Trigger
BTeV is a B-physics experiment that expects to begin collecting data at the
C0 interaction region of the Fermilab Tevatron in the year 2006. Its primary
goal is to achieve unprecedented levels of sensitivity in the study of CP
violation, mixing, and rare decays in b and c quark systems. In order to
realize this, it will employ a state-of-the-art first-level vertex trigger
(Level 1) that will look at every beam crossing to identify detached secondary
vertices that provide evidence for heavy quark decays. This talk will briefly
describe the BTeV detector and trigger, focus on the software and hardware
aspects of the Level 1 vertex trigger, and describe work currently being done
in these areas.