
    Deep-Learning-Based Computer Vision Approach For The Segmentation Of Ball Deliveries And Tracking In Cricket

    There has been a significant increase in the adoption of technology in cricket recently. This trend has created the problem of duplicate work across similar computer vision-based research efforts. Our research addresses one of these problems by segmenting ball deliveries in a cricket broadcast using deep learning models (MobileNet and YOLO), enabling other researchers to use our work as a dataset for their research. The output from our research can be used by cricket coaches and players to analyze the ball deliveries played during a match. This paper presents an approach to segment and extract video shots in which only the ball is being delivered. A video shot is a series of continuous frames that make up a single scene of the video. Object detection models are applied to achieve a high level of accuracy in correctly extracting video shots. A proof of concept for building large datasets of video shots of ball deliveries is proposed, paving the way for further processing of those shots to extract semantics. Ball tracking in these video shots is also performed using a separate RetinaNet model as a sample of the usefulness of the proposed dataset. The position on the cricket pitch where the ball lands is extracted by tracking the ball along the y-axis. The video shot is then classified as a full-pitched, good-length, or short-pitched delivery.
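
    A minimal sketch of the final classification step described above, assuming the standard behind-the-bowler broadcast view in which the batter appears near the top of the frame (image y grows downward). The function name and pixel thresholds are hypothetical; the thesis derives the actual pitch regions from its own footage.

    def classify_delivery(pitch_y: float, good_length_band=(420, 520)) -> str:
        """Label a delivery from the image row (in pixels) of its bounce point.

        Smaller y means the ball pitched closer to the batter, who sits higher
        in the frame in a behind-the-bowler broadcast view.
        """
        lo, hi = good_length_band
        if pitch_y < lo:
            return "full-pitched"    # bounced close to the batter
        elif pitch_y <= hi:
            return "good-length"
        return "short-pitched"       # bounced close to the bowler

    # Example: a bounce point detected at image row 455 falls in the band.
    print(classify_delivery(455.0))  # -> "good-length"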

    Efficient matching of robust features for embedded SLAM

    With the development of computer technology, cameras can be used to obtain visual information from the environment and convert it into digital signals. The whole process of acquiring, processing, analyzing, and understanding visual information by a computer system has developed into a research field of its own: Computer Vision. More and more mobile applications are equipped with cameras for visual perception, environment analysis, decision making, and localization. The motion of the camera system can be estimated by comparing the current frame with the previous frame. Feature-based image matching approaches detect distinctive and robust features in images and find the best match between an image pair based on the similarity of those features. Because of its high efficiency, robustness, and noise resistance, image matching based on local point features has become a widely accepted and utilized method in recent years. A wide range of feature detectors and feature descriptors have been proposed; the purpose of this thesis is a performance comparison between the best-known and newly proposed feature descriptors. A systematic comparison program is implemented to evaluate the performance of different feature descriptors under varying image deformations. The evaluation is performed by comparing the number of keypoints, quality measures, time consumption, and position error. After the evaluation on static image pairs, a real-time application is implemented to compare real-time performance. The results obtained in this thesis can help in choosing the most suitable feature descriptors for a given application environment.
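
    The core of such a comparison might look like the sketch below, which times detection plus description and counts cross-checked matches for one binary descriptor (ORB, via OpenCV) on a single image pair. This only illustrates the evaluation loop, assuming placeholder file names; the thesis additionally measures quality metrics and position error for many descriptors.

    import time

    import cv2

    def benchmark_orb(path_a: str, path_b: str) -> dict:
        """Detect, describe, and match ORB features on one image pair."""
        img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
        img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
        if img_a is None or img_b is None:
            raise FileNotFoundError("could not read one of the input images")

        orb = cv2.ORB_create()
        t0 = time.perf_counter()
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)
        elapsed = time.perf_counter() - t0

        # Brute-force Hamming matching with cross-checking is the usual
        # choice for binary descriptors such as ORB.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des_a, des_b)

        return {"keypoints": (len(kp_a), len(kp_b)),
                "matches": len(matches),
                "detect_describe_time_s": elapsed}

    # print(benchmark_orb("frame_000.png", "frame_001.png"))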

    Mitigating Backgrounds with a Novel Thin-Film Cathode in the DRIFT-IId Dark Matter Detector

    The nature of dark matter, which comprises 85% of the matter density in the universe, is a major outstanding question in physics today. The standard hypothesis is that dark matter is a new weakly interacting massive particle present throughout the galaxy. These particles could interact within detectors on Earth, producing low-energy nuclear recoils. Two distinctive signatures arise from the solar motion through the galaxy. The DRIFT experiment aims to measure one of these, the directional signature based on the sidereal modulation of the nuclear recoil directions. Although DRIFT has demonstrated its capability for detecting this signature, it has been plagued by a large number of backgrounds that have limited its reach. The focus of this thesis is on characterizing these backgrounds and describing techniques that have essentially eliminated them. The background events in the DRIFT-IId detector are predominantly caused by alpha decays on the central cathode in which the alpha particle is completely or partially absorbed by the cathode material. This thesis describes the installation of a 0.9 Όm thick aluminized-Mylar cathode as a way to reduce the probability of producing these backgrounds. We study three generations of cathode (wire, thin-film, and radiologically clean thin-film) with a focus on identifying and quantifying the sources of alpha decay backgrounds, as well as their contributions to the background rate in the detector. This in-situ study is based on alpha range spectroscopy and the determination of the absolute alpha detection efficiency. The results for the final, radiologically clean version of the cathode give a contamination of 3.3 ± 0.1 ppt 234U and 73 ± 2 ppb 238U, and an efficiency for rejecting a radon progeny recoil (RPR) from an alpha decay that is a factor of 70 ± 20 higher than for the original wire cathode. Along with other background reduction measures, the thin-film cathode has reduced the observed background rate in the DRIFT experiment from 130/day to 1.7/day. The complete elimination of the remaining RPR backgrounds requires fiducialization of the detector along the drift direction. We describe two methods for doing this: one involving the detection of positive ions at the cathode, and the other using multiple species of charge carriers with variable drift speeds. With the recent successful implementation of the latter technique, the DRIFT experiment has run background-free for 46 days.

    Transient Safe Operating Area (TSOA) For ESD Applications

    A methodology for obtaining design guidelines for gate oxide input pin protection and high voltage output pin protection in the Electrostatic Discharge (ESD) time frame is developed through measurements and Technology Computer Aided Design (TCAD). A set of parameters based on transient measurements is used to define the Transient Safe Operating Area (TSOA). The parameters are then used to assess the effectiveness of protection devices for output and input pins. The methodology for input pins includes establishing ESD design targets under Charged Device Model (CDM) type stress in low voltage MOS inputs. The methodology for output pins includes defining ESD design targets under Human Metal Model (HMM) type stress in high voltage Laterally Diffused MOS (LDMOS) outputs. First, the robustness of the standalone LDMOS is assessed, followed by the establishment of protection design guidelines. Second, standalone clamp HMM robustness is evaluated and a prediction methodology for HMM type stress is developed based on standardized testing. Finally, conditions for parallel protection by the LDMOS and the protection clamp are identified.

    Resilient Peer-to-Peer Ranging using Narrowband High-Performance Software-Defined Radios for Mission-Critical Applications

    There has been a growing need for resilient positioning in numerous military and emergency-service applications that routinely conduct operations requiring an uninterrupted positioning service. However, the level of resilience required for these applications is difficult to achieve using the popular navigation and positioning systems available at the time of this writing. Most of these systems depend on existing infrastructure to function or have vulnerabilities that can be too easily exploited by hostile forces. Mobile ad-hoc networks can bypass some of these prevalent issues, making them an auspicious topic for positioning and navigation research and development. Such networks consist of portable devices that collaborate to form wireless communication links with one another and collectively carry out vital network functions independent of any fixed centralized infrastructure. The purpose of the research presented in this thesis is to adapt the protocols of an existing narrowband mobile ad-hoc communications system provided by Terrafix to enable range measuring for positioning. This is done by extracting transmission and reception timestamps of signals exchanged between neighbouring radios in the network with the highest precision possible. However, many aspects of the radios forming this network are generally not conducive to precise ranging, so the ranging protocols implemented need to either maneuver around these shortcomings or compensate for the resulting loss of precision. In particular, the narrow bandwidth of the signals drastically reduces the resolution of symbol timing. The objective is to determine what level of accuracy and precision is possible using this radio network and whether further investment in development can be justified. Early experiments provided a simple ranging demonstration in a benign environment, using the existing synchronization protocols, by extracting time data. The experiments then advanced to the radio’s signal processing, adjusting the synchronization protocols to maximize symbol timing precision and correct for clock drift. By applying innovative synchronization techniques to the radio network, ranging data collected under benign conditions can exhibit a standard deviation of less than 3 m. The lowest standard deviation achieved using only the existing methods of synchronization was over two orders of magnitude greater. All this is achieved in spite of the very narrow 10–20 kHz bandwidth of the radio signals, which makes producing range estimates with an error of less than 10–100 m much more challenging compared to wider-bandwidth systems. However, this figure depends on the relative motion of neighbouring radios in the network and on how frequently range estimates need to be made. This thesis demonstrates how such precision may be obtained and how this figure is likely to hold up in conditions that are not ideal.
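
    As an illustration of how a single range estimate follows from the four timestamps described above, here is the textbook symmetric two-way time-of-flight computation. This is a generic formulation, not the Terrafix protocol itself, and it omits the clock-drift and symbol-timing corrections that the thesis concentrates on.

    C = 299_792_458.0  # speed of light in m/s

    def two_way_range(t1_tx: float, t2_rx: float, t3_tx: float, t4_rx: float) -> float:
        """Range in metres from one request/response exchange.

        t1_tx: radio A transmits the request   t2_rx: radio B receives it
        t3_tx: radio B transmits the response  t4_rx: radio A receives it
        Each radio's pair of timestamps is on its own local clock, so any
        constant clock offset between A and B cancels out.
        """
        round_trip = t4_rx - t1_tx   # measured entirely on A's clock
        turnaround = t3_tx - t2_rx   # measured entirely on B's clock
        time_of_flight = (round_trip - turnaround) / 2.0
        return C * time_of_flight

    # Example: a 100 ns one-way flight with a 1 ms turnaround gives ~30 m,
    # regardless of the 0.5 s offset between the two clocks.
    print(two_way_range(0.0, 0.5 + 1e-7, 0.5 + 1e-3 + 1e-7, 1e-3 + 2e-7))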

    Athermal Phonon Sensors in Searches for Light Dark Matter

    In recent years, theoretical and experimental interest in dark matter (DM) candidates has shifted focus from primarily Weakly Interacting Massive Particles (WIMPs) to an entire suite of candidates with masses ranging from the zeV scale to the PeV scale and up to 30 solar masses. One particular recent development has been searches for light dark matter (LDM), typically defined as candidates with masses in the range of keV to GeV. In searches for LDM, detector thresholds at the eV scale and below are needed to detect the small amount of kinetic energy imparted to nuclei in a recoil. One detector technology that can be applied to LDM searches is the Transition-Edge Sensor (TES). Operated at cryogenic temperatures, these sensors can achieve the required thresholds, depending on the optimization of the design. In this thesis, I will motivate the evidence for DM and the various DM candidates beyond the WIMP. I will then detail the basics of TES characterization, expand and apply the concepts to an athermal phonon sensor-based Cryogenic PhotoDetector (CPD), and use this detector to carry out a search for LDM at the surface. The resulting exclusion analysis provides the most stringent limits on the DM-nucleon scattering cross section (compared to contemporary searches) for a cryogenic detector for masses from 93 to 140 MeV, showing the promise of athermal phonon sensors in future LDM searches. Furthermore, unknown excess background signals are observed in this LDM search, for which I rule out various possible sources and motivate stress-related microfractures as an intriguing explanation. Finally, I will briefly discuss the outlook of future searches for LDM for various detection channels beyond nuclear recoils.
    Comment: 243 pages, Ph.D. thesis in Physics at UC Berkeley

    Remote Entanglement of Trapped Atomic Ions.

    Since the development of quantum mechanics almost a century ago, there has been considerable controversy over the interpretations and predictions of quantum theory. Owing to the counterintuitive predictions of quantum mechanics, Einstein himself wondered whether the theory should be considered complete. While these questions have troubled many physicists over the past century, the development of the new field of quantum information science, and the applications that may result from large-scale quantum systems, have brought many of these fundamental questions of quantum mechanics into the mainstream of not only theoretical but also experimental physics. This thesis deals with a system at the heart of these questions: the first demonstration of quantum entanglement of two individual massive particles at a distance. I describe a theoretical and experimental framework for entanglement of two particles using trapped atomic ions. Trapped ions are among the most attractive systems for scalable quantum information protocols because they can be well isolated from the environment and manipulated easily with lasers. Using our trapped ion system, I show the first explicit demonstration of quantum entanglement between matter and light using a single ion and its single emitted photon. Further, by combining two such ion-photon entangled systems, I demonstrate the entanglement of two remotely located ions. These entanglement protocols, together with recent developments in trapped ion quantum computing, can be used to create a platform for a scalable quantum information network, and perhaps confront some of the strangest features of quantum mechanics.
    Ph.D., Physics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/55675/2/dmoehrin_1.pd

    DYNAMIC THERMAL MANAGEMENT FOR MICROPROCESSORS THROUGH TASK SCHEDULING

    With continuous IC (Integrated Circuit) technology size scaling, more and more transistors are integrated in a tiny area of the processor. Microprocessors experience unprecedentedly high power and high temperatures on chip, which can easily violate thermal constraints. High temperature on the chip, if not controlled, can damage or even burn the chip. There are also emerging technologies that can exacerbate the thermal conditions on modern processors. For example, 3D stacking is an IC technology that stacks several die layers together in order to shorten the communication paths between the dies and improve chip performance. This technology unfortunately increases the power density per unit volume, and the heat from each layer needs to dissipate vertically through the same heat sink. Another example is the chip multi-processor. A chip multi-processor (CMP) integrates two or more independent processors (called “cores”) onto a single integrated circuit die. As IC technology nodes continually scale down to 45 nm and below, there is significant within-die process variation (PV) in current and near-future CMPs. Process variation makes the cores on a chip differ in their maximum operable frequency and in the amount of leakage power they consume. This can result in immense spatial variation of the temperatures of the cores on the same chip, meaning the temperatures of some cores can be much higher than those of others. One of the most commonly used methods to keep a CPU from overheating is hardware dynamic thermal management (HW DTM), owing to the high cost and inefficiency of current mechanical cooling techniques. Dynamic voltage/frequency scaling (DVFS) is a broad-spectrum dynamic thermal management technique that can be applied to all types of processors, so we adopt DVFS as the HW DTM method in this thesis to simplify the discussion. DVFS lowers CPU power consumption by reducing CPU frequency or voltage when the temperature overshoots, which constrains the temperature at the price of performance loss, in terms of reduced CPU throughput or longer execution times of the programs. This thesis mainly addresses this problem, with the goal of eliminating unnecessary hardware-level DVFS and improving chip performance. The methodology of the experiments in this thesis is based on accurate estimation of power and temperature on the processor. The CPU power usage of different benchmarks is estimated by reading the performance counters on a real P4 chip and measuring the activities of different CPU functional units. The jobs are then categorized into power-intensive (hot) ones and non-power-intensive (cool) ones. Many combinations of jobs with mixed power (thermal) characteristics are used to evaluate the effectiveness of the algorithms we propose. When the experiments are conducted on a single-core processor, a compact dynamic thermal model embedded in the Linux kernel is used to calculate the CPU temperature. When the experiments are conducted on a CMP with 3D stacked dies, or on a CMP affected by significant process variation, a thermal simulation tool well recognized in academia is used. The contribution of this thesis is that it proposes new software-level task scheduling algorithms to avoid unnecessary hardware-level DVFS. New task scheduling algorithms are proposed not only for the single-core processor, but also for the CMP with 3D stacked dies and the CMP under process variation. Compared with state-of-the-art algorithms proposed by other researchers, the new algorithms we propose all show significant performance improvement. To improve the performance of single-core processors, which is harmed by thermal overshoots and HW DTM, we propose a heuristic algorithm named ThreshHot, which judiciously schedules hot jobs before cool jobs to make future temperatures lower. Furthermore, it always keeps the temperature as close to the threshold as possible without overshooting. For CMPs with 3D stacked dies, three heuristics are proposed and combined into one algorithm. First, the vertically stacked cores are treated as a core stack, and the power of jobs is balanced among the core stacks instead of among individual cores. Second, hot jobs are moved close to the heat sink to expedite heat dissipation. Third, when thermal emergencies happen, the most power-intensive job in a core stack is penalized in order to lower the temperature quickly. When CMPs are under significant process variation, each core on the CMP has a distinct maximum frequency and leakage power. Maximizing the overall CPU throughput across all cores conflicts with satisfying the on-chip thermal constraint imposed on each core. A maximum bipartite matching algorithm is used to solve this dilemma and exploit the maximum performance of the chip.
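
    The assignment step for the process-variation case can be illustrated as a maximum-weight bipartite matching between jobs and cores. The sketch below, assuming an invented throughput matrix in place of the thesis's actual cost model, uses SciPy's linear-sum-assignment solver.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # throughput[i][j]: performance of job i on core j (arbitrary units),
    # already capped at the frequency each core can sustain without
    # violating its thermal constraint under process variation.
    throughput = np.array([
        [1.8, 2.4, 1.1],
        [2.0, 2.1, 1.5],
        [0.9, 1.7, 1.6],
    ])

    # The solver minimizes total cost, so negate to maximize throughput.
    jobs, cores = linear_sum_assignment(-throughput)
    for job, core in zip(jobs, cores):
        print(f"job {job} -> core {core} (throughput {throughput[job, core]})")
    print("total throughput:", throughput[jobs, cores].sum())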

    Precision mass measurements for the astrophysical rp-process and electron cooling of trapped ions

    Precision mass measurements of rare isotopes with decay half-lives far below one second are important to a variety of applications, including studies of nuclear structure and nuclear astrophysics as well as tests of fundamental symmetries. The first part of this thesis discusses mass measurements of neutron-deficient gallium isotopes in the direct vicinity of the proton drip line. The reported measurements of 60-63Ga were performed with the MR-TOF-MS of TRIUMF's Ion Trap for Atomic and Nuclear Science (TITAN) in Vancouver, Canada. The measurements mark the first direct mass determination of 60Ga and yield a 61Ga mass value three times more precise than the literature value from AME2020. Our 60Ga mass value constrains the location of the proton drip line in the gallium isotope chain and extends the experimentally evaluated isobaric multiplet mass equation (IMME) for isospin triplets up to A=60. The improved precision of the 61Ga mass has important implications for the astrophysical rapid proton capture process (rp-process). Calculations in a single-zone model demonstrate that the improved mass data substantially reduce uncertainties in the predicted light curves of Type I X-ray bursts. TITAN has demonstrated that charge breeding provides a powerful means to increase the precision and resolving power of Penning trap mass measurements of radioactive ions. However, the charge breeding process deteriorates the ion beam quality, offsetting the benefits associated with Penning trap mass spectrometry of highly charged ions (HCI). As a potential remedy for this loss of beam quality, a cooler Penning trap has been developed to investigate the prospects of electron cooling the HCI prior to the mass measurement. The second part of this thesis reports exploratory studies of electron cooling of singly charged ions in this cooler Penning trap. Comparison of measured ion energy evolutions to a cooling model provides a detailed understanding of the underlying cooling dynamics. Extrapolation of the model enables tentative estimates of the expected cooling times for radioactive HCI.