
    Automated Top View Registration of Broadcast Football Videos

    In this paper, we propose a novel method to register football broadcast video frames on a static top-view model of the playing surface. The proposed method is fully automatic, in contrast to the current state of the art, which requires manual initialization of point correspondences between the image and the static model. Automatic registration with existing approaches has been difficult due to the lack of sufficient point correspondences. We investigate an alternative approach that exploits the edge information from the line markings on the field. We formulate the registration problem as a nearest neighbour search over a synthetically generated dictionary of edge map and homography pairs. The synthetic dictionary generation allows us to exhaustively cover a wide variety of camera angles and positions and reduce the problem to a minimal per-frame edge map matching procedure. We show that the per-frame results can be improved in videos using an optimization framework for temporal camera stabilization. We demonstrate the efficacy of our approach with extensive results on a dataset collected from matches of the 2014 football World Cup.
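    The dictionary-lookup idea above can be sketched in a few lines. This is our own simplification, not the authors' implementation: the names, the tiny 2x2 "edge maps", and the Hamming distance stand in for the paper's rendered field-line images and matching procedure.

```python
# Minimal sketch of nearest-neighbour registration over a synthetic
# dictionary of (edge map, homography) pairs. All names and data are
# illustrative, not from the paper.

def hamming_distance(a, b):
    """Count disagreements between two flattened binary edge maps."""
    return sum(x != y for x, y in zip(a, b))

def register_frame(query_edges, dictionary):
    """Return the homography paired with the closest synthetic edge map.

    dictionary: list of (edge_map, homography) pairs, where edge maps are
    equal-length flattened 0/1 lists and the homography is any identifier
    or matrix associated with the camera pose that rendered the map.
    """
    best = min(dictionary, key=lambda entry: hamming_distance(query_edges, entry[0]))
    return best[1]

# Toy 2x2 "edge maps" standing in for rendered field-line images.
dictionary = [
    ([1, 0, 0, 1], "H_wide_left"),
    ([0, 1, 1, 0], "H_centre"),
    ([1, 1, 0, 0], "H_goal_end"),
]
print(register_frame([0, 1, 1, 1], dictionary))  # nearest entry is H_centre
```

    In the paper the dictionary is generated exhaustively over camera angles and positions, so the per-frame cost reduces to exactly this kind of lookup, followed by temporal smoothing across frames.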

    An Efficient and Robust Tuple Timestamp Hybrid Historical Relational Data Model

    This paper proposes a novel, efficient and robust tuple timestamped hybrid historical relational model for dealing with temporal data. The primary goal of developing this model is to make it easier to manage historical data robustly with minimal space requirements and to retrieve it more quickly and efficiently. The model's efficiency was demonstrated by applying it to an employee database. The proposed model's performance in terms of query execution time and space requirements is compared to a single relational data model. The obtained results show that the proposed model is approximately 20% faster than the conventional single relational data model. Memory consumption results also show that the proposed model's memory cost at different frequencies is significantly reduced: approximately 30% less than the single relational data model for a set of queries. Because net cost is strongly related to query execution time and memory cost, the proposed model's net cost is also significantly reduced. The proposed tuple timestamp hybrid historical model acts as a generic, accurate and robust model. It provides the same functionality as previous versions, as well as the hybrid functionality of previously proposed models, with a significant improvement in query execution speed and memory usage. This model is effective and reliable for use in a wide range of temporal database fields, including insurance, geographic information systems, stocks and finance (e.g. Finacle in banking), data warehousing, scientific databases, legal case histories, and medical records.
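    The core of tuple timestamping can be illustrated with a toy schema. This is a generic sketch of the technique, not the paper's exact hybrid schema: every tuple carries its own validity interval, and an update closes the current tuple rather than overwriting it, so history is preserved in place.

```python
# Generic tuple-timestamping sketch (illustrative schema, not the paper's):
# a NULL valid_to marks the currently valid tuple.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE employee_history (
    emp_id INTEGER, salary INTEGER,
    valid_from TEXT, valid_to TEXT)""")

def update_salary(conn, emp_id, new_salary, when):
    """Close the live tuple, then append the new state with a fresh timestamp."""
    conn.execute("UPDATE employee_history SET valid_to = ? "
                 "WHERE emp_id = ? AND valid_to IS NULL", (when, emp_id))
    conn.execute("INSERT INTO employee_history VALUES (?, ?, ?, NULL)",
                 (emp_id, new_salary, when))

conn.execute("INSERT INTO employee_history VALUES (1, 50000, '2020-01-01', NULL)")
update_salary(conn, 1, 55000, '2021-06-01')
rows = conn.execute("SELECT salary, valid_from, valid_to "
                    "FROM employee_history ORDER BY valid_from").fetchall()
print(rows)  # [(50000, '2020-01-01', '2021-06-01'), (55000, '2021-06-01', None)]
```

    A point-in-time query then reduces to a range predicate on `valid_from`/`valid_to`, which is what makes retrieval fast relative to reconstructing history from separate archive tables.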

    Heat current magnification in Classical and Quantum spin networks

    We investigate heat current magnification due to asymmetry in the number of spins in two-branched classical and quantum spin systems. We begin by studying classical Ising-like spin models using Q2R and CCA dynamics and show that a difference in the number of spins alone is not enough: some other source of asymmetry is required to observe heat current magnification. Unequal spin-spin interaction strength in the upper and lower branches is employed as a source of this asymmetry, and it proves adequate for generating current magnification in both models. Suitable physical motivation is then provided for current magnification in these systems, along with ways to control and manipulate magnification through various system parameters. We also study a five-spin quantum system with a modified Heisenberg XXZ interaction and preserved magnetisation using the Redfield master equation. We show that it is possible to generate current magnification in this model through the asymmetry in the number of spins alone. Our results indicate that the onset of current magnification is accompanied by a dip in the total current flowing through the system. Analysis reveals that this dip might occur because of the intersection of two non-degenerate energy levels for certain values of the asymmetry parameter in the modified XXZ model. We deduce that the additional degeneracy and the ergodic constraint due to fixed magnetisation in the system are the main reasons for current magnification and the other atypical behaviors observed. We then use the concept of `ergotropy' to support these findings. Finally, for both the classical and quantum models, we see that current magnification is only observed when the temperature gradient and the intra-system interaction strength are of a similar order of energy. (Comment: 15 pages, 12 figures, extended version)

    Tribological properties of plunging-type textured surfaces produced by modulation-assisted machining

    Surface texturing technology has started to gain attention in the tribology community as a method for improving the friction and lubrication behavior of various mechanical components. Micro-sized depressions (e.g., grooves or dimples) on frictional surfaces act as fluid reservoirs and promote the retention of a lubricating thin film between mating components. Several fabrication techniques can be used to produce micro-dimple patterns on surfaces, but most show limitations in practical applications. Modulation-assisted machining (MAM) provides a cost-effective approach for creating surface textures over large areas and offers high control over the characteristic geometry of the textured surface. In this work, the effects of surface texturing and the influence of the dimensions of micro-sized depressions produced by MAM on wear performance are studied. Alloy 360 brass is mated with AISI 440C steel and studied using a ball-on-flat reciprocating configuration. Lubricated wear tests are carried out under conditions of variable frequency and normal load. The textured surfaces exhibited reduced wear at the highest frequency studied. The tribological performance of the surfaces is observed to depend on the size of the micro-dimples. The authors acknowledge financial support from the FEAD grant program at the Rochester Institute of Technology and from NSF grants CMMI 1130852 and 125481.

    StaticFixer: From Static Analysis to Static Repair

    Static analysis tools are traditionally used to detect and flag programs that violate properties. We show that static analysis tools can also be used to perturb programs that satisfy a property, constructing variants that violate it. Using this insight, we can construct paired datasets of unsafe-safe program pairs and learn strategies to automatically repair property violations. We present a system called StaticFixer, which automatically repairs information flow vulnerabilities using this approach. Since information flow properties are non-local (both to check and to repair), StaticFixer also introduces a novel domain-specific language (DSL) and strategy learning algorithms for synthesizing non-local repairs. We use StaticFixer to synthesize strategies for repairing two types of information flow vulnerabilities, unvalidated dynamic calls and cross-site scripting, and show that StaticFixer successfully repairs several hundred vulnerabilities from open-source JavaScript repositories, outperforming neural baselines built using CodeT5 and Codex. Our datasets can be downloaded from http://aka.ms/StaticFixer.
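    The paired-data construction can be illustrated with a deliberately tiny example. This is our own simplification, not StaticFixer's DSL: we "perturb" a safe program by stripping a hypothetical `sanitize(...)` wrapper, yielding an (unsafe, safe) pair of the kind a repair model can be trained on.

```python
# Toy illustration of building unsafe-safe program pairs by perturbation.
# The sanitize() helper and the program strings are hypothetical examples,
# not the paper's data or DSL.
import re

def make_unsafe(safe_program):
    """Strip a sanitize(...) wrapper to inject an XSS-style flaw."""
    return re.sub(r"sanitize\((.*?)\)", r"\1", safe_program)

safe = 'html = "<p>" + sanitize(user_input) + "</p>"'
pair = (make_unsafe(safe), safe)  # (violating variant, repaired original)
print(pair[0])  # html = "<p>" + user_input + "</p>"
```

    The inversion is the useful trick: a checker that can only *detect* missing sanitization becomes, run in reverse, a generator of labeled training pairs at scale.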

    The Long-Baseline Neutrino Experiment: Exploring Fundamental Symmetries of the Universe

    The preponderance of matter over antimatter in the early Universe, the dynamics of the supernova bursts that produced the heavy elements necessary for life, and whether protons eventually decay --- these mysteries at the forefront of particle physics and astrophysics are key to understanding the early evolution of our Universe, its current state and its eventual fate. The Long-Baseline Neutrino Experiment (LBNE) represents an extensively developed plan for a world-class experiment dedicated to addressing these questions. LBNE is conceived around three central components: (1) a new, high-intensity neutrino source generated from a megawatt-class proton accelerator at Fermi National Accelerator Laboratory, (2) a near neutrino detector just downstream of the source, and (3) a massive liquid argon time-projection chamber deployed as a far detector deep underground at the Sanford Underground Research Facility. This facility, located at the site of the former Homestake Mine in Lead, South Dakota, is approximately 1,300 km from the neutrino source at Fermilab -- a distance (baseline) that delivers optimal sensitivity to neutrino charge-parity symmetry violation and mass ordering effects. This ambitious yet cost-effective design incorporates scalability and flexibility and can accommodate a variety of upgrades and contributions. With its exceptional combination of experimental configuration, technical capabilities, and potential for transformative discoveries, LBNE promises to be a vital facility for the field of particle physics worldwide, providing physicists from around the globe with opportunities to collaborate in a twenty- to thirty-year program of exciting science. In this document we provide a comprehensive overview of LBNE's scientific objectives, its place in the landscape of neutrino physics worldwide, the technologies it will incorporate and the capabilities it will possess. (Comment: Major update of previous version. This is the reference document for the LBNE science program and current status. Chapters 1, 3, and 9 provide a comprehensive overview of LBNE's scientific objectives, its place in the landscape of neutrino physics worldwide, the technologies it will incorporate and the capabilities it will possess. 288 pages, 116 figures.)
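    The role of the baseline can be motivated with the standard leading-order two-flavour oscillation probability (a textbook approximation, not a formula quoted from the LBNE document):

```latex
P(\nu_\mu \to \nu_e) \;\approx\; \sin^2(2\theta_{13})\,
\sin^2\!\left(\frac{1.27\,\Delta m^2_{31}\,[\mathrm{eV}^2]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right)
```

    The probability depends on the ratio L/E, so for a fixed beam-energy range the baseline L sets where the oscillation maxima fall; with \(\Delta m^2_{31} \approx 2.4\times10^{-3}\,\mathrm{eV}^2\) and L = 1,300 km, the first maximum sits near E of a few GeV, well matched to an accelerator neutrino beam.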

    Twelve-month observational study of children with cancer in 41 countries during the COVID-19 pandemic

    Introduction Childhood cancer is a leading cause of death. It is unclear whether the COVID-19 pandemic has impacted childhood cancer mortality. In this study, we aimed to establish all-cause mortality rates for childhood cancers during the COVID-19 pandemic and determine the factors associated with mortality. Methods Prospective cohort study in 109 institutions in 41 countries. Inclusion criteria: children <18 years who were newly diagnosed with or undergoing active treatment for acute lymphoblastic leukaemia, non-Hodgkin's lymphoma, Hodgkin lymphoma, retinoblastoma, Wilms tumour, glioma, osteosarcoma, Ewing sarcoma, rhabdomyosarcoma, medulloblastoma and neuroblastoma. Of 2327 cases, 2118 patients were included in the study. The primary outcome measure was all-cause mortality at 30 days, 90 days and 12 months. Results All-cause mortality was 3.4% (n=71/2084) at 30-day follow-up, 5.7% (n=113/1969) at 90-day follow-up and 13.0% (n=206/1581) at 12-month follow-up. The median time from diagnosis to multidisciplinary team (MDT) plan was longest in low-income countries (7 days, IQR 3-11). Multivariable analysis revealed several factors associated with 12-month mortality, including low-income (OR 6.99 (95% CI 2.49 to 19.68); p<0.001), lower middle income (OR 3.32 (95% CI 1.96 to 5.61); p<0.001) and upper middle income (OR 3.49 (95% CI 2.02 to 6.03); p<0.001) country status and chemotherapy (OR 0.55 (95% CI 0.36 to 0.86); p=0.008) and immunotherapy (OR 0.27 (95% CI 0.08 to 0.91); p=0.035) within 30 days from MDT plan. Multivariable analysis revealed laboratory-confirmed SARS-CoV-2 infection (OR 5.33 (95% CI 1.19 to 23.84); p=0.029) was associated with 30-day mortality. Conclusions Children with cancer are more likely to die within 30 days if infected with SARS-CoV-2. However, timely treatment reduced odds of death. 
    This report provides crucial information for balancing the benefits of providing anticancer therapy against the risks of SARS-CoV-2 infection in children with cancer.
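    The odds ratios with 95% confidence intervals reported above come from a standard computation. As a sketch, here is how an odds ratio and its Wald-type CI are derived from a 2x2 table; the counts below are invented for illustration and are not the study's data.

```python
# Odds ratio with Wald 95% CI from a 2x2 table (standard formula;
# the example counts are made up, not taken from the study).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b: deaths/survivors in the exposed group;
    c, d: deaths/survivors in the unexposed group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(10, 90, 5, 195)
print(f"OR={or_:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

    The multivariable analysis in the study adjusts such estimates for the other covariates via logistic regression, but the interpretation of each OR and its interval is the same.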

    Robust estimation of bacterial cell count from optical density

    Optical density (OD) is widely used to estimate the density of cells in liquid culture, but it cannot be compared between instruments without a standardized calibration protocol and is challenging to relate to actual cell count. We address this with an interlaboratory study comparing three simple, low-cost, and highly accessible OD calibration protocols across 244 laboratories, applied to eight strains of constitutive GFP-expressing E. coli. Based on our results, we recommend calibrating OD to estimated cell count using serial dilution of silica microspheres. This protocol produces highly precise calibration (95.5% of residuals <1.2-fold), is easily assessed for quality control, also reveals an instrument's effective linear range, and can be combined with fluorescence calibration to obtain units of Molecules of Equivalent Fluorescein (MEFL) per cell, allowing direct comparison and data fusion with flow cytometry measurements: in our study, fluorescence-per-cell measurements showed only a 1.07-fold mean difference between plate reader and flow cytometry data.
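    The microsphere calibration reduces to fitting one conversion factor per instrument. A minimal sketch, with invented numbers rather than the study's data: a serial dilution of microspheres with known particle counts gives paired (OD, count) points, and a through-origin least-squares slope then converts any later OD reading into an estimated particle (cell) count.

```python
# Through-origin least-squares calibration of OD to particle count.
# The OD values and microsphere counts below are illustrative only.

def calibration_slope(ods, counts):
    """Least-squares slope for counts ~ slope * OD (line through origin)."""
    return sum(o * c for o, c in zip(ods, counts)) / sum(o * o for o in ods)

ods    = [0.05, 0.10, 0.20, 0.40]        # measured optical densities
counts = [4.0e7, 8.1e7, 1.6e8, 3.2e8]    # known microsphere counts per mL

slope = calibration_slope(ods, counts)
estimated_cells = slope * 0.25           # convert a new OD reading of 0.25
print(f"{estimated_cells:.2e} particles/mL")
```

    Checking how well the dilution points fall on this line is also what makes the protocol self-diagnosing: departures from linearity at high OD directly expose the instrument's effective linear range.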