
    Personalized Cinemagraphs using Semantic Understanding and Collaborative Learning

    Cinemagraphs are a compelling way to convey dynamic aspects of a scene. In these media, dynamic and still elements are juxtaposed to create an artistic and narrative experience. Creating a high-quality, aesthetically pleasing cinemagraph requires isolating objects in a semantically meaningful way and then selecting good start times and looping periods for those objects to minimize visual artifacts (such as tearing). To achieve this, we present a new technique that uses object recognition and semantic segmentation as part of an optimization method to automatically create cinemagraphs from videos that are both visually appealing and semantically meaningful. Given a scene with multiple objects, there are many cinemagraphs one could create. Our method evaluates these multiple candidates and presents the best one, as determined by a model trained to predict human preferences in a collaborative way. We demonstrate the effectiveness of our approach with multiple results and a user study. Comment: To appear in ICCV 2017. Total 17 pages including the supplementary material.
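    As a rough illustration of the loop-selection step described above (not the authors' optimization, which also uses semantic segmentation and a learned preference model), the Python sketch below scores candidate start frames and loop periods for a single segmented region by the mismatch between the first and last frames of the loop; the array name region_frames and the brute-force search are assumptions for illustration only.

    import numpy as np

    def best_loop(region_frames, min_period=10):
        """region_frames: (T, H, W, 3) array of the masked video region (assumed)."""
        T = region_frames.shape[0]
        best = None  # (seam_error, start_frame, period)
        for start in range(T - min_period):
            for period in range(min_period, T - start):
                # "Seam" error: how badly the loop end matches the loop start,
                # a crude stand-in for the tearing artifacts mentioned above.
                seam = np.mean((region_frames[start].astype(float)
                                - region_frames[start + period].astype(float)) ** 2)
                if best is None or seam < best[0]:
                    best = (seam, start, period)
        return best  # the paper instead ranks candidates with a model of human preferences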

    Automated brain tumour detection and segmentation using superpixel-based extremely randomized trees in FLAIR MRI

    PURPOSE: We propose a fully automated method for detection and segmentation of the abnormal tissue associated with brain tumour (tumour core and oedema) from Fluid-Attenuated Inversion Recovery (FLAIR) Magnetic Resonance Imaging (MRI). METHODS: The method is based on a superpixel technique and classification of each superpixel. A number of novel image features, including intensity-based features, Gabor textons, fractal analysis and curvatures, are calculated from each superpixel within the entire brain area in FLAIR MRI to ensure a robust classification. An extremely randomized trees (ERT) classifier is compared with a support vector machine (SVM) to classify each superpixel into tumour and non-tumour. RESULTS: The proposed method is evaluated on two datasets: (1) our own clinical dataset of 19 FLAIR MRI images of patients with gliomas of grade II to IV, and (2) the BRATS 2012 dataset of 30 FLAIR images with 10 low-grade and 20 high-grade gliomas. The experimental results demonstrate the high detection and segmentation performance of the proposed method using the ERT classifier. For our own cohort, the average detection sensitivity, balanced error rate and Dice overlap measure for the segmented tumour against the ground truth are 89.48 %, 6 % and 0.91, respectively, while for the BRATS dataset the corresponding results are 88.09 %, 6 % and 0.88. CONCLUSIONS: This provides a close match to expert delineation across all grades of glioma, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management.
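    The pipeline described in the METHODS section can be sketched with off-the-shelf tools as follows; this is a simplified stand-in that uses only intensity statistics per superpixel (the paper also uses Gabor textons, fractal and curvature features) and scikit-image/scikit-learn in place of the authors' implementation.

    import numpy as np
    from skimage.segmentation import slic
    from sklearn.ensemble import ExtraTreesClassifier

    def superpixel_features(flair_slice, n_segments=400):
        """Partition a 2-D FLAIR slice into superpixels and compute simple
        per-superpixel intensity features (assumed feature set)."""
        labels = slic(flair_slice, n_segments=n_segments, compactness=0.1,
                      channel_axis=None)  # grayscale input
        feats = []
        for sp in np.unique(labels):
            vals = flair_slice[labels == sp]
            feats.append([vals.mean(), vals.std(), vals.min(), vals.max()])
        return labels, np.asarray(feats)

    # X, y would be stacked per-superpixel features and tumour/non-tumour labels
    # derived from ground-truth masks.
    clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
    # clf.fit(X, y); per-superpixel predictions are then mapped back to voxels
    # to form the detection/segmentation mask.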

    An Automated System for Chromosome Analysis

    The design, construction, and testing of a complete system to produce karyotypes and chromosome measurement data from human blood samples, and to provide a basis for statistical analysis of these quantitative measurements, are described.

    The interaction of labor markets and inflation: analysis of micro data from the International Wage Flexibility Project

    Inflation can “grease” the wheels of economic adjustment in the labor market by relieving the constraint imposed by downward nominal wage rigidity, but not if there is also substantial downward real wage rigidity. At the same time, inflation can throw “sand” in the wheels of economic adjustment by degrading the value of price signals. A number of recent studies suggest that wage rigidity is much more important for business cycles and monetary policy than previously believed (see Erceg, Henderson and Levin, 2000, Smets and Wouters, 2003, and Hall, 2005). Thus, our results on how wage rigidity and other labor market imperfections vary between countries and how they are affected by the rate of inflation should be of considerable value in formulating monetary policy and conducting related research.

    A Novel Path Sampling Method for the Calculation of Rate Constants

    We derive a novel, efficient scheme to measure the rate constant of transitions between stable states separated by high free energy barriers in a complex environment within the framework of transition path sampling. The method is based on directly and simultaneously measuring the fluxes through many phase space interfaces, and it increases the efficiency by at least a factor of two with respect to existing transition path sampling rate constant algorithms. The new algorithm is illustrated on the isomerization of a diatomic molecule immersed in a simple fluid. Comment: 14 pages, including 13 figures, RevTeX.
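    For orientation only, the toy Python snippet below shows one common way a rate constant is assembled from interface fluxes in interface-based path sampling methods: the positive flux through the first interface multiplied by the conditional probabilities of reaching each subsequent interface. This is a generic textbook construction, not the specific algorithm proposed in the paper, and the numbers are made up.

    import numpy as np

    def rate_constant(flux_0, crossing_probs):
        """flux_0: positive flux through the first interface (per unit time);
        crossing_probs: P(reach interface i+1 | reached interface i)."""
        return flux_0 * np.prod(crossing_probs)

    # Assumed example values: flux of 0.02 ps^-1 and three interfaces.
    print(rate_constant(0.02, [0.3, 0.2, 0.1]))  # 1.2e-4 ps^-1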

    Data-Driven Intelligent Scheduling For Long Running Workloads In Large-Scale Datacenters

    Cloud computing is becoming a fundamental facility of modern society. Large-scale public and private cloud datacenters comprising millions of servers, operated as warehouse-scale computers, support most of the business of Fortune 500 companies and serve billions of users around the world. Unfortunately, the industry-wide average datacenter utilization is as low as 6% to 12%. Low utilization not only hurts the operational and capital components of cost efficiency, but also becomes a scaling bottleneck due to the limits on electricity delivered by the nearby utility. It is therefore both critical and challenging to improve multi-resource efficiency across global datacenters. Additionally, with the commercial success of diverse big data analytics services, enterprise datacenters are evolving to host heterogeneous computation workloads, including online web services, batch processing, machine learning, streaming computing, interactive query, and graph computation, on shared clusters. Most of these are long-running workloads that use long-lived containers to execute tasks. We survey datacenter resource scheduling work from the last 15 years. Most prior work is designed to maximize cluster efficiency for short-lived tasks in batch processing systems such as Hadoop, and is not suitable for modern long-running workloads in systems such as microservices, Spark, Flink, Pregel, Storm, or TensorFlow. New scheduling and resource allocation approaches are therefore needed to improve efficiency in large-scale enterprise datacenters. In this dissertation, we are the first to define and identify the problems, challenges, and scenarios of scheduling and resource management for diverse long-running workloads in modern datacenters. These workloads rely on predictive scheduling techniques to perform reservation, auto-scaling, migration, or rescheduling, which pushes us toward more intelligent scheduling techniques built on adequate predictive knowledge. We specify what intelligent scheduling is, which capabilities are necessary for it, and how it can transform NP-hard online scheduling problems into tractable offline scheduling problems. We designed and implemented an intelligent cloud datacenter scheduler that automatically performs resource-to-performance modeling, predictive estimation of optimal reservations, and QoS (interference)-aware predictive scheduling to maximize resource efficiency across multiple dimensions (CPU, memory, network, disk I/O) while strictly guaranteeing service level agreements (SLAs) for long-running workloads. Finally, we introduce a large-scale co-location technique for executing long-running and other workloads on the shared global datacenter infrastructure of Alibaba Group, which improves cluster utilization from 10% to an average of 50%. This goes far beyond scheduling, involving the evolution of IDC facilities, networking, physical datacenter topology, storage, server hardware, operating systems, and containerization. We demonstrate its effectiveness by analyzing the latest Alibaba public cluster trace from 2017, and we are the first to reveal a global view of the scenarios, challenges, and status of Alibaba's large-scale global datacenters, including big promotion events such as Double 11.
    Data-driven intelligent scheduling methodologies and effective infrastructure co-location techniques are critical to achieving maximal multi-resource efficiency in modern large-scale datacenters, especially for long-running workloads.
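    A hedged sketch of the "predictive reservation" idea mentioned above: fit a model from observed workload features to peak resource usage, then reserve at a high quantile plus headroom to protect SLAs. The feature set, the quantile regressor, and the headroom factor are illustrative assumptions, not the dissertation's implementation.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # X: per-container features (e.g. request rate, working-set size, hour of day);
    # y: observed peak CPU cores over the next scheduling window.
    model = GradientBoostingRegressor(loss="quantile", alpha=0.95)
    # model.fit(X_history, y_history)

    def cpu_reservation(features, headroom=1.1):
        """Predict the 95th-percentile peak usage and add headroom before reserving."""
        predicted_peak = model.predict(np.asarray(features).reshape(1, -1))[0]
        return predicted_peak * headroom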

    Solar Magnetic Feature Detection and Tracking for Space Weather Monitoring

    We present an automated system for detecting, tracking, and cataloging emerging active regions throughout their evolution and decay using SOHO Michelson Doppler Imager (MDI) magnetograms. The SolarMonitor Active Region Tracking (SMART) algorithm relies on consecutive image differencing to remove both quiet-Sun and transient magnetic features, and on region-growing techniques to group flux concentrations into classifiable features. We determine magnetic properties such as region size, total flux, flux imbalance, flux emergence rate, Schrijver's R-value, R* (a modified version of R), and Falconer's measurement of non-potentiality. A persistence algorithm is used to associate developed active regions with emerging flux regions in previous measurements, and to track regions beyond the limb through multiple solar rotations. We find that the total number and area of magnetic regions on disk vary with the sunspot cycle. While sunspot numbers are a proxy for the solar magnetic field, SMART offers a direct diagnostic of the surface magnetic field and its variation over timescales of hours to years. SMART will form the basis of the active region extraction and tracking algorithm for the Heliophysics Integrated Observatory (HELIO).
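    The differencing-and-grouping step described above can be approximated in a few lines of Python; this sketch uses SciPy connected-component labeling as a crude proxy for SMART's region growing, and the thresholds and the persistence test across two magnetograms are assumptions, not SMART's actual parameters.

    import numpy as np
    from scipy import ndimage

    def detect_regions(mag_t0, mag_t1, flux_thresh=100.0):
        """mag_t0, mag_t1: co-aligned line-of-sight magnetograms (Gauss), 2-D arrays."""
        # Keep strong-field pixels present in both frames: the flux threshold
        # suppresses quiet Sun, persistence across frames suppresses transients.
        candidate = (np.abs(mag_t0) > flux_thresh) & (np.abs(mag_t1) > flux_thresh)
        labels, n = ndimage.label(candidate)  # group flux concentrations
        regions = []
        for region_id in range(1, n + 1):
            mask = labels == region_id
            regions.append({
                "area_px": int(mask.sum()),
                "total_flux": float(np.abs(mag_t1[mask]).sum()),
                "flux_imbalance": float(mag_t1[mask].sum()),
            })
        return labels, regions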

    Guidelines for Implementing MODEM: An Open-Source, MATLAB-Based Digital Image Correlation Software

    MODEM is an open-source, MATLAB-based digital image correlation (DIC) program that was developed at the University of Auckland for small-scale testing of flexible materials. Structural engineering researchers at the University of Auckland and Cal Poly – San Luis Obispo wanted to expand the uses of the program to study the seismic response of large-scale test specimens. This guide describes how to implement DIC using MODEM, including the hardware and software needed to run an experiment as well as data collection and post-processing procedures for the program. Additionally, this document includes a case study of a DIC test program consisting of several aluminum coupons subjected to pure tension. A summary of MODEM’s output from one of these tests informs future users of the benefits and pitfalls that can occur while running DIC experiments and prepares them to use this program in their own experiments. Furthermore, this work demonstrates that researchers can accurately quantify the full-field deformation of structures at a localized scale and use these data to corroborate traditional instrumentation such as strain gauges and linear potentiometers, as well as to calibrate computational finite element models.
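    MODEM itself is MATLAB-based; purely to illustrate the core DIC operation (matching a subset between reference and deformed images), the Python sketch below tracks one subset with normalized cross-correlation via scikit-image. The function name, subset size, and whole-pixel output are illustrative assumptions; real DIC software such as MODEM adds subpixel refinement and full-field processing.

    import numpy as np
    from skimage.feature import match_template

    def track_subset(ref_image, def_image, center, half_size=15):
        """Return the (row, col) pixel displacement of the subset centred at `center`."""
        r, c = center
        subset = ref_image[r - half_size:r + half_size + 1,
                           c - half_size:c + half_size + 1]
        corr = match_template(def_image, subset, pad_input=True)
        dr, dc = np.unravel_index(np.argmax(corr), corr.shape)
        return dr - r, dc - c  # whole-pixel displacement of the subset centre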