
    Preparation and characterization of polycaprolactone microspheres by electrospraying

    This is the author accepted manuscript. Published online: 13 Sep 2016. The final version is to be made available from the publisher via the DOI in this record.

    The ability to reproducibly produce and effectively collect electrosprayed polymeric microspheres with controlled morphology and size in bulk form is challenging. In this study, microparticles were produced by electrospraying polycaprolactone (PCL) of various molecular weights and solution concentrations in chloroform, and by collecting the material on different substrates. The resultant PCL microparticles were characterized by optical and electron microscopy to investigate the effect of molecular weight, solution concentration, applied voltage, working distance and flow rate on their morphology and size. The work demonstrates the key role of a moderate molecular weight and/or solution concentration in the formation of spherical PCL particles via an electrospraying process. Increasing the applied voltage was found to produce smaller and more uniform PCL microparticles, while the average particle size increased only modestly with increasing working distance and flow rate. Four types of substrate were used to collect the electrosprayed PCL particles: a glass slide, aluminium foil, a liquid bath and a copper wire. Unlike the 2D bulk structures collected on the other substrates, a 3D tubular structure of microspheres formed on the copper wire and could find application in the construction of 3D tumour mimics.

    The financial support received from the Cancer Research UK (CRUK) and Engineering and Physical Sciences Research Council (EPSRC) Cancer Imaging Centre in Cambridge and Manchester (C8742/A18097) is acknowledged.

    Fast and Accurate Algorithm for Eye Localization for Gaze Tracking in Low Resolution Images

    Iris centre localization in low-resolution visible images is a challenging problem in the computer vision community due to noise, shadows, occlusions, pose variations, eye blinks, etc. This paper proposes an efficient method for determining the iris centre in low-resolution images in the visible spectrum, so that even low-cost consumer-grade webcams can be used for gaze tracking without any additional hardware. A two-stage algorithm is proposed for iris centre localization, exploiting the geometrical characteristics of the eye. In the first stage, a fast convolution-based approach obtains a coarse location of the iris centre (IC). The IC location is then refined in the second stage using boundary tracing and ellipse fitting. The algorithm has been evaluated on public databases such as BioID and Gi4E and is found to outperform state-of-the-art methods.

    Comment: 12 pages, 10 figures, IET Computer Vision, 201
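The coarse first stage of such a two-stage pipeline can be sketched as matching a dark circular template against the eye region. This is a minimal illustration on a synthetic image, not the paper's actual filter: the disc radius, the naive exhaustive convolution and the test image are all assumptions made for clarity.

```python
import numpy as np

def coarse_iris_centre(gray, radius=5):
    """Stage 1 sketch: correlate the inverted eye image with a circular
    disc template; the iris is a dark, roughly circular blob, so the
    maximum response gives a coarse centre estimate."""
    size = 2 * radius + 1
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disc = (xx**2 + yy**2 <= radius**2).astype(float)
    disc /= disc.sum()
    inv = gray.max() - gray.astype(float)  # dark iris -> high values
    h, w = inv.shape
    best, centre = -np.inf, (0, 0)
    # naive valid-mode sliding window (an FFT or filter2D would be faster)
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            resp = (inv[y:y + size, x:x + size] * disc).sum()
            if resp > best:
                best, centre = resp, (y + radius, x + radius)
    return centre

# synthetic eye patch: bright background with a dark disc at (12, 18)
img = np.full((25, 35), 200.0)
yy, xx = np.mgrid[:25, :35]
img[(yy - 12)**2 + (xx - 18)**2 <= 25] = 30.0
print(coarse_iris_centre(img, radius=5))  # -> (12, 18)
```

The refinement stage (boundary tracing and ellipse fitting around this coarse estimate) would then operate only in a small neighbourhood of the returned coordinates.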

    Face analysis using curve edge maps

    This paper proposes an automatic and real-time system for face analysis, usable in visual communication applications. In this approach, faces are represented with Curve Edge Maps, which are collections of polynomial segments with a convex region. The segments are extracted from edge pixels using an adaptive incremental linear-time fitting algorithm based on constructive polynomial fitting. The face analysis system covers face tracking, face recognition and facial feature detection, using Curve Edge Maps driven by histograms of intensities and histograms of relative positions. When applied to different face databases and video sequences, the average face recognition rate is 95.51%, the average facial feature detection rate is 91.92% and the accuracy in the location of the facial features is 2.18% in terms of the size of the face, which is comparable with or better than results in the literature. Moreover, the method has the advantages of simplicity, real-time performance and extensibility to other aspects of face analysis, such as recognition of facial expressions and talking
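The idea of incrementally fitting polynomial segments to edge pixels can be sketched greedily: grow the current segment one point at a time and close it when the fit error exceeds a tolerance. This is only an illustration of the segmentation idea; the paper's constructive fitting updates the fit incrementally in linear time, whereas this sketch refits with `polyfit` for clarity, and the degree, tolerance and corner-shaped test data are assumptions.

```python
import numpy as np

def fit_segments(points, degree=1, tol=0.5):
    """Greedy sketch of incremental segment fitting: extend the current
    segment point by point; when the max polynomial fit residual exceeds
    `tol`, close the segment and start a new one at the offending point."""
    segments, start, n = [], 0, len(points)
    for end in range(degree + 2, n + 1):
        x, y = points[start:end, 0], points[start:end, 1]
        coeffs = np.polyfit(x, y, degree)
        if np.abs(np.polyval(coeffs, x) - y).max() > tol:
            segments.append((start, end - 1))  # close before this point
            start = end - 1
    segments.append((start, n))
    return segments

# edge pixels along a corner: y = x, then y = 10 - x
pts = np.array([[x, min(x, 10 - x)] for x in range(11)], dtype=float)
print(fit_segments(pts))  # -> [(0, 6), (6, 11)]
```

The two index ranges correspond to the two straight edges of the corner; a Curve Edge Map would store such segments (with their fitted coefficients) rather than raw edge pixels.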

    In-the-wild Facial Expression Recognition in Extreme Poses

    Facial expression recognition is an active research problem in computer vision. In recent years, research has moved from the lab environment to in-the-wild circumstances, which are challenging, especially under extreme poses. Current expression detection systems typically try to factor out pose effects in pursuit of general applicability. In this work, we take the opposite approach: we consider head poses explicitly and detect expressions within specific head-pose classes. Our work includes two parts: detecting the head pose and grouping it into one of the pre-defined head-pose classes, and then performing facial expression recognition within each pose class. Our experiments show that recognition with pose-class grouping is much better than direct recognition that does not consider pose. We combine hand-crafted features (SIFT, LBP and geometric features) with deep learning features as the representation of the expressions; the hand-crafted features are fed into the deep learning framework alongside the high-level deep learning features. For comparison, we implement SVM and random forest as the prediction models. To train and test our methodology, we labelled a face dataset with the 6 basic expressions.

    Comment: Published on ICGIP201
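The pose-grouped pipeline amounts to training one expression model per pose class and dispatching each test face to the model for its pose. A minimal sketch, with a nearest-centroid classifier standing in for the paper's SVM/random forest predictors and toy 2-D features invented for illustration:

```python
from math import dist
from collections import defaultdict

def train_per_pose(samples):
    """samples: list of (pose_class, expression_label, feature_vector).
    Builds one nearest-centroid expression model per pose class."""
    groups = defaultdict(lambda: defaultdict(list))
    for pose, label, feat in samples:
        groups[pose][label].append(feat)
    models = {}
    for pose, by_label in groups.items():
        models[pose] = {
            label: [sum(c) / len(c) for c in zip(*feats)]
            for label, feats in by_label.items()
        }
    return models

def predict(models, pose, feat):
    """Stage 1 (pose classification) is assumed already done; stage 2
    picks the nearest expression centroid within that pose class."""
    centroids = models[pose]
    return min(centroids, key=lambda label: dist(feat, centroids[label]))

data = [
    ("frontal", "smile",   [1.0, 0.0]),
    ("frontal", "smile",   [0.9, 0.1]),
    ("frontal", "neutral", [0.0, 1.0]),
    ("frontal", "neutral", [0.1, 0.9]),
]
models = train_per_pose(data)
print(predict(models, "frontal", [0.8, 0.2]))  # -> smile
```

The benefit reported in the abstract comes from each per-pose model only having to separate expressions within one, more homogeneous, appearance regime.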

    Stability and reproducibility of co-electrospun brain-mimicking phantoms for quality assurance of diffusion MRI sequences

    Grey- and white-matter-mimicking phantoms are important for assessing variations in diffusion MR measures at a single time point and over an extended period of time. This work investigates the stability of brain-mimicking microfibre phantoms and the reproducibility of their MR-derived diffusion parameters. The microfibres were produced by co-electrospinning and characterized by scanning electron microscopy (SEM). Grey matter and white matter phantoms were constructed from random and aligned microfibres, respectively. MR data were acquired from these phantoms over a period of 33 months. SEM images revealed that only small changes in fibre microstructure occurred over 30 months. The coefficient of variation in MR measurements across all time points was between 1.6% and 3.4% for MD across all phantoms and for FA in white matter phantoms. This was within the limits expected for intra-scanner variability, thereby confirming phantom stability over 33 months. These specialised diffusion phantoms may be used in a clinical environment for intra- and inter-site quality assurance, and for validation of quantitative diffusion biomarkers.
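The stability metric quoted (1.6%–3.4%) is the percent coefficient of variation of the repeated diffusion measurements. A minimal sketch of the computation, on hypothetical MD readings rather than the paper's data:

```python
from statistics import mean, stdev

def coefficient_of_variation(values):
    """Percent CV = 100 * sample standard deviation / mean."""
    return 100.0 * stdev(values) / mean(values)

# hypothetical MD readings (x10^-3 mm^2/s) from repeat scans of one phantom
md = [1.02, 1.00, 1.03, 0.99, 1.01]
print(round(coefficient_of_variation(md), 2))  # -> 1.57
```

A phantom would pass the QA criterion described in the abstract if this CV stays within the range expected from intra-scanner variability alone.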

    Supervised coordinate descent method with a 3D bilinear model for face alignment and tracking

    Face alignment and tracking play important roles in facial performance capture. Existing data-driven methods for monocular video suffer from large variations in pose and expression. In this paper, we propose an efficient and robust method for this task by introducing a novel supervised coordinate descent method with a 3D bilinear representation. Instead of learning the mapping between the whole parameter set and image features directly with a cascaded regression framework, as current methods do, we learn individual mappings for separate sets of parameters step by step in a coordinate descent manner. Because different parameters make different contributions to the displacement of facial landmarks, our method is more discriminative than current whole-parameter cascaded regression methods. Benefiting from a 3D bilinear model learned from public databases, the proposed method handles out-of-plane head pose changes and extreme expressions better than other 2D-based methods. We present reliable face tracking results under various head poses and facial expressions on challenging video sequences collected online. The experimental results show that our method outperforms state-of-the-art data-driven methods.
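The core optimisation idea, updating one set of parameters at a time while the others stay fixed, can be illustrated with block coordinate descent on a toy least-squares problem. This is not the paper's supervised, learned descent over bilinear face parameters; the matrix, blocks and sweep count below are assumptions chosen to show the mechanism:

```python
import numpy as np

def block_coordinate_descent(A, b, blocks, sweeps=50):
    """Minimise ||Ax - b||^2 by re-solving for one block of parameters
    at a time with the remaining blocks held fixed."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for blk in blocks:
            # residual with this block's current contribution removed
            r = b - A @ x + A[:, blk] @ x[blk]
            x[blk] = np.linalg.lstsq(A[:, blk], r, rcond=None)[0]
    return x

A = np.array([[2., 0., 1.], [0., 1., 0.], [1., 0., 3.], [0., 2., 1.]])
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
x = block_coordinate_descent(A, b, blocks=[[0, 1], [2]])
print(np.allclose(x, x_true))  # -> True
```

In the paper's setting each "block" would be one set of bilinear model parameters (e.g. pose vs. expression), with the per-block solve replaced by a learned regression step.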

    Fast algorithms for fitting active appearance models to unconstrained images

    Fitting algorithms for Active Appearance Models (AAMs) are usually considered to be either robust but slow, or fast but less able to generalize well to unseen variations. In this paper, we look into AAM fitting algorithms and make the following orthogonal contributions. We present a simple “project-out” optimization framework that unifies and revises the most well-known optimization problems and solutions in AAMs. Based on this framework, we describe robust simultaneous AAM fitting algorithms whose complexity is not prohibitive for current systems. We then go one step further and propose a new approximate project-out AAM fitting algorithm, which we coin extended project-out inverse compositional (E-POIC). In contrast to current algorithms, E-POIC is both efficient and robust. Next, we describe a part-based AAM employing a translational motion model, which results in superior fitting and convergence properties. We also show that the proposed AAMs, when trained “in-the-wild” using SIFT descriptors, perform surprisingly well even on unseen unconstrained images. Via a number of experiments on unconstrained human and animal face databases, we show that our combined contributions largely bridge the gap between exact and current approximate methods for AAM fitting, and perform comparably with state-of-the-art face alignment algorithms.
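The "project-out" step at the heart of this family of algorithms removes the component of the image residual that lies in the appearance subspace, so the shape parameters can be optimised independently of appearance. A minimal linear-algebra sketch with a toy orthonormal basis (not a trained AAM):

```python
import numpy as np

def project_out(residual, A):
    """Remove the component of `residual` lying in the subspace spanned
    by the orthonormal columns of A: r - A (A^T r)."""
    return residual - A @ (A.T @ residual)

# toy orthonormal appearance basis spanning the first two axes
A = np.array([[1., 0.], [0., 1.], [0., 0.]])
r = np.array([1.0, 2.0, 3.0])
print(project_out(r, A))  # -> [0. 0. 3.]
```

Because the projected-out residual is orthogonal to the appearance basis, the appearance parameters drop out of the cost function, which is what makes project-out variants like POIC and the E-POIC proposed here fast.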