
    Development of a headrig process information system : a thesis presented in partial fulfilment of the requirements for the degree of Masters of Technology in Computer Systems Engineering at Massey University

    A computer-based process information system was developed to gather operational information about the headrig bandsaw at the Timber Technology Centre (TiTC) sawmill at the Waiariki Institute of Technology, store the data in a database, and display the information in various forms to the user. The project was the first part of an encompassing programme to instrument an entire commercial sawmill. This research programme aims to determine which variables are crucial to quantifying the sawing processes and to investigate the best techniques for measuring those variables. The system is highly modular: a client-server architecture communicating over the network allows both analysis modules and sensor hardware to be added or removed without restarting the system. A central server gathers and stores the data, while individual clients analyse the data and display the information to the user. An experiment to determine the effect of wood density on the measured variables was used to test the viability of the completed system. The system successfully gathered all of the information required for the experiment and performed 70% of the data collation and analysis automatically; the remainder was performed using spreadsheets, which was deemed the most suitable method. The loosely coupled design allows the system to be scaled up easily to a mill-wide installation. Experiments to gather information about pivotal process variables are currently being planned and should be underway while the expansion into other machine stations is being designed.
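
    The client-server pattern described above (a central server that stores sensor data, with loosely coupled clients attached for analysis and display) can be sketched minimally as follows. This is an illustration only, assuming a TCP transport, newline-delimited JSON messages and a SQLite store; the port, database file and field names are hypothetical, not taken from the thesis.

```python
# Minimal sketch of the described pattern: a central server receives sensor
# readings over TCP as newline-delimited JSON and stores them in a database.
# Transport, schema, and field names are assumptions for illustration.
import json
import socketserver
import sqlite3

DB_PATH = "headrig.db"  # hypothetical database file


def init_db() -> None:
    with sqlite3.connect(DB_PATH) as db:
        db.execute(
            "CREATE TABLE IF NOT EXISTS readings ("
            "sensor TEXT, timestamp REAL, value REAL)"
        )


class ReadingHandler(socketserver.StreamRequestHandler):
    """Each connected sensor client streams one JSON object per line."""

    def handle(self) -> None:
        with sqlite3.connect(DB_PATH) as db:
            for line in self.rfile:
                if not line.strip():
                    continue
                msg = json.loads(line)
                db.execute(
                    "INSERT INTO readings VALUES (?, ?, ?)",
                    (msg["sensor"], msg["timestamp"], msg["value"]),
                )
                db.commit()


if __name__ == "__main__":
    init_db()
    # Analysis/display clients would read from the same database (or a similar
    # socket interface), so they can be attached or removed while the server
    # keeps running.
    with socketserver.ThreadingTCPServer(("0.0.0.0", 5050), ReadingHandler) as srv:
        srv.serve_forever()
```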

    CAGD-based computer vision

    Three-dimensional model-based computer vision uses geometric models of objects and sensed data to recognize objects in a scene. Likewise, Computer Aided Geometric Design (CAGD) systems are used to interactively generate three-dimensional models during the design process. Despite this similarity, there has been a dichotomy between the two fields. Recently, the unification of CAGD and vision systems has become a focus of research in the context of manufacturing automation. This paper explores the connection between CAGD and computer vision. A method for the automatic generation of recognition strategies based on the geometric properties of shape has been devised and implemented. It uses a novel technique for quantifying the following properties of the features that compose models used in computer vision: robustness, completeness, consistency, cost, and uniqueness. Using this information, a specialized recognition scheme, called a Strategy Tree, is synthesized automatically. Strategy Trees describe, in a systematic and robust manner, the search process used for recognition and localization of particular objects in a given scene. They consist of selected features that satisfy system constraints and Corroborating Evidence Subtrees used in the formation of hypotheses. Verification techniques, used to substantiate or refute these hypotheses, are explored. Experiments utilizing 3-D data are presented.
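
    The abstract names five feature properties (robustness, completeness, consistency, cost, uniqueness) that drive the automatic selection of features for a Strategy Tree, but not the selection rule itself. The sketch below illustrates one plausible reading, assuming a simple weighted score over those properties and a tree whose root feature carries corroborating-evidence children; the scoring weights and data layout are hypothetical and are not the paper's algorithm.

```python
# Illustrative sketch only: ranks candidate model features by the five
# properties named in the abstract and arranges the best ones into a simple
# tree with corroborating-evidence children. The scoring rule and data layout
# are assumptions, not the paper's Strategy Tree synthesis algorithm.
from dataclasses import dataclass, field


@dataclass
class Feature:
    name: str
    robustness: float    # tolerance to noise/occlusion, in [0, 1]
    completeness: float  # how fully it constrains the object pose, in [0, 1]
    consistency: float   # agreement across views/instances, in [0, 1]
    cost: float          # relative detection cost, in [0, 1] (lower is better)
    uniqueness: float    # how discriminative it is among models, in [0, 1]

    def score(self) -> float:
        # Hypothetical linear utility; cost counts against the feature.
        return (self.robustness + self.completeness + self.consistency
                + self.uniqueness) - self.cost


@dataclass
class StrategyNode:
    feature: Feature
    corroborating: list["StrategyNode"] = field(default_factory=list)


def build_strategy_tree(features: list[Feature], branching: int = 2) -> StrategyNode:
    """Root the tree at the highest-scoring feature; attach the next-best
    features as corroborating evidence used to confirm the hypothesis."""
    ranked = sorted(features, key=Feature.score, reverse=True)
    root = StrategyNode(ranked[0])
    root.corroborating = [StrategyNode(f) for f in ranked[1:1 + branching]]
    return root


if __name__ == "__main__":
    candidates = [
        Feature("planar face", 0.9, 0.6, 0.8, 0.2, 0.5),
        Feature("cylindrical hole", 0.7, 0.8, 0.7, 0.4, 0.9),
        Feature("edge loop", 0.6, 0.5, 0.6, 0.1, 0.4),
    ]
    tree = build_strategy_tree(candidates)
    print(tree.feature.name, [c.feature.name for c in tree.corroborating])
```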

    CFD modelling of wind turbine airfoil aerodynamics

    This paper reports the first findings of an ongoing research programme on wind turbine computational aerodynamics at the University of Glasgow. Several modelling aspects of wind turbine airfoil aerodynamics based on the solution of the Reynolds-averaged Navier-Stokes (RANS) equations are addressed. One of these is the effect of an a priori method for structured grid adaptation aimed at improving the wake resolution. The presented results show that the proposed adaptation strategy greatly improves the wake resolution in the far field, whereas the wake is completely diffused by a non-adapted grid with the same number and distribution of grid nodes. A grid refinement analysis carried out with the adapted grid shows that the improvement in flow resolution obtained by refining the grid is smaller than that achieved by adapting the grid while keeping the number of nodes constant. The proposed adaptation approach can easily be included in the generation process of both commercial and in-house structured mesh generators. The study also aims at quantifying the solution inaccuracy arising from not modelling the laminar-to-turbulent transition; it is found that the drag forces obtained by treating the flow as transitional or fully turbulent may differ by 50%. The impact of various turbulence models on the predicted aerodynamic forces is also analyzed. All these issues are investigated using a special-purpose hyperbolic grid generator and a multi-block structured finite-volume RANS code. The numerical experiments consider the flow field past a wind turbine airfoil for which an exhaustive campaign of steady and unsteady experimental measurements was conducted. The predictive capabilities of the CFD solver are validated by comparing experimental data and numerical predictions for selected flow regimes. The incompressible analysis and design code XFOIL is also used to support the findings of the comparative analysis of RANS-based numerical results and experimental data.
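
    The adaptation result above rests on redistributing a fixed budget of grid nodes so that the wake region is better resolved. As a rough, self-contained illustration of that idea (not the paper's adaptation method), the sketch below redistributes nodes along one grid line so that they cluster around an assumed wake location, using a Gaussian spacing weight; the wake position, clustering strength and node count are hypothetical parameters.

```python
# Rough illustration only: redistribute a fixed number of grid nodes along a
# line so they cluster near an assumed wake location, instead of being spaced
# uniformly. This mimics a priori adaptation at constant node count; it is not
# the paper's grid-adaptation algorithm.
import numpy as np


def clustered_nodes(n: int, y_min: float, y_max: float,
                    y_wake: float, strength: float = 4.0,
                    sigma: float = 0.2) -> np.ndarray:
    """Place n nodes in [y_min, y_max] with local node density proportional to
    a Gaussian weight centred on y_wake (more weight -> finer spacing)."""
    y_fine = np.linspace(y_min, y_max, 2001)
    weight = 1.0 + strength * np.exp(-((y_fine - y_wake) / sigma) ** 2)
    cdf = np.cumsum(weight)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])   # normalise to [0, 1]
    targets = np.linspace(0.0, 1.0, n)
    return np.interp(targets, cdf, y_fine)      # invert the cumulative weight


if __name__ == "__main__":
    uniform = np.linspace(-2.0, 2.0, 41)
    adapted = clustered_nodes(41, -2.0, 2.0, y_wake=0.1)
    # Same node count; the adapted grid is much finer around the assumed wake.
    print("min spacing, uniform:", round(float(np.min(np.diff(uniform))), 4))
    print("min spacing, adapted:", round(float(np.min(np.diff(adapted))), 4))
```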

    Quantifying Facial Age by Posterior of Age Comparisons

    We introduce a novel approach for annotating a large quantity of in-the-wild facial images with high-quality posterior age distributions as labels. Each posterior provides a probability distribution of estimated ages for a face. Our approach is motivated by the observation that it is easier to judge which of two people is older than to determine a person's actual age. Given a reference database with samples of known ages and a dataset to label, we can transfer reliable annotations from the former to the latter via human-in-the-loop comparisons. We show an effective way to transform such comparisons into posteriors via fully connected and SoftMax layers, so as to permit end-to-end training in a deep network. Thanks to this efficient and effective annotation approach, we collect a new large-scale facial age dataset, dubbed `MegaAge', which consists of 41,941 images. Data can be downloaded from our project page mmlab.ie.cuhk.edu.hk/projects/MegaAge and github.com/zyx2012/Age_estimation_BMVC2017. With the dataset, we train a network that jointly performs ordinal hyperplane classification and posterior distribution learning. Our approach achieves state-of-the-art results on popular benchmarks such as MORPH2, Adience, and the newly proposed MegaAge. Comment: To appear at BMVC 2017 (oral), revised version.
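
    The key output representation here is a posterior over ages produced by fully connected and SoftMax layers rather than a single age value. The snippet below is a minimal numpy illustration of that representation, assuming discrete age bins from 0 to 100 and random placeholder weights: a fully connected layer yields one logit per bin, a softmax turns the logits into a posterior, and a point estimate is read off as the expected age. It is not the authors' network.

```python
# Minimal illustration (not the authors' network): a feature vector passes
# through one fully connected layer producing a logit per age bin; a softmax
# converts the logits into a posterior over ages, from which an expected age
# can be read off. Weights here are random placeholders.
import numpy as np

AGES = np.arange(0, 101)          # assumed discrete age bins 0..100
FEATURE_DIM = 128                 # assumed feature size from a face encoder

rng = np.random.default_rng(0)
W = rng.normal(scale=0.05, size=(FEATURE_DIM, AGES.size))  # FC layer weights
b = np.zeros(AGES.size)                                    # FC layer bias


def age_posterior(feature: np.ndarray) -> np.ndarray:
    """Fully connected layer + softmax -> probability over age bins."""
    logits = feature @ W + b
    logits -= logits.max()                    # numerical stability
    p = np.exp(logits)
    return p / p.sum()


if __name__ == "__main__":
    face_feature = rng.normal(size=FEATURE_DIM)   # stand-in for an encoder output
    posterior = age_posterior(face_feature)
    expected_age = float(np.sum(posterior * AGES))
    print(f"expected age: {expected_age:.1f}, "
          f"mode: {int(AGES[np.argmax(posterior)])}")
```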

    Follow-back Recommendations for Sports Bettors: A Twitter-based Approach

    Social network based recommender systems are powered by a complex web of social discussions and user connections. Short-text microblogs such as Twitter present powerful frameworks for information consumption because of the real-time nature of both their content throughput and their user connections. Users on such platforms therefore consume the disseminated content to a greater or lesser extent depending on their interests. Quantifying this degree of interest is difficult given the amount of information such platforms generate at any given time. Thus, the generation of personalized profiles based on the Degree of Interest (DoI) that users have in certain topics in such short texts presents a research problem. We address this challenge with a two-step process for generating personalized sports-betting user profiles from tweets as a case study. We (i) compute the Degree of Interest in Sports Betting (DoiSB) of tweeters and (ii) affirm this DoiSB by correlating it with their friendship network. This is an integral step in the design of short-text-based recommender systems that suggest users to follow, i.e. follow-back recommendations, as well as content-based recommendations relying on users' interests on such platforms. In this paper, we describe the DoiSB computation and the follow-back recommendation process by building a vector representation model for tweets. We then use this model to profile users interested in sports betting. Experiments using a real Twitter dataset geolocated to Kenya show the effectiveness of our approach in identifying tweeters' DoiSBs and their correlation with the friendship network.
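
    The core quantity here is the Degree of Interest in Sports Betting (DoiSB) computed from a vector representation of a user's tweets. The sketch below is a toy version of that step only, assuming a small bag-of-words vocabulary and cosine similarity as the scoring rule; the vocabulary, the averaging rule and the example tweets are illustrative, not the paper's model.

```python
# Toy illustration of a degree-of-interest score (not the paper's model):
# tweets are mapped to bag-of-words vectors over a small assumed
# sports-betting vocabulary, and a user's DoiSB is the mean cosine similarity
# between their tweet vectors and the vocabulary vector.
import math
import re
from collections import Counter

BETTING_TERMS = ["bet", "odds", "stake", "jackpot", "wager", "bookie"]  # assumed


def vectorize(text: str, vocab: list[str]) -> list[float]:
    counts = Counter(re.findall(r"[a-z']+", text.lower()))
    return [float(counts[term]) for term in vocab]


def cosine(u: list[float], v: list[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0


def doisb(tweets: list[str]) -> float:
    """Mean similarity of a user's tweets to the betting vocabulary."""
    topic_vec = [1.0] * len(BETTING_TERMS)
    return sum(cosine(vectorize(t, BETTING_TERMS), topic_vec)
               for t in tweets) / len(tweets)


if __name__ == "__main__":
    user_a = ["Placed a bet on tonight's match, the odds look great",
              "That jackpot stake paid off!"]
    user_b = ["Beautiful sunrise over Nairobi this morning",
              "Traffic on Mombasa road again"]
    print("user_a DoiSB:", round(doisb(user_a), 3))  # high interest
    print("user_b DoiSB:", round(doisb(user_b), 3))  # low interest
```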

    Explaining by Imitating: Understanding Decisions by Interpretable Policy Learning

    Understanding human behavior from observed data is critical for transparency and accountability in decision-making. Consider real-world settings such as healthcare, in which modeling a decision-maker's policy is challenging: there is no access to underlying states, no knowledge of environment dynamics, and no allowance for live experimentation. We seek to learn a data-driven representation of decision-making behavior that (1) inheres transparency by design, (2) accommodates partial observability, and (3) operates completely offline. To satisfy these key criteria, we propose a novel model-based Bayesian method for interpretable policy learning ("Interpole") that jointly estimates an agent's (possibly biased) belief-update process together with their (possibly suboptimal) belief-action mapping. Through experiments on both simulated and real-world data for the problem of Alzheimer's disease diagnosis, we illustrate the potential of our approach as an investigative device for auditing, quantifying, and understanding human decision-making behavior.
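
    Interpole jointly estimates a belief-update process and a belief-action mapping. As background for what those two components look like, the sketch below implements a standard discrete Bayes filter over hidden states together with a toy softmax rule mapping beliefs to action probabilities; the transition, observation and payoff matrices are illustrative placeholders, not the paper's learned model.

```python
# Background sketch (not the paper's method): a discrete Bayes filter that
# updates a belief over hidden states from an observation, plus a toy softmax
# rule mapping the belief to action probabilities. All matrices below are
# illustrative placeholders.
import numpy as np

# Hypothetical 2-state, 2-observation, 2-action problem.
T = np.array([[0.9, 0.1],      # T[s, s'] = P(next state s' | state s)
              [0.2, 0.8]])
O = np.array([[0.8, 0.2],      # O[s, z] = P(observation z | state s)
              [0.3, 0.7]])
Q = np.array([[1.0, -1.0],     # Q[s, a] = payoff the agent associates with
              [-1.0, 1.0]])    #           action a in state s (assumed)


def belief_update(belief: np.ndarray, observation: int) -> np.ndarray:
    """Predict with the transition model, then correct with the observation."""
    predicted = belief @ T
    corrected = predicted * O[:, observation]
    return corrected / corrected.sum()


def action_probabilities(belief: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Softmax belief-action mapping over expected payoffs."""
    logits = (belief @ Q) / temperature
    logits -= logits.max()
    p = np.exp(logits)
    return p / p.sum()


if __name__ == "__main__":
    b = np.array([0.5, 0.5])
    for z in [0, 0, 1]:                     # a short observation sequence
        b = belief_update(b, z)
        print("belief:", np.round(b, 3),
              "action probs:", np.round(action_probabilities(b), 3))
```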

    Advancing image quantification methods and tools for analysis of nanoparticle electrokinetics

    Image processing methods and techniques for high-throughput quantification of dielectrophoretic (DEP) collections onto planar castellated electrode arrays are developed and evaluated. Fluorescence-based dielectrophoretic spectroscopy is an important tool for laboratory investigations of the AC electrokinetic properties of nanoparticles. This paper details new, first-principle theoretical and experimental developments of geometric feature recognition techniques that enable quantification of positive dielectrophoretic (pDEP) nanoparticle collections onto castellated arrays. As an alternative to the geometric-based method, novel statistical methods that do not require any information about array features are also developed, using the quantile and standard deviation functions. Data from pDEP collection and release experiments using 200 nm diameter latex nanospheres demonstrate that pDEP quantification using the statistic-based methods yields quantitatively similar results to the geometric-based method. The development of geometric- and statistic-based quantification methods enables the high-throughput, supervisor-free image processing tools critical for dielectrophoretic spectroscopy and automated DEP technology development.
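
    The statistic-based quantification mentioned above relies only on the quantile and standard deviation functions of the fluorescence image, with no knowledge of the electrode geometry. The snippet below is a guess at what such a feature-free measure could look like, assuming a grayscale intensity array: it reports an upper-quantile-minus-median gap and the overall standard deviation, both of which rise when bright collection sites appear. It is not the paper's exact estimator.

```python
# Guessed illustration (not the paper's estimator): feature-free statistics of
# a grayscale fluorescence frame that should rise as particles collect at the
# electrode edges, namely an upper-quantile-minus-median gap and the overall
# standard deviation of pixel intensity.
import numpy as np


def collection_statistics(frame: np.ndarray, upper_q: float = 0.99) -> dict:
    """Summarise a 2-D intensity array without using electrode geometry."""
    pixels = frame.astype(float).ravel()
    return {
        "quantile_gap": float(np.quantile(pixels, upper_q) - np.median(pixels)),
        "std_dev": float(np.std(pixels)),
    }


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic frames: background noise, with bright spots added to the
    # second frame to mimic localized pDEP collection.
    before = rng.normal(100.0, 5.0, size=(128, 128))
    after = before.copy()
    after[::8, ::8] += 80.0   # hypothetical bright collection sites
    print("before:", collection_statistics(before))
    print("after: ", collection_statistics(after))
```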

    Impact of variance components on reliability of absolute quantification using digital PCR

    Background: Digital polymerase chain reaction (dPCR) is an increasingly popular technology for detecting and quantifying target nucleic acids. Its advertised strength is high-precision absolute quantification without the need for reference curves. The standard data-analytic approach follows a seemingly straightforward theoretical framework but ignores sources of variation in the data-generating process. These stem from both technical and biological factors, among which we distinguish features that are 1) hard-wired in the equipment, 2) user-dependent, and 3) provided by manufacturers but adaptable by the user. The impact of the corresponding variance components on the accuracy and precision of target concentration estimators presented in the literature is studied through simulation. Results: We reveal how system-specific technical factors influence both the accuracy and the precision of concentration estimates. We find that a well-chosen sample dilution level and modifiable settings such as the fluorescence cut-off for target copy detection have a substantial impact on reliability and can be adapted to the sample analysed in ways that matter. User-dependent technical variation, including pipette inaccuracy and specific sources of sample heterogeneity, leads to a steep increase in the uncertainty of estimated concentrations. Users can discover this through replicate experiments and derived variance estimation. Finally, detection performance can be improved by optimizing the fluorescence intensity cut point, as suboptimal thresholds reduce the accuracy of concentration estimates considerably. Conclusions: Like any other technology, dPCR is subject to variation induced by natural perturbations, systematic settings, and user-dependent protocols. The corresponding uncertainty may be controlled with an adapted experimental design. Our findings point to modifiable key sources of uncertainty that form an important starting point for the development of guidelines on dPCR design and data analysis with correct precision bounds. Besides clever choices of sample dilution levels, experiment-specific tuning of machine settings can greatly improve results. Well-chosen, data-driven fluorescence intensity thresholds in particular result in major improvements in target presence detection. We call on manufacturers to provide sufficiently detailed output data that allow users to maximize the potential of the method in their setting and obtain high precision and accuracy for their experiments.
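
    For context on the quantification being discussed, the standard dPCR estimate treats target copies as Poisson-distributed across partitions, so the concentration follows from the fraction of positive partitions. The snippet below computes that textbook estimator together with a confidence interval that reflects binomial sampling only; the paper's point is that equipment-, user- and threshold-related variance components make the real uncertainty larger than this idealised bound. The partition counts and volume are example values.

```python
# Textbook dPCR estimator for context: assuming copies are Poisson-distributed
# across partitions, the concentration follows from the fraction of positive
# partitions. The confidence interval below reflects binomial sampling only;
# equipment-, user- and threshold-related variance components make real
# uncertainty larger than this idealised bound.
import math


def dpcr_concentration(positives: int, partitions: int, partition_volume_ul: float):
    """Return (copies/uL, approximate 95% CI); assumes 0 < positives < partitions."""
    p = positives / partitions                      # fraction of positive partitions
    lam = -math.log(1.0 - p)                        # mean copies per partition
    conc = lam / partition_volume_ul                # copies per microlitre
    # Delta-method standard error from binomial sampling of p only.
    se = math.sqrt(p / ((1.0 - p) * partitions)) / partition_volume_ul
    return conc, (conc - 1.96 * se, conc + 1.96 * se)


if __name__ == "__main__":
    # Example values: 8,000 positive partitions out of 20,000, 0.85 nL each.
    conc, ci = dpcr_concentration(8000, 20000, partition_volume_ul=0.00085)
    print(f"estimated concentration: {conc:.0f} copies/uL "
          f"(95% CI {ci[0]:.0f} to {ci[1]:.0f})")
```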