
    Adaptive probability scheme for behaviour monitoring of the elderly using a specialised ambient device

    A Hidden Markov Model (HMM) modified to work in combination with a Fuzzy System is utilised to determine the current behavioural state of the user from information obtained with specialised hardware. Due to the high dimensionality and non-linearly-separable nature of the Fuzzy System and of the sensor data which informs the state decision, a new method is devised to update the HMM and replace the initial Fuzzy System, so that subsequent state decisions are based on the most recent information. The resultant system first reduces the dimensionality of the original information by using a manifold representation of the high-dimensional data, which is unfolded in the lower dimension. The data is then linearly separable in the lower dimension, where a simple linear classifier, such as the perceptron used here, is applied to determine the probability of the observations belonging to a state. Experiments using the new system verify its applicability in a real scenario.
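
    The abstract does not name the specific manifold technique or implementation, so the following Python sketch is only a hypothetical illustration of the described pipeline: unfold the data with a manifold embedding (Isomap is assumed here), fit a simple linear classifier in the low-dimensional space, and squash its margin into per-state observation probabilities that an HMM update could consume. The toy data, the choice of Isomap, and the logistic squashing step are all illustrative assumptions.

```python
# Hypothetical sketch of the described pipeline; the manifold method
# (Isomap), the toy data, and the logistic squashing of the perceptron
# margin are illustrative assumptions, not the authors' system.
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap
from sklearn.linear_model import Perceptron

# Toy stand-in for high-dimensional sensor windows: points on a rolled-up
# manifold, with two "behavioural states" defined along the roll.
X_high, t = make_swiss_roll(n_samples=600, noise=0.3, random_state=0)
y_state = (t > np.median(t)).astype(int)

# 1. Unfold the manifold representation into a lower dimension.
embed = Isomap(n_neighbors=10, n_components=2)
X_low = embed.fit_transform(X_high)

# 2. Fit a simple linear classifier (perceptron) in the low dimension,
#    where the two states are now approximately linearly separable.
clf = Perceptron().fit(X_low, y_state)

# 3. Convert the classifier margin into per-state observation
#    probabilities that an HMM update could consume.
def observation_probs(x_high):
    x_low = embed.transform(np.atleast_2d(x_high))
    margin = clf.decision_function(x_low)       # signed distance to boundary
    p1 = 1.0 / (1.0 + np.exp(-margin))          # logistic squashing (assumed)
    return np.column_stack([1.0 - p1, p1])      # [P(state 0), P(state 1)]

print(observation_probs(X_high[:3]))
```

    Periodically refitting the embedding and the classifier on recent labelled sensor windows would be one way to keep state decisions based on the most recent information, in the spirit of the update the abstract describes.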

    Dimension reduction for linear separation with curvilinear distances

    Any high-dimensional data in its original raw form may contain clearly classifiable clusters which are difficult to identify in the high-dimensional representation. By reducing the dimensions it may be possible to apply a simple classification technique to extract this cluster information whilst retaining the overall topology of the data set. The supervised method presented here takes a high-dimensional data set consisting of multiple clusters and employs curvilinear distance as the relation between points, projecting into a lower dimension according to this relationship. This representation allows for linear separation of the non-separable high-dimensional cluster data and for the classification to a cluster of any subsequent unseen data point drawn from the same higher dimension.
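
    As an illustration of the idea (not the paper's own experiments), the sketch below builds a curvilinear relation explicitly: shortest-path distances through a k-nearest-neighbour graph stand in for curvilinear distance, a metric MDS embedding projects the points into a lower dimension while preserving those distances, and a perceptron then separates two clusters that wrap around each other in the original space. The dataset, neighbourhood size, and classifier are all assumed for the example.

```python
# Illustrative construction of the curvilinear-distance idea (toy data
# and classifier are assumptions, not the paper's experiments): shortest
# paths through a k-NN graph act as the curvilinear relation, a metric
# MDS embedding preserves those relations in a lower dimension, and a
# plain linear classifier then separates the clusters.
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph
from sklearn.manifold import MDS
from sklearn.linear_model import Perceptron

rng = np.random.default_rng(0)

# A noisy spiral padded with extra dimensions: the "inner" and "outer"
# halves wrap around each other, so they are not linearly separable
# in the original space.
s = np.linspace(0.5, 3.5 * np.pi, 500)
spiral = np.column_stack([s * np.cos(s), s * np.sin(s)])
X_high = np.hstack([spiral + rng.normal(scale=0.1, size=spiral.shape),
                    0.05 * rng.normal(size=(500, 8))])
y = (s > s.mean()).astype(int)

# Curvilinear distance: shortest-path length through the k-NN graph.
knn = kneighbors_graph(X_high, n_neighbors=10, mode="distance")
curvilinear = shortest_path(knn, method="D", directed=False)

# Project into a lower dimension so these distances are preserved.
X_low = MDS(n_components=2, dissimilarity="precomputed",
            random_state=0).fit_transform(curvilinear)

# A simple linear classifier should now separate the unrolled clusters.
clf = Perceptron().fit(X_low, y)
print("training accuracy after unrolling:", clf.score(X_low, y))
```

    Mapping genuinely unseen points into the same projection requires an out-of-sample extension (for example, interpolating curvilinear distances to a set of landmark points), which this sketch omits.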

    Comparative analysis of imaging configurations and objectives for Fourier microscopy

    Fourier microscopy is becoming an increasingly important tool for the analysis of optical nanostructures and quantum emitters. However, achieving quantitative Fourier space measurements requires a thorough understanding of the impact of aberrations introduced by optical microscopes, which have been optimized for conventional real-space imaging. Here, we present a detailed framework for analyzing the performance of microscope objectives for several common Fourier imaging configurations. To this end, we model objectives from Nikon, Olympus, and Zeiss using parameters that were inferred from patent literature and confirmed, where possible, by physical disassembly. We then examine the aberrations most relevant to Fourier microscopy, including the alignment tolerances of apodization factors for different objective classes, the effect of magnification on the modulation transfer function, and vignetting-induced reductions of the effective numerical aperture for wide-field measurements. Based on this analysis, we identify an optimal objective class and imaging configuration for Fourier microscopy. In addition, as a resource for future studies, the Zemax files for the objectives and setups used in this analysis have been made publicly available. Comment: For a related figshare fileset with complete Zemax models of microscope objectives, tube lenses, and Fourier imaging configurations, see Ref. [41] (available at http://dx.doi.org/10.6084/m9.figshare.1481270)
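
    As background for the apodization discussion (and not a substitute for the paper's Zemax models), the short sketch below evaluates the ideal sine-condition mapping from emission angle to back-focal-plane radius and the textbook 1/cos(theta) intensity apodization for an isotropic emitter viewed through an aplanatic objective. The numerical aperture and focal length are arbitrary illustrative values.

```python
# Illustrative sketch (not the paper's Zemax models): the ideal
# sine-condition mapping from emission angle to back-focal-plane radius,
# and the textbook 1/cos(theta) apodization of the recorded Fourier-space
# intensity for an isotropic emitter and an aplanatic objective.
import numpy as np

NA = 0.95          # numerical aperture (assumed dry objective)
n_medium = 1.0     # refractive index of the object space
f_obj = 2.0        # effective focal length in mm (illustrative value)

theta_max = np.arcsin(NA / n_medium)
theta = np.linspace(0.0, 0.999 * theta_max, 200)   # avoid the exact edge

# Abbe sine condition: a ray at angle theta lands at radius f * n * sin(theta).
r_bfp = f_obj * n_medium * np.sin(theta)

# Energy conservation between solid angle and back-focal-plane area gives
# an intensity apodization ~ 1/cos(theta), which grows steeply toward the
# edge of the numerical aperture.
apodization = 1.0 / np.cos(theta)

for th, r, a in zip(theta[::50], r_bfp[::50], apodization[::50]):
    print(f"theta = {np.degrees(th):5.1f} deg  "
          f"r_BFP = {r:5.3f} mm  apodization = {a:5.2f}")
```

    The steep growth of this factor near the aperture edge is one reason quantitative Fourier-space measurements are sensitive to how closely a real objective follows its nominal apodization behaviour.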

    Understanding Task Design Trade-offs in Crowdsourced Paraphrase Collection

    Linguistically diverse datasets are critical for training and evaluating robust machine learning systems, but data collection is a costly process that often requires experts. Crowdsourcing the process of paraphrase generation is an effective means of expanding natural language datasets, but there has been limited analysis of the trade-offs that arise when designing tasks. In this paper, we present the first systematic study of the key factors in crowdsourcing paraphrase collection. We consider variations in instructions, incentives, data domains, and workflows. We manually analyzed paraphrases for correctness, grammaticality, and linguistic diversity. Our observations provide new insight into the trade-offs between accuracy and diversity in crowd responses that arise as a result of task design, and offer guidance for future paraphrase generation procedures. Comment: Published at ACL 201