The Ca2+ transient as a feedback sensor controlling cardiomyocyte ionic conductances in mouse populations.
Conductances of ion channels and transporters controlling cardiac excitation may vary in a population of subjects with different cardiac gene expression patterns. However, the amount of variability and its origin are not quantitatively known. We propose a new conceptual approach to predict this variability that consists of finding combinations of conductances generating a normal intracellular Ca2+ transient without any constraint on the action potential. Furthermore, we experimentally validate its predictions using the Hybrid Mouse Diversity Panel, a model system of genetically diverse mouse strains that allows us to quantify inter-subject versus intra-subject variability. The method predicts that conductances of inward Ca2+ and outward K+ currents compensate each other to generate a normal Ca2+ transient, in good quantitative agreement with current measurements in ventricular myocytes from hearts of different isogenic strains. Our results suggest that a feedback mechanism sensing the aggregate Ca2+ transient of the heart suffices to regulate ionic conductances.
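The core idea, finding combinations of conductances that all produce a normal Ca2+ transient, can be sketched as a rejection-sampling experiment. This is a toy surrogate for illustration only: the amplitude function below is a made-up proxy, not the authors' biophysical model, and all names are hypothetical.

```python
import random

def ca_transient_amplitude(g_ca, g_k):
    # Toy surrogate (NOT the authors' model): amplitude grows with the
    # inward Ca2+ conductance and shrinks with the outward K+ conductance.
    return g_ca / g_k

def sample_good_models(n=10000, target=1.0, tol=0.1, seed=0):
    """Sample random conductance pairs and keep those whose (toy)
    Ca2+ transient amplitude falls within a 'normal' band."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n):
        g_ca = rng.uniform(0.1, 3.0)
        g_k = rng.uniform(0.1, 3.0)
        if abs(ca_transient_amplitude(g_ca, g_k) - target) <= tol:
            accepted.append((g_ca, g_k))
    return accepted
```

In the accepted population the two conductances are strongly positively correlated even though each was sampled independently, which is the "compensation" signature the abstract describes: many different conductance combinations yield the same normal transient.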
Supplier-induced demand for physiotherapy in the Netherlands
Empirical studies of supplier-induced demand in health care have mostly concentrated on the analysis of physician behaviour. In this article, the focus is on the economic determinants of physiotherapist behaviour in the Netherlands. It is shown that relative prices act as strong incentives to alter the mix of services supplied, conforming to a model of revenue maximization under a production constraint. However, the time-series analysis also gives some indication that the ability to influence demand for services in order to increase hourly income is not fully exploited. The latter finding is inconsistent with pure income maximization and instead points to a trade-off between loss of revenue and demand manipulation. The fact that the choice of therapy varies with the pressure on provider incomes casts some doubt on the appropriateness of the chosen treatment patterns in terms of effectiveness.
An Efficient Learning of Constraints For Semi-Supervised Clustering using Neighbour Clustering Algorithm
Data mining is the process of finding previously unknown and potentially interesting patterns and relations in a database. It is a step in the knowledge discovery in databases (KDD) process. The structures that result from data mining must meet certain conditions before they can be considered knowledge: validity, understandability, utility, novelty and interestingness. Researchers identify two fundamental goals of data mining: prediction and description. The proposed work addresses the semi-supervised clustering problem, in which it is known (with varying degrees of certainty) that some sample pairs are, or are not, in the same class. It presents a probabilistic model for semi-supervised clustering based on Shared Semi-supervised Neighbor clustering (SSNC), which provides a principled framework for incorporating supervision into prototype-based clustering and combines the constraint-based and fitness-based approaches in a unified model. The proposed method first performs constraint-sensitive assignment of instances to clusters: points are assigned so that the overall distortion of the points from the cluster centroids is minimized while a minimum number of must-link and cannot-link constraints are violated. Experimental results on semi-supervised datasets from the UCI Machine Learning repository show that the proposed method achieves higher F-measures than many existing semi-supervised clustering methods.
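The constraint-sensitive assignment step described above can be illustrated with a minimal COP-k-means-style sketch. This is a generic simplification for one-dimensional points, not the SSNC algorithm itself; the function name and the convention that constraints only reference already-assigned (lower-index) points are assumptions made here for brevity.

```python
import random

def constrained_kmeans(points, k, must_link, cannot_link, iters=20, seed=0):
    """Assign each point to the nearest centroid that violates no
    constraint involving an already-assigned (lower-index) point.
    must_link / cannot_link map a point index to a list of indices."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            # Try centroids from nearest to farthest.
            for c in sorted(range(k), key=lambda c: (p - centroids[c]) ** 2):
                ok = all(assign[j] == c for j in must_link.get(i, []) if j < i)
                ok = ok and all(assign[j] != c
                                for j in cannot_link.get(i, []) if j < i)
                if ok:
                    assign[i] = c
                    break
            # If no centroid satisfies the constraints, the previous
            # assignment is kept (a simplification of this sketch).
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return assign
```

A must-link pair ends up in the same cluster even when the points are far apart, and a cannot-link pair is forced into different clusters even when the points are close, which is exactly the trade-off between distortion and constraint violation the abstract describes.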
Frequent Lexicographic Algorithm for Mining Association Rules
Recent progress in computer storage technology has enabled many organisations to collect and store huge amounts of data, leading to a growing demand for new techniques that can intelligently transform massive data into useful information and knowledge. The concept of data mining has drawn the attention of the business community to techniques that can extract nontrivial, implicit, previously unknown and potentially useful information from databases. Association rule mining is one such data mining technique; it discovers strong association or correlation relationships among data. Association rule algorithms typically consist of a two-phase procedure: in the first phase, all frequent patterns are found, and the second phase uses these frequent patterns to generate all strong rules. The common precision measures used in these phases are support and confidence. Intensive investigation over the past few years has shown that the first phase involves the major computational task. Although the second phase seems more straightforward, it can be costly because the number of generated rules is normally large, while only a small fraction of these rules is typically useful and important.
In response to these challenges, this study is devoted to finding faster methods for searching frequent patterns and discovering association rules in concise form. An algorithm called Flex (Frequent lexicographic patterns) is proposed to achieve good performance in searching frequent patterns. The algorithm constructs the nodes of a lexicographic tree that represent frequent patterns, using a depth-first strategy to mine frequent patterns and a vertical counting strategy to compute their support. The mined frequent patterns are then used to generate association rules. Three models were applied in this task, namely a traditional model, a constraint model and a representative model, which produce all association rules, association rules with a single consequence, and representative rules, respectively. As an additional utility in the representative model, this study proposes a set-theoretical intersection to assist users in finding duplicated rules.
Four datasets from the UCI machine learning repository and its domain theories, excluding the pumsb dataset, were used in the experiments. The Flex algorithm and two existing algorithms, Apriori and DIC, were tested on these datasets under the same specification, and their extraction times for mining frequent patterns were recorded and compared. The experimental results showed that the proposed algorithm outperformed both existing algorithms, especially in the case of long patterns, and also gave promising results for short patterns. Two of the datasets were then chosen for a further experiment on the scalability of the algorithms, increasing their number of transactions up to six-fold. The scale-up experiment showed that the proposed algorithm is more scalable than the existing algorithms.
The implementation of the representative model, adopting existing theory, proved that this model is more concise than the other two models, as shown by the number of rules generated by each model. Besides producing a small set of rules, the representative model also has the lossless-information and soundness properties, meaning that it covers all interesting association rules and forbids the derivation of weak rules. It is theoretically proven that the proposed set-theoretical intersection can assist users in identifying the duplicated rules that exist in the representative model.
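The two strategies the abstract attributes to Flex, depth-first traversal of a lexicographic pattern tree and vertical counting of support, are the same ingredients as in the well-known Eclat scheme. The sketch below is a generic Eclat-style implementation for illustration, not the Flex code itself: each tree node is a pattern (tuple of items), children extend the pattern lexicographically, and support is counted by intersecting transaction-id sets.

```python
def mine_frequent(transactions, min_support):
    """Depth-first lexicographic frequent-pattern search with
    vertical (tid-list) support counting."""
    # Vertical layout: item -> set of ids of transactions containing it.
    tids = {}
    for tid, items in enumerate(transactions):
        for item in items:
            tids.setdefault(item, set()).add(tid)

    frequent = {}

    def extend(prefix, prefix_tids, candidates):
        # Depth-first over lexicographic extensions of the current prefix;
        # only items after position i can extend the i-th child (no duplicates).
        for i, (item, item_tids) in enumerate(candidates):
            new_tids = item_tids if prefix_tids is None else prefix_tids & item_tids
            if len(new_tids) >= min_support:  # support via tid-list intersection
                pattern = prefix + (item,)
                frequent[pattern] = len(new_tids)
                extend(pattern, new_tids, candidates[i + 1:])

    extend((), None, sorted(tids.items(), key=lambda kv: kv[0]))
    return frequent
```

Because an infrequent pattern's tid-list can only shrink under further intersection, the depth-first search prunes an entire subtree as soon as a node falls below the support threshold, which is what makes this family of algorithms effective on long patterns.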
Learning image components for object recognition
In order to perform object recognition it is necessary to learn representations of the underlying components of images. Such components correspond to objects, object-parts, or features. Non-negative matrix factorisation is a generative model that has been specifically proposed for finding such meaningful representations of image data, through the use of non-negativity constraints on the factors. This article reports on an empirical investigation of the performance of non-negative matrix factorisation algorithms. It is found that such algorithms need to impose additional constraints on the sparseness of the factors in order to successfully deal with occlusion. However, these constraints can themselves result in these algorithms failing to identify image components under certain conditions. In contrast, a recognition model (a competitive learning neural network algorithm) reliably and accurately learns representations of elementary image features without such constraints.
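The basic factorisation being investigated can be sketched with the standard Lee-Seung multiplicative updates for the Euclidean objective, which keep both factors non-negative by construction. This is a generic sketch, not the specific algorithm variants compared in the article, and it omits the additional sparseness constraints the article finds necessary for occlusion.

```python
import numpy as np

def nmf(V, r, iters=200, seed=0):
    """Factor a non-negative matrix V (n x m) as V ~= W @ H with
    W (n x r) >= 0 and H (r x m) >= 0, via multiplicative updates."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 1e-3   # small offset avoids zero-locking
    H = rng.random((r, m)) + 1e-3
    for _ in range(iters):
        # Multiplicative updates: element-wise ratios keep factors >= 0.
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H
```

For image data, each column of V is a vectorised image, the columns of W play the role of learned image components, and H gives the (non-negative, hence purely additive) activations of those components per image; a sparseness penalty on W or H is the kind of additional constraint the article examines.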