Establishment of reference intervals for ECG values in healthy cats
Feline cardiology is a steadily advancing discipline that relies on a range of complementary examinations. Among these, the electrocardiogram (ECG) remains the only means of investigating rhythm abnormalities. The reference intervals (RIs) available to date for cats have practical limitations: weak statistical representativeness, no confidence intervals, and/or ECG acquisition methods that are stressful for the animal, such as lateral recumbency or anaesthesia. In accordance with international recommendations, our study established RIs for ECG values obtained from 147 cats deemed healthy on the basis of their clinical examination and history, with the owners' informed consent. To limit stress, the ECGs were acquired in sternal recumbency, without any restraint or sedation. The RIs obtained in this study with the nonparametric method, which is applicable to data of any distribution, differ slightly from those of Tilley and Gompf (1976). Weight, sex and age had no influence on the ECG values. In addition, analysis of the tracings revealed a high prevalence (10%) of conduction abnormalities in asymptomatic cats (right and left bundle branch block, left anterior fascicular block).
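As a rough illustration of the nonparametric reference-interval method mentioned above (taking the central 95% of observations, bounded by the 2.5th and 97.5th percentiles), here is a minimal sketch. The data are synthetic stand-ins, not the study's feline ECG measurements, and the `percentile` helper is a simple linear-interpolation version written for the example.

```python
# Nonparametric reference interval: the central 95% of observed values,
# bounded by the 2.5th and 97.5th percentiles. Illustrative sketch only;
# the values below are synthetic, not the study's feline ECG data.
import random

def percentile(sorted_values, p):
    """Linear-interpolation percentile (0 <= p <= 100) on a sorted list."""
    n = len(sorted_values)
    rank = p / 100 * (n - 1)
    lo = int(rank)
    hi = min(lo + 1, n - 1)
    frac = rank - lo
    return sorted_values[lo] * (1 - frac) + sorted_values[hi] * frac

def reference_interval(values):
    s = sorted(values)
    return percentile(s, 2.5), percentile(s, 97.5)

# Synthetic example: 147 hypothetical heart-rate measurements (bpm).
random.seed(0)
rates = [random.gauss(180, 20) for _ in range(147)]
low, high = reference_interval(rates)
```

With 147 observations the nonparametric method is usable (the usual guideline asks for at least 120), which is one reason the sample size matters here.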
Clustering as an example of optimizing arbitrarily chosen objective functions
This paper reflects on a common practice: solving various types of learning problems by optimizing arbitrarily chosen criteria, in the hope that they correlate well with the criterion actually used to assess the results. We investigate this issue using clustering as an example. We first propose a unified view of clustering as an optimization problem, stemming from the observation that typical design choices in clustering, such as the number of clusters or the similarity measure, can be, and often are, suboptimal, even with respect to the clustering quality measures later used for algorithm comparison and ranking. To illustrate this point, we propose a generalized clustering framework and provide a proof of concept on standard benchmark datasets with two popular clustering methods for comparison.
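The mismatch between the optimized criterion and the assessment criterion can be made concrete with within-cluster SSE: adding clusters always lowers it, so minimizing SSE alone cannot select the number of clusters. A minimal brute-force sketch on 1D toy data (illustrative only, not the paper's framework):

```python
# Within-cluster SSE can always be reduced by adding clusters, so minimizing
# SSE alone says nothing about the "right" number of clusters. Brute-force
# demo on a tiny 1D dataset.
from itertools import combinations

def sse(points, centroid):
    return sum((p - centroid) ** 2 for p in points)

def best_sse(data, k):
    """Optimal SSE over all partitions of sorted 1D data into k contiguous
    groups (optimal 1D clusterings are contiguous after sorting)."""
    data = sorted(data)
    n = len(data)
    best = float("inf")
    # choose k-1 cut positions between consecutive points
    for cuts in combinations(range(1, n), k - 1):
        bounds = (0,) + cuts + (n,)
        total = 0.0
        for a, b in zip(bounds, bounds[1:]):
            seg = data[a:b]
            total += sse(seg, sum(seg) / len(seg))
        best = min(best, total)
    return best

data = [1.0, 1.2, 1.1, 5.0, 5.3, 5.1, 9.0, 9.2]
scores = {k: best_sse(data, k) for k in (1, 2, 3, 4)}
# SSE strictly decreases as k grows, regardless of the data's true structure.
```

Even though the data have three obvious groups, the optimized objective keeps improving at k = 4, so a second, independent quality measure is needed for model selection.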
Beginning an optometric practice: Partnerships, locations, evaluations, valuations, and contractual agreements
Finding groups in data: Cluster analysis with ants
We present in this paper a modification of Lumer and Faieta's algorithm for data clustering. This approach mimics the clustering behavior observed in real ant colonies, and the algorithm automatically discovers clusters in numerical data without prior knowledge of the possible number of clusters. We focus on ant-based clustering algorithms, a particular kind of swarm intelligent system, and on how the final clustering is affected by the dissimilarity metric used during classification: the Euclidean, Cosine, and Gower measures. Clustering with swarm-based algorithms is emerging as an alternative to more conventional clustering methods such as k-means. Among the many bio-inspired techniques, ant clustering algorithms have received special attention, in particular because they still require substantial investigation to improve performance, stability and other key features before they can be considered mature tools for data mining.
As a case study, this paper focuses on the behavior of clustering procedures in these new approaches. The proposed algorithm and its modifications are evaluated on a number of well-known benchmark datasets. Empirical results clearly show that ant-based clustering algorithms perform well compared with other techniques.
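For reference, the three dissimilarity measures compared in the paper can be sketched as follows for numeric feature vectors. The Gower measure is shown in its numeric-only form (the mean of range-normalised absolute differences), and the example vectors and ranges are made up for illustration.

```python
# The three dissimilarity measures compared in the paper, sketched for
# numeric feature vectors. Gower is shown in its numeric-only form.
import math

def euclidean(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def cosine_dissimilarity(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return 1.0 - dot / (nx * ny)

def gower(x, y, ranges):
    """ranges[i] = max - min of feature i over the whole dataset."""
    return sum(abs(a - b) / r for a, b, r in zip(x, y, ranges)) / len(x)

x, y = (1.0, 2.0, 3.0), (2.0, 2.0, 5.0)
ranges = (4.0, 4.0, 4.0)      # assumed feature ranges for the sketch
d_euc = euclidean(x, y)       # sqrt(1 + 0 + 4) = sqrt(5)
d_cos = cosine_dissimilarity(x, y)
d_gow = gower(x, y, ranges)   # (0.25 + 0 + 0.5) / 3 = 0.25
```

Because the three measures scale distances differently (absolute, angular, and range-normalised), swapping one for another can change which items an ant picks up or drops, and hence the final clustering.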
Thermogenesis in conduction calorimeters: solid-solid transformations and liquid mixtures
This paper presents several methods for obtaining transfer functions associated with power dissipations in actual phenomena, together with a few examples of the approximate thermogenesis obtained.
On the one hand, calorimeters with very good dynamic characteristics (θn ∼ 3 Hz) allow the study of structural transformations in solids. We present results concerning the martensitic transformation β → γ' of a Cu-Zn-Al alloy. They show the jerky character of the transformation, an important energy liberation, and an excellent correlation with the acoustic emission patterns generated during the transformation. This analysis gives an estimate of the possibilities of calorimetry within the field of differential enthalpic analysis.
On the other hand, the analysis of excess enthalpies in liquid mixtures is particularly interesting at low concentrations. Steady injection systems allow solute molar fractions x_s ≳ 0.01 to be reached. Obtaining a correct transfer function of the calorimetric system and applying efficient deconvolution algorithms lowers the attainable molar fraction to x_s ≳ 0.001.
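The simplest case of such a deconvolution can be sketched with the classical first-order (Tian) correction, where the calorimeter smooths the true power P into a signal S with time constant τ, and P is recovered as S + τ dS/dt. This is only the textbook first-order model, not the higher-order transfer functions and algorithms used in the paper, and all numbers below are assumed for the demonstration.

```python
# First-order (Tian) deconvolution sketch: a calorimeter with time constant
# tau smooths the true power P into a signal S with dS/dt = (P - S)/tau.
# Inverting gives P = S + tau*dS/dt. Textbook first-order case only.

tau = 2.0      # assumed time constant (s)
dt = 0.01      # integration step (s)
steps = 3000

# True dissipation: a rectangular power pulse.
def true_power(t):
    return 1.0 if 5.0 <= t <= 10.0 else 0.0

# Simulate the smoothed signal S by explicit Euler integration.
signal = [0.0]
for i in range(steps):
    t = i * dt
    s = signal[-1]
    signal.append(s + dt * (true_power(t) - s) / tau)

# Deconvolve: P_rec = S + tau * dS/dt (finite differences).
recovered = [signal[i] + tau * (signal[i + 1] - signal[i]) / dt
             for i in range(steps)]
```

The smoothed signal never reaches the true power level during the pulse, while the deconvolved trace recovers the rectangular shape; with real, noisy data the derivative term amplifies noise, which is why robust deconvolution algorithms matter.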
Proarrhythmic remodelling of the right ventricle in a porcine model of repaired tetralogy of Fallot
OBJECTIVE: The growing adult population with surgically corrected tetralogy of Fallot (TOF) is at risk of arrhythmias and sudden cardiac death. We sought to investigate the contribution of right ventricular (RV) structural and electrophysiological remodelling to arrhythmia generation in a preclinical animal model of repaired TOF (rTOF). METHODS AND RESULTS: Pigs mimicking rTOF underwent cardiac MRI functional characterisation and presented with pulmonary regurgitation, RV hypertrophy, dilatation and dysfunction compared with Sham-operated animals (Sham). Optical mapping of rTOF RV-perfused wedges revealed a significant prolongation of RV activation time with slower conduction velocities and regions of conduction slowing well beyond the surgical scar. A reduced protein expression and lateralisation of Connexin-43 were identified in rTOF RVs. A remodelling of extracellular matrix-related gene expression and an increase in collagen content that correlated with prolonged RV activation time were also found in these animals. RV action potential duration (APD) was prolonged in the epicardial anterior region at early and late repolarisation level, thus contributing to a greater APD heterogeneity and to altered transmural and anteroposterior APD gradients in rTOF RVs. APD remodelling involved changes in Kv4.3 and MiRP1 expression. Spontaneous arrhythmias were more frequent in rTOF wedges and more complex in the anterior than in the posterior RV. CONCLUSION: Significant remodelling of RV conduction and repolarisation properties was found in pigs with rTOF. This remodelling generates a proarrhythmic substrate likely to facilitate re-entries and to contribute to sudden cardiac death in patients with rTOF
Warped K-Means: An algorithm to cluster sequentially-distributed data
Many devices generate large amounts of data that follow some sort of sequentiality, e.g.,
motion sensors, e-pens, eye trackers, etc., and often these data need to be compressed for
classification, storage, and/or retrieval tasks. Traditional clustering algorithms can be used
for this purpose, but unfortunately they do not cope with the sequential information
implicitly embedded in such data. Thus, we revisit the well-known K-means algorithm
and provide a general method to properly cluster sequentially-distributed data. We present
Warped K-Means (WKM), a multi-purpose partitional clustering procedure that minimizes
the sum of squared error criterion, while imposing a hard sequentiality constraint in the
classification step. We illustrate the properties of WKM in three applications, one being
the segmentation and classification of human activity. WKM outperformed five state-of-
the-art clustering techniques at simplifying data trajectories, achieving a recognition
accuracy of nearly 97%, an improvement of around 66% over its peers. Moreover, such an
improvement came with a reduction in the computational cost of more than one order of
magnitude.
This work has been partially supported by the Casmacat (FP7-ICT-2011-7, Project 287576), tranScriptorium (FP7-ICT-2011-9, Project 600707), STraDA (MINECO, TIN2012-37475-0O2-01), and ALMPR (GVA, Prometeo/20091014) projects.
Leiva Torres, L. A.; Vidal, E. (2013). Warped K-Means: an algorithm to cluster sequentially-distributed data. Information Sciences, 237:196-210. https://doi.org/10.1016/j.ins.2013.02.042
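The hard sequentiality constraint means each cluster is a contiguous run of samples, so clustering reduces to placing segment boundaries. A much-simplified sketch in that spirit (greedy boundary refinement on 1D data, not the authors' WKM implementation):

```python
# Minimal sketch of sequentially-constrained clustering in the spirit of
# WKM: clusters are contiguous segments of the input sequence, and segment
# boundaries are greedily shifted while the sum of squared errors (SSE)
# drops. A simplification, not the authors' algorithm.

def seg_sse(data, a, b):
    seg = data[a:b]
    mu = sum(seg) / len(seg)
    return sum((x - mu) ** 2 for x in seg)

def sequential_kmeans(data, k, max_iter=100):
    n = len(data)
    # uniform initial boundaries: bounds[j]..bounds[j+1] is cluster j
    bounds = [round(j * n / k) for j in range(k + 1)]
    for _ in range(max_iter):
        moved = False
        for j in range(1, k):  # try shifting each interior boundary by +-1
            cur = seg_sse(data, bounds[j - 1], bounds[j]) + \
                  seg_sse(data, bounds[j], bounds[j + 1])
            best = bounds[j]
            for cand in (bounds[j] - 1, bounds[j] + 1):
                if bounds[j - 1] < cand < bounds[j + 1]:  # keep segments nonempty
                    alt = seg_sse(data, bounds[j - 1], cand) + \
                          seg_sse(data, cand, bounds[j + 1])
                    if alt < cur:
                        cur, best = alt, cand
            if best != bounds[j]:
                bounds[j] = best
                moved = True
        if not moved:
            break
    return bounds

# A sequence with three flat regimes; boundaries land at indices 4 and 8.
data = [0.0, 0.1, 0.0, 0.2, 5.0, 5.1, 4.9, 5.2, 9.0, 9.1, 8.9, 9.0]
bounds = sequential_kmeans(data, 3)
```

An unconstrained K-means could assign temporally distant samples to the same cluster; the contiguity constraint is what preserves the sequential structure.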
Reliability of resistivity quantification for shallow subsurface water processes
The reliability of surface-based electrical resistivity tomography (ERT) for
quantifying resistivities for shallow subsurface water processes is analysed. A
method comprising numerical simulations of water movement in soil and
forward-inverse modeling of ERT surveys for two synthetic data sets is
presented. Resistivity contrasts, e.g. those caused by changing water content, are
shown to have a large influence on the resistivity quantification.
An ensemble and clustering approach is introduced in which ensembles of 50
different inversion models for one data set are created by randomly varying the
parameters of a regularisation-based inversion routine. The ensemble members
are sorted into five clusters of similar models and the mean model for each
cluster is computed. Distinguishing persisting features in the mean models from
singular artifacts in individual tomograms can improve the interpretation of
inversion results.
Especially in the presence of large resistivity contrasts in high-sensitivity
areas, the quantification of resistivities can be unreliable. The ensemble
approach shows that this is an inherent problem for all models inverted
with the regularisation-based routine. The results also suggest that the
combination of hydrological and electrical modeling might lead to better
results.
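The ensemble-and-clustering workflow can be sketched with stand-in model vectors. A real application would use 50 ERT inversions with randomised regularisation settings and a proper clustering of the members rather than the nearest-seed grouping used here; all templates and thresholds below are invented for the illustration.

```python
# Sketch of the ensemble-and-clustering idea with stand-in model vectors:
# 50 "inversion models" are generated as noisy variants of two templates
# (real use: 50 ERT inversions with randomised regularisation settings),
# grouped by nearest seed model, and averaged per group. Features shared
# by all group means are persistent; features appearing only in individual
# members are likely inversion artifacts.
import random

random.seed(1)
template_a = [100.0, 100.0, 10.0, 100.0]  # low-resistivity anomaly at cell 2
template_b = [100.0, 100.0, 12.0, 100.0]
ensemble = [[v + random.gauss(0, 2)
             for v in random.choice([template_a, template_b])]
            for _ in range(50)]

def dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

seeds = ensemble[:5]                       # 5 groups, seeded by first members
groups = {i: [] for i in range(5)}
for m in ensemble:
    nearest = min(range(5), key=lambda i: dist(m, seeds[i]))
    groups[nearest].append(m)

# Mean model per nonempty group (column-wise average).
mean_models = [[sum(col) / len(col) for col in zip(*g)]
               for g in groups.values() if g]

# The low-resistivity anomaly at cell 2 persists in every mean model.
persistent = all(mm[2] < 50.0 for mm in mean_models)
```

Averaging within clusters suppresses member-specific artifacts while keeping features the inversions agree on, which is the interpretive gain claimed above.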
Separation of poliovirus and poliovirus RNA on Sephadex G 200
Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/41675/1/705_2005_Article_BF01241426.pd