186 research outputs found

    An objective based classification of aggregation techniques for wireless sensor networks

    Wireless Sensor Networks (WSNs) have gained immense popularity in recent years due to their ever-increasing capabilities and wide range of critical applications. A huge body of research has been dedicated to finding ways to utilize the limited resources of sensor nodes efficiently. One common way to minimize energy consumption is aggregation of input data. We note that every aggregation technique has an improvement objective with respect to the output it produces: each technique is designed to achieve some target, e.g. reduce data size, minimize transmission energy, or enhance accuracy. This paper presents a comprehensive survey of aggregation techniques that can be used in a distributed manner to improve the lifetime and energy conservation of wireless sensor networks. The main contribution of this work is a novel classification of such techniques based on the type of improvement they offer when applied to WSNs. Because a myriad of definitions of aggregation exist, we first review the meanings of the term that apply to WSNs. The concept is then associated with the proposed classes. Each class of techniques is divided into a number of subclasses, and a brief literature review of related WSN work for each of these is also presented.
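As a minimal illustration of the size-reduction objective the survey classifies, the sketch below (hypothetical readings and a plain mean aggregate, not any specific technique from the paper) shows how an intermediate node can fuse several child readings into a single forwarded value, cutting the number of transmissions:

```python
# Minimal sketch of in-network aggregation in a WSN. The node names and
# temperature readings are invented; the mean is just one possible
# aggregate (size-reduction / transmission-energy objective).

def aggregate_mean(readings):
    """Fuse child readings into one value before forwarding upstream."""
    return sum(readings) / len(readings)

child_readings = [21.5, 22.0, 21.7]   # raw samples from three child nodes (assumed)
fused = aggregate_mean(child_readings)

# Without aggregation the relay forwards 3 messages; with it, just 1.
print(fused, len(child_readings), 1)
```

The same relay pattern applies to other aggregates (min, max, count); the survey's point is that the choice of aggregate determines which improvement objective is achieved.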

    Use of ensemble based on GA for imbalance problem

    In real-world applications, class imbalance (significant differences in class prior probabilities) can markedly deteriorate classifier performance, particularly on patterns belonging to the less-represented classes. One way to tackle this problem is to resample the original training set, either by over-sampling the minority class and/or under-sampling the majority class. In this paper, we propose two ensemble models (using a modular neural network and the nearest neighbor rule) trained on datasets under-sampled with genetic algorithms. Experiments with real datasets demonstrate the effectiveness of the proposed methodology.
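A stripped-down sketch of the under-sampling ensemble idea follows. Plain random under-sampling stands in for the paper's genetic-algorithm subset selection, and the 1-NN rule is the base classifier; the data points and labels are invented:

```python
import random
from collections import Counter

def undersample(X, y, majority_label, subset_size, rng):
    # Keep every minority sample; draw a small random majority subset.
    # (The paper selects these subsets with a genetic algorithm instead.)
    maj = [(x, c) for x, c in zip(X, y) if c == majority_label]
    mino = [(x, c) for x, c in zip(X, y) if c != majority_label]
    return mino + rng.sample(maj, subset_size)

def nn_predict(train, query):
    # 1-NN rule with squared Euclidean distance.
    return min(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], query)))[1]

def ensemble_predict(members, query):
    # Majority vote over the ensemble members' 1-NN predictions.
    votes = Counter(nn_predict(t, query) for t in members)
    return votes.most_common(1)[0][0]

rng = random.Random(0)
# Toy imbalanced data: 8 majority samples (class 0) near the origin,
# 2 minority samples (class 1) near (5, 5).
X = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.5), (1, 0.5), (0.2, 0.8), (0.9, 0.1),
     (5, 5), (5.5, 4.8)]
y = [0] * 8 + [1] * 2
members = [undersample(X, y, majority_label=0, subset_size=2, rng=rng)
           for _ in range(5)]
print(ensemble_predict(members, (5.2, 5.1)))  # minority region -> 1
```

Each member sees a balanced 2-vs-2 training set, so the minority region is no longer swamped by majority neighbors.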

    Fulminant hepatitis in a tropical population: clinical course, cause, and early predictors of outcome

    The profiles of patients with fulminant hepatic failure (FHF) from developing countries have not been reported earlier. The current study was conducted prospectively, at a single tertiary care center in India, to document the demographic and clinical characteristics, natural course, and causative profile of patients with FHF as well as to define simple prognostic markers in these patients. Four hundred twenty-three consecutive patients with FHF admitted from January 1987 to June 1993 were included in the study. Each patient's serum was tested for various hepatotropic viruses. Univariate Cox's regression for 28 variables, multivariate Cox's proportional hazard regression, stepwise logistic regression, and Kaplan-Meier survival analysis were done to identify independent predictors of outcome at admission. All patients presented with encephalopathy within 4 weeks of onset of symptoms. Hepatotropic viruses were the likely cause in most of these patients. Hepatitis A (HAV), hepatitis B (HBV), hepatitis D (HDV) viruses, and antitubercular drugs could be implicated as the cause of FHF in 1.7% (n = 7), 28% (n = 117), 3.8% (n = 16), and 4.5% (n = 19) patients, respectively. In the remaining 62% (n = 264) of patients the serological evidence of HAV, HBV, or HDV infection was lacking, and none of them had ingested hepatotoxins. FHF was presumed to be caused by non-A, non-B virus(es) infection. Sera of 50 patients from the latter group were tested for hepatitis E virus (HEV) RNA and HCV RNA. In 31 (62%), HEV could be implicated as the causative agent, and isolated HCV RNA could be detected in 7 (19%). Two hundred eighty-eight (66%) patients died. Approximately 75% of those who died did so within 72 hours of hospitalisation. One quarter of the female patients with FHF were pregnant. Mortality among pregnant females, nonpregnant females, and male patients with FHF was similar (P > .1).
Univariate analysis showed that age, size of the liver assessed by percussion, grade of coma, presence of clinical features of cerebral edema, presence of infection, serum bilirubin, and prothrombin time prolongation over controls at admission were related to survival (P < .01). The rapidity of onset of encephalopathy and cause of FHF did not influence the outcome. Cox's proportional hazard regression showed that age ≥ 40 years, presence of cerebral edema, serum bilirubin ≥ 15 mg/dL, and prothrombin time prolongation of 25 seconds or more over controls were independent predictors of outcome. Ninety-three percent of the patients with three or more of the above prognostic markers died. The sensitivity, specificity, positive predictive value, and negative predictive value of the presence of three or more of these prognostic factors for mortality were 93%, 80%, 86%, and 89.5%, respectively, with a diagnostic accuracy of 87.3%. We conclude that FHF in most of our patients was likely caused by hepatotropic viral infection, with non-A, non-B virus(es) the dominant hepatotropic agents among these patients. Patients presented with encephalopathy within 4 weeks of the onset of symptoms. Pregnancy, cause, and rapidity of onset of encephalopathy did not influence survival. The prognostic model developed in the current study is simple and can be applied at admission.
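The admission-time prognostic model described above reduces to counting four markers and flagging three or more. A small sketch (variable names are ours; thresholds are as reported in the abstract, and this is an illustration of the published rule, not a clinical tool):

```python
def fhf_prognostic_markers(age, cerebral_edema, bilirubin_mg_dl, pt_prolongation_s):
    """Count the four independent admission markers from the study."""
    return sum([
        age >= 40,                 # age >= 40 years
        bool(cerebral_edema),      # clinical features of cerebral edema
        bilirubin_mg_dl >= 15,     # serum bilirubin >= 15 mg/dL
        pt_prolongation_s >= 25,   # PT prolongation >= 25 s over controls
    ])

def poor_prognosis(marker_count):
    # >= 3 markers predicted mortality (93% sensitivity, 80% specificity).
    return marker_count >= 3

n = fhf_prognostic_markers(age=45, cerebral_edema=True,
                           bilirubin_mg_dl=16.2, pt_prolongation_s=30)
print(n, poor_prognosis(n))  # 4 True
```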

    An adaptive version of k-medoids to deal with the uncertainty in clustering heterogeneous data using an intermediary fusion approach

    This paper introduces Hk-medoids, a modified version of the standard k-medoids algorithm. The modification extends the algorithm to the problem of clustering complex heterogeneous objects described by a diversity of data types, e.g. text, images, structured data and time series. We first proposed an intermediary fusion approach, SMF, to calculate fused similarities between objects, taking into account the similarities between the component elements of the objects using appropriate similarity measures. The fused approach entails uncertainty for incomplete objects or for objects whose distances diverge across the different components. Our implementation of Hk-medoids works with the fused distances and deals with the uncertainty in the fusion process. We experimentally evaluate the potential of the proposed algorithm using five datasets with different combinations of data types defining the objects. Our results show the feasibility of our algorithm, and they also show a performance enhancement compared with applying the original SMF approach in combination with a standard k-medoids that does not take uncertainty into account. In addition, from a theoretical point of view, our proposed algorithm has lower computational complexity than the popular PAM implementation.
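A minimal sketch of the underlying k-medoids step over a precomputed fused distance matrix (toy distances; this shows only the standard Voronoi-iteration variant and omits both SMF fusion and the uncertainty handling that Hk-medoids adds):

```python
def k_medoids(D, medoids, max_iter=100):
    """Voronoi-iteration k-medoids on a precomputed (fused) distance matrix.

    D[i][j] is the fused dissimilarity between objects i and j;
    `medoids` holds the initial medoid indices.
    """
    n = len(D)
    medoids = sorted(medoids)
    for _ in range(max_iter):
        # Assignment: each object joins the cluster of its nearest medoid.
        clusters = {m: [] for m in medoids}
        for i in range(n):
            clusters[min(medoids, key=lambda m: D[i][m])].append(i)
        # Update: the new medoid of each cluster minimizes total in-cluster distance.
        new_medoids = sorted(
            min(members, key=lambda c: sum(D[c][j] for j in members))
            for members in clusters.values()
        )
        if new_medoids == medoids:
            break
        medoids = new_medoids
    return medoids, clusters

# Toy fused distances: objects 0-2 form one tight group, 3-4 another.
D = [[0, 1, 1, 10, 10],
     [1, 0, 1, 10, 10],
     [1, 1, 0, 10, 10],
     [10, 10, 10, 0, 1],
     [10, 10, 10, 1, 0]]
medoids, clusters = k_medoids(D, medoids=[0, 3])
print(medoids, clusters)  # [0, 3] {0: [0, 1, 2], 3: [3, 4]}
```

Because the algorithm only ever reads D, the objects themselves can be arbitrarily heterogeneous; all the modality-specific work lives in how D is fused.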

    Is EC class predictable from reaction mechanism?

    We thank the Scottish Universities Life Sciences Alliance (SULSA) and the Scottish Overseas Research Student Awards Scheme of the Scottish Funding Council (SFC) for financial support. Background: We investigate the relationships between the EC (Enzyme Commission) class, the associated chemical reaction, and the reaction mechanism by building predictive models using Support Vector Machine (SVM), Random Forest (RF) and k-Nearest Neighbours (kNN). We consider two ways of encoding the reaction mechanism in descriptors, and three approaches that encode only the overall chemical reaction. Both cross-validation and an external test set are used. Results: The three descriptor sets encoding the overall chemical transformation perform better than the two descriptions of mechanism. SVM and RF models perform comparably well; kNN is less successful. Oxidoreductases and hydrolases are relatively well predicted by all types of descriptor; isomerases are well predicted by overall reaction descriptors but not by mechanistic ones. Conclusions: Our results suggest that pairs of similar enzyme reactions tend to proceed by different mechanisms. Oxidoreductases, hydrolases, and to some extent isomerases and ligases, have clear chemical signatures, making them easier to predict than transferases and lyases. We find evidence that isomerases as a class are notably mechanistically diverse and that their one shared property, of substrate and product being isomers, can arise in various unrelated ways. The performance of the different machine learning algorithms is in line with many cheminformatics applications, with SVM and RF being roughly equally effective; kNN is less successful, given the role that non-local information plays in successful classification. We note also that, despite a lack of clarity in the literature, EC number prediction is not a single problem; the challenge of predicting protein function from available sequence data is quite different from that of assigning an EC classification from a cheminformatics representation of a reaction.
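To make the descriptor-based classification setting concrete, here is a toy kNN classifier over invented binary reaction fingerprints with EC top-level labels (the study uses far richer reaction descriptors and also SVM and RF models; every fingerprint and label below is an assumption for illustration):

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Majority vote among the k nearest training reactions (Hamming distance)."""
    dist = lambda a, b: sum(x != y for x, y in zip(a, b))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical 6-bit reaction fingerprints with EC top-level labels.
train = [
    ((1, 1, 0, 0, 0, 0), "EC1"),  # oxidoreductase-like pattern (assumed)
    ((1, 0, 1, 0, 0, 0), "EC1"),
    ((0, 0, 0, 1, 1, 0), "EC3"),  # hydrolase-like pattern (assumed)
    ((0, 0, 0, 1, 0, 1), "EC3"),
    ((0, 0, 0, 0, 1, 1), "EC3"),
]
print(knn_predict(train, (1, 1, 1, 0, 0, 0), k=3))  # "EC1"
```

Since kNN only uses local neighborhoods, it misses the global class signatures that SVM and RF can exploit, which is consistent with its weaker performance in the study.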

    The diagnostic accuracy of US, CT, MRI and 1H-MRS for the evaluation of hepatic steatosis compared with liver biopsy: a meta-analysis

    OBJECTIVE: To meta-analyse the diagnostic accuracy of US, CT, MRI and (1)H-MRS for the evaluation of hepatic steatosis. METHODS: From a comprehensive literature search in MEDLINE, EMBASE, CINAHL and Cochrane (up to November 2009), articles were selected that investigated the diagnostic performance of imaging techniques for evaluating hepatic steatosis with histopathology as the reference standard. Cut-off values for the presence of steatosis on liver biopsy were subdivided into four groups: (1) >0, >2 and >5% steatosis; (2) >10, >15 and >20%; (3) >25, >30 and >33%; (4) >50, >60 and >66%. Per group, summary estimates for sensitivity and specificity were calculated. The natural logarithm of the diagnostic odds ratio (lnDOR) was used as a single indicator of test performance. RESULTS: 46 articles were included. Mean sensitivity estimates for subgroups were 73.3-90.5% (US), 46.1-72.0% (CT), 82.0-97.4% (MRI) and 72.7-88.5% ((1)H-MRS). Mean specificity ranges were 69.6-85.2% (US), 88.1-94.6% (CT), 76.1-95.3% (MRI) and 92.0-95.7% ((1)H-MRS). Overall performance (lnDOR) of MRI and (1)H-MRS was better than that of US and CT for all subgroups, with significant differences in groups 1 and 2. CONCLUSION: MRI and (1)H-MRS can be considered the techniques of choice for accurate evaluation of hepatic steatosis.
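The lnDOR summary statistic combines sensitivity and specificity into a single number. A small sketch of the computation (the two operating points below are illustrative pairings drawn loosely from the abstract's ranges, not the meta-analysis's pooled per-study results):

```python
import math

def ln_dor(sensitivity, specificity):
    """Natural log of the diagnostic odds ratio:
    DOR = (sens / (1 - sens)) / ((1 - spec) / spec)."""
    dor = (sensitivity / (1 - sensitivity)) / ((1 - specificity) / specificity)
    return math.log(dor)

# Illustrative operating points (assumed, for demonstration only).
print(round(ln_dor(0.90, 0.85), 2))  # MRI-like point -> 3.93
print(round(ln_dor(0.60, 0.91), 2))  # CT-like point  -> 2.72
```

A higher lnDOR means better overall discrimination; here the MRI-like point dominates the CT-like one despite CT's higher specificity, which mirrors the study's conclusion.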

    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy.
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways so not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.

    Guidelines for the use and interpretation of assays for monitoring autophagy (4th edition)

    In 2008, we published the first set of guidelines for standardizing research in autophagy. Since then, this topic has received increasing attention, and many scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Thus, it is important to formulate updated guidelines on a regular basis for monitoring autophagy in different organisms. Despite numerous reviews, there continues to be confusion regarding acceptable methods to evaluate autophagy, especially in multicellular eukaryotes. Here, we present a set of guidelines for investigators to select and interpret methods to examine autophagy and related processes, and for reviewers to provide realistic and reasonable critiques of reports that are focused on these processes. These guidelines are not meant to be a dogmatic set of rules, because the appropriateness of any assay largely depends on the question being asked and the system being used. Moreover, no individual assay is perfect for every situation, calling for the use of multiple techniques to properly monitor autophagy in each experimental setting. Finally, several core components of the autophagy machinery have been implicated in distinct autophagic processes (canonical and noncanonical autophagy), implying that genetic approaches to block autophagy should rely on targeting two or more autophagy-related genes that ideally participate in distinct steps of the pathway. Along similar lines, because multiple proteins involved in autophagy also regulate other cellular pathways including apoptosis, not all of them can be used as specific markers for bona fide autophagic responses. Here, we critically discuss current methods of assessing autophagy and the information they can, or cannot, provide. Our ultimate goal is to encourage intellectual and technical innovation in the field.