1,090 research outputs found
A study of localization metrics: Evaluation of position errors in wireless sensor networks
For wireless sensor network applications that require location information for sensor nodes, node locations can be estimated by a number of localization algorithms, which inevitably introduce various types of errors into their estimates. How an application is affected by errors, and how a location error metric responds to them, may depend on the error characteristics. It is therefore important to use the right error metric when evaluating the error performance of the alternative localization techniques that could be used for an application. To date, unfortunately, only simplistic error metrics that depend on the Euclidean distance between an actual node position and its estimate, in isolation from the rest of the network, have been considered for evaluating localization algorithms. In this paper, we first clarify the problem with this traditional approach and then propose alternative, new metrics that take the overall network topology and its estimate into account when computing a metric value. We compared the existing and new metrics via simulation experiments on typical application and error scenarios and observed that some of the new metrics are more sensitive to certain types of errors and can therefore better distinguish among alternative localization algorithms for applications that are sensitive to those error types. We also present a case study with localization algorithms from the literature to illustrate the practical use of our approach. Finally, we provide a step-by-step guideline for selecting the best metric for a given sensor network application. © 2011 Elsevier B.V. All rights reserved.
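As a point of reference for the traditional approach criticized above, the minimal sketch below computes the conventional per-node metric (mean Euclidean error between actual and estimated positions); the function name and NumPy formulation are illustrative assumptions, not the paper's notation or proposed topology-aware metrics.

```python
import numpy as np

def mean_euclidean_error(true_positions, estimated_positions):
    """Traditional per-node metric: average Euclidean distance between each
    node's actual position and its estimate, ignoring the rest of the topology."""
    true_positions = np.asarray(true_positions, dtype=float)
    estimated_positions = np.asarray(estimated_positions, dtype=float)
    return float(np.mean(np.linalg.norm(true_positions - estimated_positions, axis=1)))

# Example: three nodes in a 2-D deployment area (coordinates in metres).
actual = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]
estimated = [(1.0, 0.0), (10.0, 2.0), (9.0, 10.0)]
print(mean_euclidean_error(actual, estimated))  # (1 + 2 + 1) / 3 ≈ 1.33
```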
On Security and reliability using cooperative transmissions in sensor networks
Cooperative transmissions have received recent attention and research papers have demonstrated their benefits for wireless networks. Such benefits include improving the reliability of links through diversity and/or increasing the reach of a link compared to a single transmitter transmitting to a single receiver (single-input single-output or SISO). In one form of cooperative transmissions, multiple nodes can act as virtual antenna elements and provide diversity gain or range improvement using space-time coding. In a multi-hop ad hoc or sensor network, a source node can make use of its neighbors as relays with itself to reach an intermediate node with greater reliability or at a larger distance than otherwise possible. The intermediate node will use its neighbors in a similar manner and this process continues till the destination is reached. Thus, for the same reliability of a link as SISO, the number of hops between a source and destination may be reduced using cooperative transmissions as each hop spans a larger distance. However, the presence of malicious or compromised nodes in the network impacts the benefits obtained with cooperative transmissions. Using more relays can increase the reach of a link, but if one or more relays are malicious, the transmission may fail. However, the relationships between the number of relays, the number of hops, and success probabilities are not trivial to determine. In this paper, we analyze this problem to understand the conditions under which cooperative transmissions fare better or worse than SISO transmissions. We take into consideration additional parameters such as the path-loss exponent and provide a framework that allows us to evaluate the conditions when cooperative transmissions are better than SISO transmissions. This analysis provides insights that can be employed before resorting to simulations or experimentation. © Springer Science+Business Media, LLC 2012.
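The trade-off described above can be illustrated with a deliberately simple back-of-the-envelope model, sketched below: per-hop range is assumed to scale as m^(1/alpha) with m cooperating transmitters and path-loss exponent alpha, each recruited relay is independently malicious with some probability, and link-level fading is ignored. This is a toy illustration under those stated assumptions, not the paper's analytical framework.

```python
import math

def hop_range(d_siso, num_tx, alpha):
    """Illustrative scaling: combined received power grows ~num_tx, so the
    per-hop range grows ~num_tx**(1/alpha) for path-loss exponent alpha
    (a simplifying assumption, not the paper's channel model)."""
    return d_siso * num_tx ** (1.0 / alpha)

def end_to_end_success(distance, d_siso, num_tx, alpha, p_malicious):
    """Toy estimate: each hop recruits num_tx - 1 relays, each independently
    malicious with probability p_malicious; a hop fails if any relay is malicious.
    Link-level reliability itself is treated as perfect here."""
    hops = math.ceil(distance / hop_range(d_siso, num_tx, alpha))
    p_hop_ok = (1.0 - p_malicious) ** (num_tx - 1)
    return hops, p_hop_ok ** hops

print(end_to_end_success(1000.0, 100.0, 1, alpha=3.0, p_malicious=0.05))  # SISO: 10 hops
print(end_to_end_success(1000.0, 100.0, 4, alpha=3.0, p_malicious=0.05))  # 3 relays/hop: 7 hops, ~0.34
```

Even this crude model shows the tension the paper studies: more relays shorten the hop count but expose each hop to more potentially compromised nodes.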
Distributed k-core view materialization and maintenance for large dynamic graphs
In graph theory, the k-core is a key metric used to identify subgraphs of high cohesion, also known as the ‘dense’ regions of a graph. As real-world graphs such as social network graphs grow in size, their contents get richer, and their topologies change dynamically, we are challenged not only to materialize k-core subgraphs once but also to maintain them in order to keep up with continuous updates. Adding to the challenge, real-world data sets are outgrowing the capacity of a single server and its main memory. These challenges inspired us to propose a new set of distributed algorithms for k-core view construction and maintenance on a horizontally scaling storage and computing platform. Our algorithms execute against the partitioned graph data in parallel and take advantage of k-core properties to aggressively prune unnecessary computation. Experimental evaluation results demonstrated orders-of-magnitude speedups and the advantages of maintaining k-cores incrementally and in batch windows over complete reconstruction. Our algorithms thus enable practitioners to create and maintain many k-core views on different topics in rich social network content simultaneously.
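For readers unfamiliar with the metric, the sketch below shows the standard sequential "peeling" computation of a k-core, which view construction and maintenance build upon conceptually; it is the textbook single-machine baseline, not the paper's distributed or incremental algorithms.

```python
from collections import deque

def k_core(adj, k):
    """Return the vertex set of the k-core: repeatedly remove vertices of
    degree < k until every remaining vertex has >= k remaining neighbours."""
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    removed = set()
    queue = deque(v for v, d in degree.items() if d < k)
    while queue:
        v = queue.popleft()
        if v in removed:
            continue
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                degree[u] -= 1
                if degree[u] < k:
                    queue.append(u)
    return set(adj) - removed

# Toy undirected graph as an adjacency dictionary.
graph = {
    'a': {'b', 'c', 'd'}, 'b': {'a', 'c', 'd'},
    'c': {'a', 'b', 'd'}, 'd': {'a', 'b', 'c', 'e'},
    'e': {'d'},
}
print(k_core(graph, 3))  # {'a', 'b', 'c', 'd'}; 'e' is peeled away
```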
PTPN22 gene polymorphism in Takayasu's arteritis
Objective. Takayasu's arteritis (TA) is a chronic, rare granulomatous panarteritis of unknown aetiology involving mainly the aorta and its major branches. In this study, genetic susceptibility to TA was investigated by screening the functional single nucleotide polymorphism (SNP) of the PTPN22 gene, which encodes the lymphoid-specific protein tyrosine phosphatase. Methods. In total, 181 patients with TA and 177 healthy controls were genotyped by the PCR-RFLP method for the SNP rs2476601 (A/G) of the PTPN22 gene. The polymorphic region was amplified by PCR and digested with the Xcm I enzyme. Results. The frequencies of the heterozygous genotype (AG) were 5.1% (9/177) in the control group and 3.8% (7/181) in the TA group (P = 0.61, odds ratio: 0.75, 95% CI: 0.3, 2.0). No association with angiographic type, vascular involvement or prognosis of TA was observed either. Conclusion. The distribution of the PTPN22 polymorphism did not reveal any association with TA in Turkey. © The Author 2008. Published by Oxford University Press on behalf of the British Society for Rheumatology. All rights reserved.
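The reported odds ratio and confidence interval can be approximately reproduced from the genotype counts in the abstract. The short sketch below uses the standard log-odds-ratio (Woolf) approximation, which is an assumption about the calculation rather than the authors' stated method.

```python
import math

# Heterozygous (AG) carriers vs. non-carriers, from the counts in the abstract.
ta_ag, ta_total = 7, 181          # Takayasu's arteritis patients
ctrl_ag, ctrl_total = 9, 177      # healthy controls

a, b = ta_ag, ta_total - ta_ag        # cases: carriers / non-carriers
c, d = ctrl_ag, ctrl_total - ctrl_ag  # controls: carriers / non-carriers

odds_ratio = (a / b) / (c / d)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf's approximation
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.1f}, {hi:.1f})")
# OR = 0.75, 95% CI (0.3, 2.1) -- close to the reported (0.3, 2.0)
```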
The role of explicit contrast in adjective acquisition: a cross-linguistic longitudinal study of adjective production in spontaneous child speech and parental input
Experimental studies demonstrate that contrast helps toddlers to extend the meanings of novel adjectives. This study explores whether antonym co-occurrence in spontaneous speech also has an effect on adjective use by the child. The authors studied adjective production in longitudinal speech samples from 16 children (16–36 months) acquiring eight different languages. Adjectives in child speech and child-directed speech were coded as either unrelated or related to a contrastive term in the preceding context. Results show large differences between children in the growth of adjective production. These differences are strongly related to contrast use. High contrast users not only increase adjective use earlier, but also reach a stable level of adjective production in the investigated period. Average or low contrast users increase their adjective production more slowly and do not reach a plateau in the period covered by this study. Initially there is a strong relation between contrast use in child speech and child-directed speech, but this relation diminishes with age.
The potential power of suffering: Post-Traumatic growth in women following pregnancy loss
Pregnancy loss remains a stigmatised experience, with many women encountering barriers to sharing their loss, which may impede positive psychological adjustment (Freedle & Oliviera, 2021). While distress disclosure has been shown to predict Post-Traumatic Growth (PTG), the mechanisms behind this relationship, particularly the roles of deliberate and intrusive rumination, are not fully understood (Alcarez-Calle & Chaves, 2023). This study aimed to examine the connections between self-disclosure, deliberate rumination, intrusive rumination, and PTG in women after pregnancy loss, explicitly investigating whether both types of rumination mediate the link between self-disclosure and PTG. A cross-sectional, online study was conducted with women who had experienced single or multiple miscarriages at least 12 months earlier. A total of 67 participants completed the Event-Related Rumination Inventory, Distress-Disclosure Index, and Post-Traumatic Growth Inventory. Data were analysed using hierarchical regression and mediation analyses. Hierarchical regression showed that self-disclosure and deliberate rumination positively predicted PTG, whereas intrusive rumination was not a significant predictor. Furthermore, engagement in self-disclosure was linked to increased deliberate rumination and reduced intrusive rumination. Finally, the relation between self-disclosure and PTG was mediated by deliberate rumination but not by intrusive rumination. The findings suggest that disclosing emotional distress following pregnancy loss may lead to PTG through cognitive processing in which individuals articulate and elaborate on their thoughts and feelings. This study did not find evidence for the potential negative impact of intrusive rumination, as it did not predict PTG or mediate the relation between self-disclosure and PTG. This highlights the dynamic nature of cognitive processing following trauma and contributes to the literature by providing support for applying PTG theory to women who have experienced pregnancy loss.
Evaluation of Sous-Vide Technology in Gastronomy
Sous vide is a controlled cooking method in which food, on its own or together with auxiliary products (sauces, spices), is vacuum-sealed in a package, cooked at a controlled temperature (65-96 °C) for a set time, rapidly chilled after heat treatment, and stored under cold conditions (1-4 °C). The process is also referred to as lapping, vacuum cooking, cooking in a vacuum pack, or cook-chill with vacuum packaging. In products prepared with this technology, the inhibition of oxidative spoilage and aerobic bacterial growth provided by vacuum packaging is combined with the microbial protection provided by pasteurization; together with the applied cold chain, this provides a long and safe shelf life until consumption. Food can be made safe by pasteurizing at low temperatures, and even foods that are not browned or crisped can be consumed safely. In addition to all these advantages, sous vide technology also has some disadvantages. A review of the literature leads to the conclusion that research on the health-related safety of sous vide technology should be intensified.
Balancing of U-type assembly systems using simulated annealing
The paper presents a new simulated annealing (SA)-based algorithm for the assembly line-balancing problem with a U-type configuration. The proposed algorithm employs an intelligent mechanism to search a large solution space. U-type assembly systems are becoming increasingly popular in today's modern production environments since they are more general than traditional assembly systems. In these systems, tasks are allocated to stations by moving both forward and backward through the precedence diagram, in contrast to the typical forward-only moves of traditional assembly systems. The performance of the algorithm is measured by solving a large number of benchmark problems available in the literature. The results of the computational experiments indicate that the proposed SA-based algorithm performs quite effectively and yields the optimal solution for most problem instances. Future research directions and a comprehensive bibliography are also provided.
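To illustrate the kind of neighbourhood search such an algorithm relies on, the sketch below shows a generic simulated annealing loop. The objective, the neighbour move, and the geometric cooling schedule are generic placeholders; they are not the paper's specific SA design for U-type line balancing, where a candidate solution would be a precedence-feasible task-to-station assignment.

```python
import math
import random

def simulated_annealing(initial, neighbour, cost, t0=10.0, cooling=0.95, steps=2000):
    """Generic SA loop: accept worse neighbours with probability exp(-delta/T)
    and lower the temperature geometrically (placeholder schedule)."""
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)
        candidate_cost = cost(candidate)
        delta = candidate_cost - current_cost
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current, current_cost = candidate, candidate_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        t *= cooling
    return best, best_cost

# Toy use: minimise a simple 1-D function; a line-balancing objective would
# instead score a task-to-station assignment that respects precedence constraints.
cost = lambda x: (x - 3.0) ** 2 + 1.0
neighbour = lambda x: x + random.uniform(-0.5, 0.5)
print(simulated_annealing(0.0, neighbour, cost))  # approaches (3.0, 1.0)
```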
