
    Implementing MAS agreement processes based on consensus networks

    [EN] Consensus is a negotiation process in which agents must agree upon certain quantities of interest. The theoretical framework for solving consensus problems in dynamic networks of agents was formally introduced by Olfati-Saber and Murray, and is based on algebraic graph theory, matrix theory and control theory. Consensus problems are usually simulated using mathematical frameworks; however, implementing them on multi-agent system platforms is a very difficult task due to problems such as synchronization, distributed finalization and monitoring, among others. The aim of this paper is to propose a protocol for the consensus agreement process in MAS in order to check the correctness of the algorithm and validate the protocol. © Springer International Publishing Switzerland 2013.

    This work is supported by the ww and PROMETEO/2008/051 projects of the Spanish government, CONSOLIDER-INGENIO 2010 under grant CSD2007-00022, TIN2012-36586-C03-01 and PAID-06-11-2084.

    Palomares Chust, A.; Carrascosa Casamayor, C.; Rebollo Pedruelo, M.; Gómez, Y. (2013). Implementing MAS agreement processes based on consensus networks. Distributed Computing and Artificial Intelligence 217:553-560. https://doi.org/10.1007/978-3-319-00551-5_66
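The core update rule behind such consensus networks can be sketched in a few lines: each agent repeatedly nudges its value toward those of its neighbours until the network agrees on the average. The graph, step size, and initial values below are illustrative, not taken from the paper.

```python
def consensus_step(values, neighbors, epsilon=0.1):
    """One synchronous round of the discrete-time consensus protocol
    (Olfati-Saber/Murray style): x_i <- x_i + eps * sum_j (x_j - x_i)."""
    new = {}
    for i, x_i in values.items():
        new[i] = x_i + epsilon * sum(values[j] - x_i for j in neighbors[i])
    return new

# Ring of 4 agents with arbitrary initial opinions.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
values = {0: 1.0, 1: 5.0, 2: 3.0, 3: 7.0}
for _ in range(200):
    values = consensus_step(values, neighbors)
# With a small enough step size, all agents converge to the average
# of the initial values (here 4.0).
```

The step size must satisfy epsilon < 1/(max degree) for the iteration to be stable; the synchronization and termination issues the paper discusses arise precisely because real MAS platforms cannot assume this lock-step update.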

    Guest Editorial: Non-Euclidean Machine Learning

    Over the past decade, deep learning has had a revolutionary impact on a broad range of fields such as computer vision and image processing, computational photography, medical imaging, and speech and language analysis and synthesis. Deep learning technologies are estimated to have added billions in business value, created new markets, and transformed entire industrial segments. Most of today's successful deep learning methods, such as Convolutional Neural Networks (CNNs), rely on classical signal processing models that limit their applicability to data with an underlying Euclidean grid-like structure, e.g., images or acoustic signals. Yet many applications deal with non-Euclidean (graph- or manifold-structured) data. For example, in social network analysis, users and their attributes are generally modeled as signals on the vertices of graphs. In biology, protein-to-protein interactions are modeled as graphs. In computer vision and graphics, 3D objects are modeled as meshes or point clouds. Furthermore, a graph representation is a very natural way to describe interactions between objects or signals. The classical deep learning paradigm on Euclidean domains falls short in providing appropriate tools for such kinds of data. Until recently, the lack of deep learning models capable of correctly dealing with non-Euclidean data has been a major obstacle in these fields. This special section addresses the need to bring together leading efforts in non-Euclidean deep learning across all communities. From the papers that the special section received, twelve were selected for publication. The selected papers fall naturally into three distinct categories: (a) methodologies that advance machine learning on data represented as graphs, (b) methodologies that advance machine learning on manifold-valued data, and (c) applications of machine learning methodologies on non-Euclidean spaces in computer vision and medical imaging. We briefly review the accepted papers in each of these groups.
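As a concrete illustration of category (a), a single graph-convolution layer in the spirit popularized by Kipf and Welling can be written in a few lines. The symmetric normalization is one common choice among several; the toy graph and random weights below are placeholders, not from any paper in the section.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^{-1/2} (A+I) D^{-1/2} H W).
    A: adjacency matrix, H: node features, W: (learnable) weight matrix."""
    A_hat = A + np.eye(A.shape[0])       # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)      # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy path graph: 3 nodes, 2 input features, 2 output features.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.random.rand(3, 2)    # node feature matrix
W = np.random.rand(2, 2)    # random placeholder weights
out = gcn_layer(A, H, W)    # shape (3, 2): one feature row per node
```

Each output row mixes a node's features with its neighbours', which is exactly what a grid-based CNN cannot express for irregular graphs.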

    Dynamic Key-Value Memory Networks for Knowledge Tracing

    Knowledge Tracing (KT) is the task of tracing the evolving knowledge state of students with respect to one or more concepts as they engage in a sequence of learning activities. One important purpose of KT is to personalize the practice sequence to help students learn knowledge concepts efficiently. However, existing methods such as Bayesian Knowledge Tracing and Deep Knowledge Tracing either model the knowledge state for each predefined concept separately or fail to pinpoint exactly which concepts a student is good at or unfamiliar with. To solve these problems, this work introduces a new model called Dynamic Key-Value Memory Networks (DKVMN) that can exploit the relationships between underlying concepts and directly output a student's mastery level of each concept. Unlike standard memory-augmented neural networks that use a single memory matrix or two static memory matrices, our model has one static matrix called the key, which stores the knowledge concepts, and one dynamic matrix called the value, which stores and updates the mastery levels of the corresponding concepts. Experiments show that our model consistently outperforms the state-of-the-art model on a range of KT datasets. Moreover, the DKVMN model can automatically discover underlying concepts of exercises, a task typically performed by human annotators, and depict the changing knowledge state of a student.

    Comment: To appear in the 26th International Conference on World Wide Web (WWW), 2017.
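The read/write mechanics of a key-value memory of this kind can be sketched as follows. The dimensions, random embeddings, and gating functions below are illustrative stand-ins, not the paper's exact architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Static key matrix (one row per latent concept) and dynamic value matrix
# (per-concept mastery state). Sizes are illustrative.
N, d = 5, 4                      # number of concepts, embedding size
key = np.random.randn(N, d)      # fixed: concept embeddings
value = np.zeros((N, d))         # updated as the student answers exercises

def read(q_embed):
    """Attend over keys, read a weighted sum of mastery states."""
    w = softmax(key @ q_embed)   # correlation weight per concept
    return w, w @ value

def write(w, v_embed):
    """Erase-then-add update of the value matrix, as in memory networks."""
    erase = 1 / (1 + np.exp(-v_embed))   # sigmoid erase gate in [0, 1]
    add = np.tanh(v_embed)               # add vector
    for i in range(N):
        value[i] = value[i] * (1 - w[i] * erase) + w[i] * add

q = np.random.randn(d)           # embedding of the attempted exercise
w, r = read(q)                   # r feeds a predictor of the response
write(w, np.random.randn(d))     # update mastery after seeing the answer
```

The attention weights `w` are what lets the model attribute one exercise to several underlying concepts, and the rows of `value` are what lets it report a per-concept mastery level.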

    Weighing Counts: Sequential Crowd Counting by Reinforcement Learning

    We formulate counting as a sequential decision problem and present a novel crowd counting model solvable by deep reinforcement learning. In contrast to existing counting models that directly output count values, we divide one-step estimation into a sequence of much easier and more tractable sub-decision problems. This sequential decision nature corresponds exactly to a physical process in reality: scale weighing. Inspired by scale weighing, we propose a novel 'counting scale' termed LibraNet, where the count value is analogized by weight. By virtually placing a crowd image on one side of a scale, LibraNet (the agent) sequentially learns to place appropriate weights on the other side to match the crowd count. At each step, LibraNet chooses one weight (action) from the weight box (the pre-defined action pool) according to the current crowd image features and the weights placed on the scale pan (state). LibraNet is required to learn to balance the scale according to the feedback of the needle (Q values). We show that LibraNet exactly implements scale weighing by visualizing the decision process of how LibraNet chooses actions. Extensive experiments demonstrate the effectiveness of our design choices and report state-of-the-art results on several crowd counting benchmarks. We also demonstrate good cross-dataset generalization of LibraNet. Code and models are made available at: https://git.io/libranet

    Comment: Accepted to Proc. Eur. Conf. Computer Vision (ECCV) 2020.
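The weighing-as-sequential-decision idea can be illustrated with a tabular Q-learning toy in which the 'image' is reduced to a numeric target count. The weight box, reward scheme, and training loop below are simplified stand-ins for LibraNet's deep-RL design, not its actual implementation.

```python
import random

random.seed(0)  # reproducible toy run

WEIGHTS = [1, 2, 5, 10]   # the 'weight box' (pre-defined action pool)
TARGET_MAX = 30

def train(episodes=10000, alpha=0.5, gamma=0.9, eps=0.1):
    Q = {}  # (remaining deficit, action index) -> estimated return
    for _ in range(episodes):
        target = random.randint(1, TARGET_MAX)   # stand-in for crowd count
        placed = 0
        while placed < target:
            s = target - placed  # state: how far the scale is from balance
            if random.random() < eps:
                a = random.randrange(len(WEIGHTS))   # explore the box
            else:
                a = max(range(len(WEIGHTS)), key=lambda i: Q.get((s, i), 0.0))
            placed += WEIGHTS[a]
            # Reward: +1 for balancing exactly, -1 for overshooting.
            if placed == target:
                r, done = 1.0, True
            elif placed > target:
                r, done = -1.0, True
            else:
                r, done = 0.0, False
            nxt = 0.0 if done else max(
                Q.get((target - placed, i), 0.0) for i in range(len(WEIGHTS)))
            old = Q.get((s, a), 0.0)
            Q[(s, a)] = old + alpha * (r + gamma * nxt - old)  # Q-learning
    return Q

Q = train()

def greedy_weight(deficit):
    """Weight a trained agent places when the scale is off by `deficit`."""
    return WEIGHTS[max(range(len(WEIGHTS)),
                       key=lambda i: Q.get((deficit, i), 0.0))]
```

After training, the greedy policy places the largest weight that does not overshoot, which is the discrete analogue of the balancing behaviour the paper visualizes.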

    Reversible modulation of circadian time with chronophotopharmacology

    The circadian clock controls daily rhythms of physiological processes. The presence of the clock mechanism throughout the body hampers its local regulation by small molecules. A photoresponsive clock modulator would enable precise and reversible regulation of circadian rhythms using light as a bio-orthogonal external stimulus. Here we show, through judicious molecular design and state-of-the-art photopharmacological tools, the development of a visible-light-responsive inhibitor of casein kinase I (CKI) that controls the period and phase of cellular and tissue circadian rhythms in a reversible manner. The dark isomer of photoswitchable inhibitor 9 exhibits almost identical affinity towards the CKIα and CKIδ isoforms, while upon irradiation it becomes more selective towards CKIδ, revealing the greater importance of CKIδ in period regulation. Our studies enable long-term regulation of CKI activity in cells for multiple days and show the reversible modulation of circadian rhythms with period and phase changes of several hours through chronophotopharmacology.

    Athletes' exposure to air pollution during World Athletics Relays: A pilot study

    Potential adverse consequences of exposure to air pollutants during exercise include decreased lung function and exacerbation of asthma and exercise-induced bronchoconstriction. These effects are especially relevant for athletes and during international competitions, as they may impact athletic performance. Thus, assessing and mitigating exposure to air pollutants during exercise should be encouraged in sports venues. A comprehensive air quality assessment was carried out during the World Relays Yokohama 2019, in the stadium and the warm-up track. The pilot included on-line and off-line instrumentation for gaseous and particulate pollutants and meteorological parameters, and a comparison with local reference data. Air quality perception and exacerbation of symptoms of already-diagnosed diseases (mainly respiratory and cardiovascular) were assessed by athletes by means of questionnaires during training sessions. Median NO2 concentrations inside the stadium (25.6–31.9 μg m−3) were in the range of the Yokohama urban background, evidencing the impact of urban sources (e.g., traffic) on athletes' exposure during training and competition. The assessment of hourly air pollutant trends was identified as a valuable tool to provide guidance to reduce athletes' exposure, by identifying the periods of the day with the lowest ambient concentrations. This strategy could be adopted to define training and competition schedules, and would have special added value for athletes with respiratory conditions. Personal exposure to polycyclic aromatic hydrocarbons was quantified through wearable silicone wristbands, and showed high variability across volunteers. The wristbands are a simple approach to assess personal exposure to potentially toxic organic compounds. Further research would be necessary with regard to specific air pollutants that may trigger or exacerbate respiratory conditions typical of the athlete community. The availability of high time-resolution exposure data in the stadiums opens up the possibility of calculating doses of specific pollutants for individual athletes in future athletics events, to understand the impact of environmental factors on athletic performance.

    Keywords: Inhalation; Track and field; Respiratory diseases; World Athletics

    1. Introduction

    Evidence supports adverse effects from short-term and long-term inhalation of air pollution on the respiratory and cardiovascular systems (Brook et al., 2002; Pietropaoli et al., 2004; Gauderman et al., 2007; de Prado Bert et al., 2018). Health impacts have been assessed for general and high-risk populations, and even for general populations performing physical activities such as walking or cycling while commuting (de Nazelle et al., 2012; Hofman et al., 2018; Luengo-Oroz and Reis, 2019; Qiu et al., 2019; Quiros et al., 2013; Rivas et al., 2014). However, research is scarce on the effects of ambient air pollution on exercising athletes and their athletic performance; athletes may have greater than average susceptibility and exposure to air pollutants because of the physiological changes that occur during prolonged exercise (Quin et al., 2019). Specifically, there are three reasons why athletes are at higher risk from air pollution (McCafferty, 1981): (1) increased ventilation during exercise; (2) a greater fraction of air is inhaled through the mouth during exercise, effectively bypassing the normal nasal filtration mechanisms; and (3) the increased airflow velocity carries pollutants deeper into the respiratory tract. Furthermore, pulmonary diffusion capacity increases with exercise (Turcotte et al., 1997; Stokes et al., 1981; Fisher and Cerny, 1982; Flaherty et al., 2013), increasing gaseous pollutant intake. Nasal mucociliary clearance, impaired in long-distance runners, may also contribute to the higher susceptibility of endurance athletes, given that pollutants which are normally cleared from the respiratory system are instead absorbed (Atkinson, 1987). Even though research is scarce, studies on the relationship between air quality, athletic performance, and respiratory symptoms encourage further investigation. Lichter et al. (2015) assessed the effects of particulate air pollution on soccer players in German stadiums, revealing that performance was reduced under poor air quality conditions. Bos et al. (2011) and Quin et al. (2019) observed that the health benefits of active commuting could be negatively influenced by exercising in polluted environments, while Rundell and Caviston (2008) reported that the acute inhalation of PM1 at concentrations in the range of many urban environments could impair exercise performance. Carlisle and Sharp (2001) and Cakmak et al. (2011) concluded that O3 was particularly damaging to athletes, with subjects achieving a lower aerobic fitness score on high ozone days. Finally, long-term exposure to outdoor air pollution may trigger intermittent endogenous airway acidification episodes indicative of pollution-related lung inflammation (Ferdinands et al., 2008). These results have particularly relevant implications for top-level athletes participating in international competitions: the performance of athletes training in highly polluted environments may be impaired compared to athletes training in cleaner environments and, similarly, athletes used to training in cleaner environments may be adversely affected when competing in highly polluted locations. Thus, assessing exposure to air pollution in athletics venues becomes a necessity when aiming to understand the environmental drivers of both athletic performance and athletes' health. In this framework, the aim of this study was to characterize air pollutant concentrations in the Yokohama stadium (in the competition and the training area) during the Yokohama 2019 World Relays.

    Detection of antibodies to denatured human leucocyte antigen molecules by single antigen Luminex

    Anti-HLA antibody detection has improved in sensitivity and specificity with solid-phase antigen bead (SAB) assays based on Luminex. However, false positive results due to denatured HLA (dHLA) may arise after a single antigen test. The aim of this study was to compare the performance of the two Luminex technology-based anti-HLA detection kits available on the market in showing undesired anti-HLA antibody results. A prospective cohort was assessed for anti-HLA antibodies with the single antigen A manufacturer (AM) kit, and a comparison cohort with the single antigen B manufacturer (BM) kit. A total of 11 out of 90 patients in the prospective cohort presented monospecific HLA-I antibodies with AM, and 5 out of 11 confirmed a monospecific reaction with BM. Despite the confirmation of a monospecific reaction with both manufacturers, 80% were assigned as dHLA reactions by specific crossmatch. Further comparative cohorts detected four out of six monospecific reactions with BM that were confirmed as possible dHLA reactions. A positive SAB test should rule out a reaction against a dHLA molecule, thus avoiding prolonged waitlist periods or misattribution of anti-HLA reactions after transplantation.

    Prognostic value of replication errors on chromosomes 2p and 3p in non-small-cell lung cancer

    As chromosomes 2p and 3p are frequent targets for genomic instability in lung cancer, we have addressed whether alterations of simple (CA)n DNA repeats occur in non-small-cell lung cancer (NSCLC) at early stages. Using a polymerase chain reaction (PCR) assay, we analysed replication errors (RER) and loss of heterozygosity (LOH) at microsatellites mapped on chromosomes 2p and 3p in 64 paired tumour-normal DNA samples from consecutively resected stage I, II or IIIA NSCLC. DNA samples were also examined for K-ras and p53 gene mutations by PCR-single-stranded conformational polymorphism (PCR-SSCP) analysis and cyclic sequencing, and their relationship with clinical outcome was assessed. Forty-two of the 64 (66%) NSCLC patients showed RER at single or multiple loci. LOH was detected in 23 tumours (36%). Among patients with stage I disease, the 5-year survival rate was 80% in those whose tumours had no evidence of RER and 26% in those with RER (P = 0.005). No correlation was established between RER phenotype and LOH, K-ras or p53 mutations. RER remained a strong predictive factor (hazard ratio for death, 2.89; 95% confidence interval, 2.23-3.79; P = 0.002) after adjustment for all other evaluated factors, including p53, K-ras, LOH, histological type, tumour differentiation and TNM stage, suggesting that microsatellite instability on chromosomes 2p and 3p may play a role in NSCLC progression through a different pathway from the traditional tumour mechanisms of oncogene activation and/or tumour-suppressor gene inactivation.

    Two-tier charging in Maputo Central Hospital: Costs, revenues and effects on equity of access to hospital services

    <p>Abstract</p> <p>Background</p> <p>Special services within public hospitals are becoming increasingly common in low- and middle-income countries, with the stated objective of providing higher-comfort services to affluent customers and generating resources for underfunded hospitals. In the present study, expenditures, outputs and costs are analysed for the Maputo Central Hospital and its Special Clinic, with the objective of identifying net resource flows within a system operating two-tier charging and, ultimately, understanding whether public hospitals can somehow benefit from running Special Clinic operations.</p> <p>Methods</p> <p>A combination of step-down and bottom-up costing strategies was used to calculate recurrent as well as capital expenses, apportion them to identified cost centres and link costs to selected output measures.</p> <p>Results</p> <p>The results show that cost differences between the main hospital and the clinic are marked and significant, with the Special Clinic's cost per patient and cost per outpatient visit respectively over four times and over thirteen times their equivalents in the main hospital.</p> <p>Discussion</p> <p>While the main hospital's cost structure appeared in line with those from similar studies, salary expenditures were found to drive costs in the Special Clinic (73% of total), where capital and drug costs were surprisingly low (2% and 4% respectively). We attribute the low capital and drug costs to underestimation by our study, owing to difficulties in attributing the use of shared resources and to the Special Clinic's outsourcing policy. The large staff expenditure would be explained by higher physician time commitment, economic rents and subsidies to hospital staff. 
On the whole it was observed that: (a) the flow of capital and human resources was not fully captured by the financial systems in place and remained largely unaccounted for; (b) because of the little consideration given to capital costs, the main hospital is more likely to be subsidising its Special Clinic operations, rather than the other way around.</p> <p>Conclusion</p> <p>We conclude that the observed lack of transparency may create scope for an inequitable cross-subsidy of private customers by public resources.</p>
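The step-down costing strategy named in the Methods can be sketched as a simple sequential allocation: each overhead department's cost is distributed, in order, to the remaining centres in proportion to an allocation basis such as floor area or staff numbers. The departments, costs, and bases below are invented for illustration and are not the hospital's actual figures.

```python
def step_down(overhead, final, basis):
    """Allocate each overhead department's cost, in order, to all
    later overhead departments and final cost centres, in proportion
    to an allocation basis (e.g. floor area, staff headcount)."""
    overhead = dict(overhead)   # allocated and emptied in order
    final = dict(final)         # accumulates fully loaded costs
    names = list(overhead)
    for i, dept in enumerate(names):
        cost = overhead[dept]
        receivers = names[i + 1:] + list(final)
        total_basis = sum(basis[r] for r in receivers)
        for r in receivers:
            share = cost * basis[r] / total_basis
            if r in overhead:
                overhead[r] += share   # cascades to later departments
            else:
                final[r] += share
    return final

# Hypothetical figures (in meticais or any currency unit):
overhead = {"administration": 100_000.0, "laundry": 40_000.0}
final = {"main_hospital": 500_000.0, "special_clinic": 60_000.0}
basis = {"laundry": 10, "main_hospital": 80, "special_clinic": 10}
full_costs = step_down(overhead, final, basis)
# Total cost is conserved: the 140,000 of overhead ends up split
# between the two final centres.
```

The ordering matters: departments allocated earlier never receive costs back, which is the simplification that distinguishes step-down from reciprocal allocation and is one reason shared-resource flows can stay partly unaccounted for.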

    Risk model for prostate cancer using environmental and genetic factors in the spanish multi-case-control (MCC) study

    Prostate cancer (PCa) is the second most common cancer among men worldwide. Its etiology remains largely unknown compared to other common cancers. We have developed a risk stratification model combining environmental factors with family history and genetic susceptibility. A total of 818 PCa cases and 1,006 healthy controls were compared. Subjects were interviewed on major lifestyle factors and family history. Fifty-six PCa susceptibility SNPs were genotyped. Risk models based on logistic regression were developed to combine environmental factors, family history and a genetic risk score. In the full model, compared with subjects at low risk (reference category, decile 1), those at intermediate risk (decile 5) had a 265% increase in PCa risk (OR = 3.65, 95% CI 2.26 to 5.91). The genetic risk score had an area under the ROC curve (AUROC) of 0.66 (95% CI 0.63 to 0.68). When the environmental score and family history were added to the genetic risk score, the AUROC increased by 0.05, reaching 0.71 (95% CI 0.69 to 0.74). Genetic susceptibility has a stronger predictive value than the modifiable risk factors. While the added value of each SNP is small, the combination of 56 SNPs adds to the predictive ability of the risk model.
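The general shape of such a model, a weighted genetic risk score evaluated by AUROC, can be sketched on simulated data. The per-SNP weights, genotype frequencies, and cohort below are invented for illustration; they are not the MCC study's data or estimates.

```python
import random

random.seed(1)
N_SNPS = 56
weights = [random.uniform(0.02, 0.15) for _ in range(N_SNPS)]  # toy log-ORs

def genetic_risk_score(genotypes):
    """Sum of risk-allele counts (0/1/2 per SNP) weighted by log odds ratios."""
    return sum(w * g for w, g in zip(weights, genotypes))

def auroc(scores_cases, scores_controls):
    """Probability that a random case outscores a random control
    (the Mann-Whitney formulation of the AUROC)."""
    wins = sum((c > k) + 0.5 * (c == k)
               for c in scores_cases for k in scores_controls)
    return wins / (len(scores_cases) * len(scores_controls))

# Simulate a cohort where cases carry slightly more risk alleles on average.
cases = [genetic_risk_score(
            [random.choices([0, 1, 2], [0.25, 0.5, 0.25])[0]
             for _ in range(N_SNPS)]) for _ in range(300)]
controls = [genetic_risk_score(
            [random.choices([0, 1, 2], [0.4, 0.45, 0.15])[0]
             for _ in range(N_SNPS)]) for _ in range(300)]
a = auroc(cases, controls)
# a is well above 0.5 here because the simulated allele-frequency gap is
# large; each SNP alone would move the AUROC only slightly, mirroring the
# paper's point that the value comes from combining all 56.
```

In the study itself the score is embedded in a logistic regression together with an environmental score and family history; the sketch isolates only the scoring-and-discrimination step.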