
    Graph-homomorphic perturbations for private decentralized learning

    Decentralized algorithms for stochastic optimization and learning rely on the diffusion of information through repeated local exchanges of intermediate estimates. Such structures are particularly appealing in situations where agents may be hesitant to share raw data due to privacy concerns. Nevertheless, in the absence of additional privacy-preserving mechanisms, the exchange of local estimates, which are generated from private data, can allow for the inference of the data itself. The most common mechanism for guaranteeing privacy is the addition of perturbations to local estimates before broadcasting. These perturbations are generally chosen independently at every agent, resulting in a significant performance loss. We propose an alternative scheme, which constructs perturbations according to a particular nullspace condition, allowing them to be invisible (to first order in the step-size) to the network centroid, while preserving privacy guarantees. The analysis allows for general nonconvex loss functions, and is hence applicable to a large number of machine learning and signal processing problems, including deep learning.
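
    As a rough illustration of the nullspace condition (a sketch, not the paper's exact graph-homomorphic construction), the snippet below draws independent perturbations and then projects them so that their Perron-weighted average vanishes, making them invisible to the network centroid; the uniform weights and dimensions are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 10, 5  # assumed: 10 agents exchanging 5-dimensional estimates

# Perron (left-eigenvector) weights of the combination matrix; a doubly
# stochastic matrix gives uniform weights, assumed here for simplicity.
p = np.full(K, 1.0 / K)

# Independent Gaussian perturbations, one row per agent.
noise = rng.normal(size=(K, d))

# Enforce the nullspace condition: the p-weighted sum of the
# perturbations must vanish, so they cancel at the network centroid.
noise -= np.outer(np.ones(K), p @ noise)

assert np.allclose(p @ noise, 0.0)  # the centroid sees no perturbation
```

    Each agent would add its row of the projected noise to its estimate before broadcasting; correlating the perturbations across agents in this way is what avoids the performance loss incurred by independent noise.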

    Regularized diffusion adaptation via conjugate smoothing

    The purpose of this work is to develop and study a decentralized strategy for Pareto optimization of an aggregate cost consisting of regularized risks. Each risk is modeled as the expectation of some loss function with unknown probability distribution, while the regularizers are assumed deterministic, but are not required to be differentiable or even continuous. The individual regularized cost functions are distributed across a strongly-connected network of agents, and the Pareto optimal solution is sought by appealing to a multi-agent diffusion strategy. To this end, the regularizers are smoothed by means of infimal convolution, and it is shown that the Pareto solution of the approximate, smooth problem can be made arbitrarily close to the solution of the original, non-smooth problem. Performance bounds are established under conditions that are weaker than previously assumed in the literature, and hence applicable to a broader class of adaptation and learning problems.
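
    To make the smoothing step concrete: the infimal convolution of a non-smooth regularizer with a scaled quadratic is its Moreau envelope. A minimal sketch for the ℓ1 regularizer (my choice for illustration; the abstract does not fix a particular regularizer) shows the approximation error shrinking with the smoothing parameter, mirroring the "arbitrarily close" claim.

```python
import numpy as np

def moreau_abs(x, delta):
    """Infimal convolution of |x| with x**2 / (2*delta), i.e. the
    Moreau envelope of the l1 regularizer (the Huber function)."""
    return np.where(np.abs(x) <= delta,
                    x**2 / (2 * delta),
                    np.abs(x) - delta / 2)

x = np.linspace(-2.0, 2.0, 1001)
for delta in (1.0, 0.1, 0.01):
    # Worst-case gap to |x| is delta/2, so it vanishes as delta -> 0.
    print(delta, np.max(np.abs(moreau_abs(x, delta) - np.abs(x))))
```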

    Network classifiers based on social learning

    This work proposes a new way of combining independently trained classifiers over space and time. Combination over space means that the outputs of spatially distributed classifiers are aggregated. Combination over time means that the classifiers respond to streaming data during testing and continue to improve their performance even during this phase. By doing so, the proposed architecture is able to improve prediction performance over time with unlabeled data. Inspired by social learning algorithms, which require prior knowledge of the distribution of the observations, we propose a Social Machine Learning (SML) paradigm that is able to exploit the imperfect models generated during the learning phase. We show that this strategy results in consistent learning with high probability, and that it yields a structure that is robust to poorly trained classifiers. Simulations with an ensemble of feedforward neural networks are provided to illustrate the theoretical results.
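
    A minimal sketch of the flavor of update the SML paradigm builds on, assuming a uniform doubly stochastic combination matrix and synthetic scores standing in for the trained classifiers' outputs: each agent discounts its past belief and folds in fresh evidence (combination over time), then geometrically averages with its neighbors (combination over space).

```python
import numpy as np

rng = np.random.default_rng(1)
K, H = 5, 2                    # assumed: 5 classifiers, 2 hypotheses
A = np.full((K, K), 1.0 / K)   # doubly stochastic combination matrix
delta = 0.1                    # adaptation step for streaming data

belief = np.full((K, H), 1.0 / H)
for _ in range(100):
    # scores[k, h]: classifier k's (imperfect) likelihood for its fresh
    # sample under hypothesis h; random stand-ins for model outputs.
    scores = rng.dirichlet(np.ones(H), size=K) + 1e-12

    # Combination over time: discounted local Bayesian-style update.
    log_psi = (1 - delta) * np.log(belief) + delta * np.log(scores)
    # Combination over space: geometric averaging over neighbors.
    belief = np.exp(A.T @ log_psi)
    belief /= belief.sum(axis=1, keepdims=True)

print(belief.round(3))
```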

    Social learning under inferential attacks

    A common assumption in the social learning literature is that agents exchange information in an unselfish manner. In this work, we consider the scenario where a subset of agents aims at driving the network beliefs to the wrong hypothesis. The adversaries are unaware of the true hypothesis. However, they will "blend in" by behaving similarly to the other agents and will manipulate the likelihood functions used in the belief update process to launch inferential attacks. We characterize the conditions under which the network is misled, and then show that such attacks can succeed by exhibiting strategies that the malicious agents can adopt for this purpose. We examine both the situation in which the agents have minimal information about the network model and the one in which they have none.
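
    One concrete strategy consistent with this description (an illustration, not the paper's specific construction): a malicious agent swaps its likelihoods across the two hypotheses, so that repeated Bayesian-style updates on truthful data drive the belief toward the wrong hypothesis while every reported likelihood still looks well formed.

```python
import numpy as np

rng = np.random.default_rng(2)
# lik[obs, h]: likelihood of observation obs under hypothesis h;
# hypothesis 0 is the true one (values assumed for the example).
lik = np.array([[0.7, 0.3],
                [0.3, 0.7]])

def reported(obs, malicious):
    """Malicious agents reverse the likelihood vector, "blending in"
    while steering the update toward the wrong hypothesis."""
    row = lik[obs]
    return row[::-1] if malicious else row

belief = np.array([0.5, 0.5])
for _ in range(50):
    obs = rng.choice(2, p=lik[:, 0])   # data from the true hypothesis
    belief = belief * reported(obs, malicious=True)
    belief /= belief.sum()

print(belief.round(4))  # mass concentrates on the wrong hypothesis
```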

    Contribution of microscopy for understanding the mechanism of action against trypanosomatids

    Transmission electron microscopy (TEM) has proved to be a useful tool to study the ultrastructural alterations and the target organelles of new antitrypanosomatid drugs. It has been observed that sesquiterpene lactones induce diverse ultrastructural alterations in both T. cruzi and Leishmania spp., such as cytoplasmic vacuolization, appearance of multilamellar structures, condensation of nuclear DNA, and, in some cases, an important accumulation of lipid vacuoles. This accumulation could be related to apoptotic events. Some of the sesquiterpene lactones (e.g., psilostachyin) have also been demonstrated to cause an intense mitochondrial swelling accompanied by a visible kinetoplast deformation as well as the appearance of multivesicular bodies. This mitochondrial swelling could be related to the generation of oxidative stress and associated with alterations in ergosterol metabolism. The appearance of multilamellar structures and multiple kinetoplasts and flagella induced by the sesquiterpene lactone psilostachyin C indicates that this compound may act at the level of the parasite cell cycle, at an intermediate stage between kinetoplast segregation and nuclear division. In turn, the diterpene lactone icetexane has proved to induce external membrane budding in T. cruzi together with an apparent disorganization of the pericellular cytoskeleton. Thus, ultrastructural TEM studies allow the possible mechanisms of action of natural compounds on trypanosomatids to be elucidated and their molecular targets to be subsequently identified.

    Molecular identification of adenoviruses associated with respiratory infection in Egypt from 2003 to 2010.

    BACKGROUND: Human adenoviruses of species B, C, and E (HAdV-B, -C, -E) are frequent causative agents of acute respiratory infections worldwide. As part of a surveillance program aimed at identifying the etiology of influenza-like illness (ILI) in Egypt, we characterized 105 adenovirus isolates from clinical samples collected between 2003 and 2010. METHODS: Identification of the isolates as HAdV was accomplished by an immunofluorescence assay (IFA) and confirmed by a set of species- and type-specific polymerase chain reactions (PCR). RESULTS: Of the 105 isolates, 42% were identified as belonging to HAdV-B, 60% as HAdV-C, and 1% as HAdV-E. We identified a total of six co-infections by PCR, of which five were HAdV-B/HAdV-C co-infections, and one was a co-infection of two HAdV-C types: HAdV-5/HAdV-6. Molecular typing by PCR enabled the identification of eight genotypes of human adenoviruses: HAdV-3 (n = 22), HAdV-7 (n = 14), HAdV-11 (n = 8), HAdV-1 (n = 22), HAdV-2 (n = 20), HAdV-5 (n = 15), HAdV-6 (n = 3), and HAdV-4 (n = 1). The most abundant species in the characterized collection of isolates was HAdV-C, which is concordant with existing data on the worldwide epidemiology of HAdV respiratory infections. CONCLUSIONS: We identified three species, HAdV-B, -C, and -E, among patients with ILI over the course of 7 years in Egypt, with at least eight distinct types circulating.

    Real-Time Stereopsis

    Computerized development has always been at the center of attention. Computers produce more accurate results in shorter time periods, which makes humans eager to give computers more responsibility and tasks. Computer vision is one of these tasks, and for decades much time and effort has been spent on the road to perfecting it. One of the areas in computer vision is stereo vision, or stereopsis. This area started in the early 1970s and is still one of the most challenging parts of computer vision. People try to make computers see as humans see. Many algorithms have been developed and invented by scientists, but there is a long way to go. The main purpose of this report is to show how it is possible for a computer to calculate depth from 2D images and then, based on certain algorithms, to construct a 3D result. But why make all these efforts, and why is it so important to make computers see as humans see? One of the strongest impacts of this process is on 3D animation development: generating 3D libraries and assisting game developers, or graphics developers generally. Imagine that developing a one-hour movie takes 60 computers working for one year, and consider how much faster it would be to have all the models available in a very short time. Another application of this system is in virtual reality systems. Besides 3D modeling, which requires real-time rendering, such systems also require a certain amount of tracking for user interaction. This is normally handled by sensors, which are limited in number, make users uncomfortable, and make it harder for the virtual environment to feel realistic to users.
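
    For the depth-from-2D-images step the report describes, the classical pipeline is block matching to recover disparities, followed by triangulation. A minimal sketch under simple assumptions (rectified grayscale images as NumPy arrays), not the report's specific algorithm:

```python
import numpy as np

def disparity_sad(left, right, max_disp=16, win=5):
    """Naive block-matching stereo: for each pixel, find the horizontal
    shift minimizing the sum of absolute differences (SAD) between
    windows of the rectified left and right images."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1]
            costs = [np.abs(patch - right[y-half:y+half+1,
                                          x-d-half:x-d+half+1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = np.argmin(costs)
    return disp

# Depth then follows by triangulation: depth = f * B / disparity,
# where f is the focal length and B the baseline between the cameras.
```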

    A practical approach for outdoors distributed target localization in wireless sensor networks

    Wireless sensor networks are posed as the new communication paradigm, where the use of small, low-complexity, and low-power devices is preferred over costly centralized systems. The spectrum of potential applications of sensor networks is very wide, including monitoring, surveillance, and localization, among others. Localization is a key application in sensor networks, and the use of simple, efficient, and distributed algorithms is of paramount practical importance. Combining convex optimization tools with consensus algorithms, we propose a distributed localization algorithm for scenarios where received signal strength indicator (RSSI) readings are used. We approach the localization problem by formulating an alternative problem that uses distance estimates locally computed at each node. The formulated problem is solved in a relaxed version using a semidefinite relaxation technique. Conditions under which the relaxed problem yields the same solution as the original problem are given, and a distributed consensus-based implementation of the algorithm is proposed based on an augmented Lagrangian approach and primal-dual decomposition methods. Although suboptimal, the proposed approach is well suited for implementation in real sensor networks: it is scalable, robust against node failures, and requires only local communication among neighboring nodes. Simulation results show that running an additional local search around the found solution can yield performance close to the maximum likelihood estimate.
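
    A sketch of the RSSI-to-distance step, followed by a simple centralized least-squares solve standing in for the paper's distributed semidefinite relaxation (the path-loss constants, anchor positions, and readings are assumptions made for the example):

```python
import numpy as np

def rssi_to_distance(rssi, rssi_0=-40.0, n=2.5):
    """Log-distance path-loss model, rssi = rssi_0 - 10*n*log10(d),
    inverted to estimate distance; rssi_0 and n are environment-
    dependent calibration constants (assumed values)."""
    return 10.0 ** ((rssi_0 - rssi) / (10.0 * n))

# Anchor nodes at known positions and their distance estimates to the
# target (values consistent with a target near the square's center).
anchors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
dists = np.array([7.1, 7.0, 7.2, 7.1])

# Linearize ||x - a_i||^2 = d_i^2 against the last anchor and solve
# the resulting overdetermined linear system in least squares.
A = 2.0 * (anchors[:-1] - anchors[-1])
b = (dists[-1]**2 - dists[:-1]**2
     + np.sum(anchors[:-1]**2, axis=1) - np.sum(anchors[-1]**2))
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_hat)  # estimated target position, roughly [5, 5]
```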

    Video streaming in urban vehicular environments: Junction-aware multipath approach

    In multipath video streaming transmission, selecting the best vehicle for video packet forwarding in the junction area is a challenging task because of the several diversions there: vehicles in the junction area change direction onto different diversions, which leads to video packet drops. Existing works do not explicitly consider the different positions within junction areas when selecting forwarding vehicles. To address these challenges, a Junction-Aware vehicle selection for Multipath Video Streaming (JA-MVS) scheme is proposed. The JA-MVS scheme considers three different cases in the junction area, namely vehicles after the junction, before the junction, and inside the junction area, and evaluates vehicle signal strength via the signal to interference plus noise ratio (SINR), building on the multipath data forwarding concept with greedy-based geographic routing. The performance of the proposed scheme is evaluated using the Packet Loss Ratio (PLR), Structural Similarity Index (SSIM), and End-to-End Delay (E2ED) metrics. JA-MVS is compared against two baseline schemes, Junction-Based Multipath Source Routing (JMSR) and Adaptive Multipath geographic routing for Video Transmission (AMVT), in urban Vehicular Ad-Hoc Networks (VANETs).
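
    A toy sketch of SINR-based forwarder selection near a junction (the zoning rule, powers, and noise figure are illustrative assumptions; JA-MVS's actual per-case handling is more detailed):

```python
import math

def sinr_db(p_rx, interference, noise=1e-13):
    """Signal-to-interference-plus-noise ratio in dB (powers in W)."""
    return 10.0 * math.log10(p_rx / (sum(interference) + noise))

# Candidate forwarders around a junction; "zone" mirrors the scheme's
# three cases: before, inside, or after the junction.
candidates = [
    {"id": 1, "zone": "before", "p_rx": 2e-9, "interf": [1e-10]},
    {"id": 2, "zone": "inside", "p_rx": 5e-9, "interf": [4e-10]},
    {"id": 3, "zone": "after",  "p_rx": 3e-9, "interf": [5e-11]},
]

# Vehicles inside the junction may still turn onto a diversion and drop
# packets, so deprioritize them; among the rest pick the best SINR.
eligible = [c for c in candidates if c["zone"] != "inside"] or candidates
best = max(eligible, key=lambda c: sinr_db(c["p_rx"], c["interf"]))
print("forward via vehicle", best["id"])
```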

    Array algorithms for H^2 and H^∞ estimation

    Currently, the preferred method for implementing H^2 estimation algorithms is what is called the array form, which includes two main families: square-root array algorithms, which are typically more stable than conventional ones, and fast array algorithms, which, when the system is time-invariant, typically offer an order-of-magnitude reduction in computational effort. Using our recent observation that H^∞ filtering coincides with Kalman filtering in Krein space, in this chapter we develop array algorithms for H^∞ filtering. These can be regarded as natural generalizations of their H^2 counterparts, and involve propagating the indefinite square roots of the quantities of interest. The H^∞ square-root and fast array algorithms both have the interesting feature that one does not need to explicitly check for the positivity conditions required for the existence of H^∞ filters. These conditions are built into the algorithms themselves, so that an H^∞ estimator of the desired level exists if, and only if, the algorithms can be executed. However, since H^∞ square-root algorithms predominantly use J-unitary transformations, rather than the unitary transformations required in the H^2 case, further investigation is needed to determine the numerical behavior of such algorithms.
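
    For reference, a minimal sketch of one step of the H^2 square-root array recursion that the chapter generalizes: stack the pre-array, triangularize it with a unitary (QR) rotation, and read the updated factors off the post-array. The H^∞ version replaces the unitary rotation with a J-unitary one, which plain QR does not provide.

```python
import numpy as np

def sqrt_array_step(P_sqrt, F, G, H, Q_sqrt, R_sqrt):
    """One H^2 square-root array step: propagates P^{1/2} rather than
    P, which is the source of the improved numerical stability."""
    n, m = F.shape[0], H.shape[0]
    q = Q_sqrt.shape[1]
    # Pre-array: [[R^{1/2}, H P^{1/2}, 0], [0, F P^{1/2}, G Q^{1/2}]].
    pre = np.zeros((m + n, m + n + q))
    pre[:m, :m] = R_sqrt
    pre[:m, m:m + n] = H @ P_sqrt
    pre[m:, m:m + n] = F @ P_sqrt
    pre[m:, m + n:] = G @ Q_sqrt
    # Triangularize: pre @ Theta is lower triangular for unitary Theta;
    # QR of the transpose yields its nonzero block directly.
    post = np.linalg.qr(pre.T)[1].T
    Re_sqrt = post[:m, :m]          # innovation covariance factor
    Kp_bar = post[m:, :m]           # equals K_p @ Re^{1/2}
    P_next_sqrt = post[m:, m:]      # updated covariance factor
    Kp = Kp_bar @ np.linalg.inv(Re_sqrt)
    return P_next_sqrt, Kp, Re_sqrt
```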