
    Learning to Transform Time Series with a Few Examples

    We describe a semi-supervised regression algorithm that learns to transform one time series into another time series given examples of the transformation. This algorithm is applied to tracking, where a time series of observations from sensors is transformed to a time series describing the pose of a target. Instead of defining and implementing such transformations for each tracking task separately, our algorithm learns a memoryless transformation of time series from a few example input-output mappings. The algorithm searches for a smooth function that fits the training examples and, when applied to the input time series, produces a time series that evolves according to assumed dynamics. The learning procedure is fast and lends itself to a closed-form solution. It is closely related to nonlinear system identification and manifold learning techniques. We demonstrate our algorithm on the tasks of tracking RFID tags from signal strength measurements and recovering the pose of rigid objects, deformable bodies, and articulated bodies from video sequences. For these tasks, this algorithm requires significantly fewer examples compared to fully-supervised regression algorithms or semi-supervised learning algorithms that do not take the dynamics of the output time series into account.
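    As a rough illustration of the kind of objective described (a smooth function fit to a few labeled pairs, regularized so the unlabeled outputs evolve according to assumed dynamics, with a closed-form solution), the sketch below uses kernel ridge regression with a first-difference dynamics penalty. The RBF kernel, identity dynamics, and regularization weights are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: semi-supervised "few examples" time-series transformation.
# Assumptions (not from the paper): an RBF kernel regressor, identity output
# dynamics (y_{t+1} ~ y_t), and illustrative regularizer weights.
import numpy as np

def rbf_kernel(X, gamma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def fit_transformation(X, labeled_idx, Y_labeled, lam_dyn=1.0, lam_reg=1e-3, gamma=1.0):
    """X: (T, d) input time series; Y_labeled: (m, k) outputs at labeled_idx."""
    T = X.shape[0]
    k = Y_labeled.shape[1]
    K = rbf_kernel(X, gamma)                       # (T, T) Gram matrix
    P = np.zeros((T, T))                           # selects the labeled time steps
    P[labeled_idx, labeled_idx] = 1.0
    Y_full = np.zeros((T, k))
    Y_full[labeled_idx] = Y_labeled
    D = np.eye(T, k=1)[:-1] - np.eye(T)[:-1]       # first-difference operator, (T-1, T)
    # Closed form for  min_W ||P(KW - Y)||^2 + lam_dyn ||D K W||^2 + lam_reg tr(W' K W)
    A = K @ P @ K + lam_dyn * K @ D.T @ D @ K + lam_reg * K
    b = K @ P @ Y_full
    W = np.linalg.solve(A + 1e-8 * np.eye(T), b)
    return K @ W                                   # estimated output time series, (T, k)
```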

    Entanglement Entropy in a Non-Conformal Background

    We use gauge-gravity duality to compute entanglement entropy in a non-conformal background with an energy scale $\Lambda$. At zero temperature, we observe that entanglement entropy decreases as $\Lambda$ is raised. At finite temperature, however, we find that $\frac{\Lambda}{T}$ and entanglement entropy rise together. Comparing the entanglement entropy of the non-conformal theory, $S_{A(N)}$, with that of its conformal theory in the UV limit, $S_{A(C)}$, reveals that $S_{A(N)}$ can be larger or smaller than $S_{A(C)}$, depending on the value of $\frac{\Lambda}{T}$. Comment: 5 pages, 3 figures, published version.
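    For context, holographic entanglement entropy computations of this kind are typically based on the Ryu-Takayanagi prescription; the schematic expression below is the standard one, and the specific non-conformal bulk geometry used in this work is not reproduced here.

```latex
% Standard Ryu-Takayanagi prescription (schematic; the particular
% non-conformal bulk metric of the paper is not shown here)
S_{A} \;=\; \frac{\mathrm{Area}(\gamma_{A})}{4 G_{N}},
\qquad \gamma_{A} = \text{minimal bulk surface anchored on } \partial A .
```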

    Comparison of Piezosurgery and Conventional Hand-pieces in Open Sinus Lifting Surgery

    Based on the ultrasonic principle, the piezoelectric transducer produces moderated frequencies used in the medical and industrial sciences. Among the highlights of this device is that, by adjusting the frequency, selective cutting can be restricted to mineralized tissues only. In many precise and delicate surgeries where the site is close to important anatomical structures, this property is used to make the surgery easier and safer. Open sinus lifting surgery is performed when there is insufficient bone to support an implant in the posterior maxilla, and it is essential to the success of the implant and the reconstruction of the dental system. Purpose: The aim of this study was to compare the intra-operative and post-operative effects of piezosurgery and conventional rotary instruments in the open sinus lifting procedure. Materials and methods: In this cross-sectional study, 23 patients requiring direct sinus lifting were enrolled. The osteotomy and sinus membrane elevation were performed either with piezosurgery tips or with rotary diamond and carbide burs and hand-piece membrane elevators. The time elapsed between bony window opening and completion of membrane elevation (duration), the incidence of membrane perforation, the visibility of the operation site, and postoperative pain and swelling were evaluated. Results: There was no significant difference between the piezosurgery and conventional groups regarding the incidence of membrane perforation and post-operative pain (P > 0.05). However, the duration was significantly longer, postoperative swelling greater, and operation site visibility and access poorer in the piezosurgery group compared with the hand-piece group (P < 0.05).

    Evaluation of Hard and Soft Tissue Changes Due to Implant Surgery First Phase with or without Using Topical Antibiotic (Tetracycline) Applying under Screw Cover

    Although implant treatment has become a common oral and dental treatment, surgeons and dentists still need a specified guideline for prescribing antibiotics to patients with this treatment plan. Bacterial agents are directly related to early failure of dental implants. One of the main causes of possible infection before the second-stage surgery, and a threat to implant success, is the dead space between the screw cover and the interior of the implant fixture, where infectious agents carried by saliva and blood can accumulate. These concerns have led some surgeons to apply a 1% prophylactic ointment to the internal surface of the fixture and under the screw cover to prevent possible complications. Purpose: The aim of this study was to determine the relationship between the use of 1% tetracycline ophthalmic ointment, applied under the screw cover during the first phase of implant surgery, and the reduction of fistula and inflammation, as well as the reduction of bone loss, at subsequent follow-ups. Materials and methods: Data on fistula and inflammation were collected by clinical observation, with the surgeon examining the indices of sinus tract formation at the cervical area and redness of the tissue. For bone loss, periapical radiographs of the patients were taken approximately 3 months later; after the images were processed in Photoshop, measurements were made with a digital caliper with a precision of 0.01 mm. Results: The difference in bone loss was not statistically significant (p > 0.05). The differences in inflammation and fistula formation were also not statistically significant (p > 0.05). Another finding was the ease of opening the screw cover in the test group, which is considered an advantage. However, since bacterial leakage under the screw cover was neither demonstrated nor reflected in statistically significant data in this study, no protocol can be established for the use or non-use of topical tetracycline.

    Selection of Wood Supply Contracts to Reduce Cost in the Presence of Risks in Procurement Planning

    Procurement activities in the pulp and paper industry account for an important part of the overall supply chain cost. Procurement decision-makers plan the required wood supply up to one year in advance to guarantee the supply volume for the continuous production process at their mill. Regular, flexible, and option contracts with suppliers in different groups are available; suppliers are grouped based on common characteristics such as forestland ownership. However, during the execution of the plan, sourcing risks affect procurement operations. If risks are not integrated into the procurement planning process, mitigating their impact is likely to be expensive and complicated, and additional expensive ad hoc contracts might be required to compensate for the lack of deliveries. To tackle this problem, the first project of this thesis develops a deterministic mathematical model of procurement operations. The objective of the model is to propose an annual procurement plan that minimizes the total cost of procurement operations. The operations are subject to constraints such as the minimum share of supply for each group of suppliers, inventory target levels, demand, woodyard capacity, and chipping process capacity. The decisions are related to the selection of sourcing contracts, the opening of woodyards, and wood supply flows. In the second project, the procurement plan from the deterministic model of the first project is evaluated using a Monte Carlo simulation approach. Three contract strategies are compared: fixed, flexible, and a mix of both. The simulation evaluates the performance of the plan by the expected value and variability of the total cost when the plan is executed over the planning horizon. In the third project, a two-stage stochastic programming approach is used to provide a reliable procurement plan. The objective of the model is to minimize the expected cost of the procurement plan in the presence of different scenarios generated from the sourcing risks. First-stage decisions are the selection of contracts in the first period and the opening of woodyards; second-stage decisions concern the selection of contracts starting after the first period, flows, inventory, and chipping process production. The case study used in this thesis is inspired by Domtar, a pulp and paper company located in Quebec, Canada. The results of the three projects support decision-makers in overcoming human limitations in performing complicated procurement planning. The developed mathematical models provide a basis for evaluating the selected procurement strategy, a task that is nearly impossible with the company's current approaches, since such evaluations require the formulation of sourcing risks. The stochastic programming approach shows better financial results compared to deterministic planning, with low variability in mitigating the impact of risks.
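    As a toy illustration of the two-stage stochastic contract-selection idea described above, the sketch below formulates a tiny model with PuLP: first-stage decisions pick which contracts to sign, second-stage decisions set per-scenario purchased volumes, and expensive ad hoc volume acts as recourse. All contract types, costs, yields, and scenario probabilities are invented placeholders, not the thesis's actual case-study data.

```python
# Hedged sketch: a toy two-stage stochastic contract-selection model in the
# spirit of the thesis. All numbers below are illustrative assumptions.
import pulp

contracts = ["regular", "flexible", "option"]
scenarios = {"normal": 0.6, "supply_shortfall": 0.4}        # probability weights
fixed_cost = {"regular": 100.0, "flexible": 140.0, "option": 40.0}
unit_cost = {"regular": 10.0, "flexible": 12.0, "option": 15.0}
max_volume = {"regular": 500.0, "flexible": 700.0, "option": 300.0}
# Deliverable fraction of contracted volume under each scenario (simple risk model).
yield_frac = {"normal": {c: 1.0 for c in contracts},
              "supply_shortfall": {"regular": 0.7, "flexible": 0.9, "option": 1.0}}
demand = 800.0
ad_hoc_cost = 30.0                                           # recourse purchase, per unit

m = pulp.LpProblem("wood_supply_two_stage", pulp.LpMinimize)
sign = pulp.LpVariable.dicts("sign", contracts, cat="Binary")           # 1st stage
buy = pulp.LpVariable.dicts("buy", [(c, s) for c in contracts for s in scenarios],
                            lowBound=0)                                 # 2nd stage
ad_hoc = pulp.LpVariable.dicts("ad_hoc", scenarios, lowBound=0)         # recourse

# Expected total cost: signing cost + expected purchase and ad hoc costs.
m += (pulp.lpSum(fixed_cost[c] * sign[c] for c in contracts)
      + pulp.lpSum(scenarios[s] * unit_cost[c] * buy[(c, s)]
                   for c in contracts for s in scenarios)
      + pulp.lpSum(scenarios[s] * ad_hoc_cost * ad_hoc[s] for s in scenarios))

for s in scenarios:
    # Demand must be met in every scenario, possibly with expensive ad hoc volume.
    m += (pulp.lpSum(yield_frac[s][c] * buy[(c, s)] for c in contracts)
          + ad_hoc[s] >= demand)
    for c in contracts:
        # A contract can deliver only if it was signed in the first stage.
        m += buy[(c, s)] <= max_volume[c] * sign[c]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({c: int(sign[c].value()) for c in contracts},
      {s: ad_hoc[s].value() for s in scenarios})
```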

    Towards adaptive deep model-based reinforcement learning

    One of the key behavioral characteristics used in neuroscience to determine whether the subject of study, be it a rodent or a human, exhibits model-based learning is effective adaptation to local changes in the environment. In reinforcement learning (RL), however, we demonstrate, using an improved version of the recently introduced Local Change Adaptation (LoCA) setup, that well-known model-based reinforcement learning (MBRL) methods such as PlaNet and DreamerV2 perform poorly in their ability to adapt to local environmental changes. Combined with prior work that made a similar observation about another popular model-based method, MuZero, a trend appears to emerge, suggesting that current deep MBRL methods have serious limitations. We dive deeper into the causes of this poor performance by identifying the elements that hurt adaptive behavior and linking them to underlying techniques frequently used in deep model-based RL, both in learning the world model and in the planning routine. Our findings show that a particularly challenging requirement for deep MBRL methods is attaining a world model that is sufficiently accurate throughout the relevant parts of the state space, which is difficult because of catastrophic forgetting. While a replay buffer can mitigate the effects of catastrophic forgetting, the traditional first-in-first-out replay buffer precludes effective adaptation because it retains stale data. We show that a conceptually simple variation of this traditional replay buffer is able to overcome this limitation: by removing only those samples in the buffer that lie in the local neighbourhood of newly observed samples, deep world models can be built that maintain their accuracy across the state space while still adapting effectively to local changes in the reward function. We demonstrate this by applying our replay-buffer variation to a deep version of the classical Dyna method, as well as to recent methods such as PlaNet and DreamerV2, showing that deep model-based methods can also adapt effectively to local changes in the environment.
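    A minimal sketch of the replay-buffer variation described above is given below: instead of pure first-in-first-out eviction, stored transitions whose states fall within a local neighbourhood of a newly observed state are dropped first. The Euclidean distance radius and the FIFO fallback are illustrative assumptions; the actual agents (PlaNet, DreamerV2) operate on images or latent states rather than the raw state vectors assumed here.

```python
# Hedged sketch of the described replay-buffer variation: evict stored
# transitions near a newly observed state (local forgetting), so the buffer
# keeps older data only for regions of the state space that have not changed.
from collections import deque
import numpy as np

class LocalForgettingReplayBuffer:
    def __init__(self, capacity, radius):
        self.capacity = capacity
        self.radius = radius          # neighbourhood radius (illustrative assumption)
        self.buffer = deque()

    def add(self, state, action, reward, next_state):
        state = np.asarray(state, dtype=float)
        # Drop old transitions whose stored state lies near the new state.
        self.buffer = deque(
            t for t in self.buffer
            if np.linalg.norm(t[0] - state) > self.radius
        )
        self.buffer.append((state, action, reward, np.asarray(next_state, dtype=float)))
        # Fall back to FIFO eviction only if the buffer is still over capacity.
        while len(self.buffer) > self.capacity:
            self.buffer.popleft()

    def sample(self, batch_size, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        idx = rng.choice(len(self.buffer),
                         size=min(batch_size, len(self.buffer)), replace=False)
        return [self.buffer[i] for i in idx]
```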

    Unsupervised Regression with Applications to Nonlinear System Identification

    We derive a cost functional for estimating the relationship between high-dimensional observations and the low-dimensional process that generated them, with no input-output examples. Limiting our search to invertible observation functions confers numerous benefits, including a compact representation and no suboptimal local minima. Our approximation algorithms for optimizing this cost functional are fast and give diagnostic bounds on the quality of their solution. Our method can be viewed as a manifold learning algorithm that utilizes a prior on the low-dimensional manifold coordinates. The benefits of taking advantage of such priors in manifold learning, and of searching for the inverse observation function in system identification, are demonstrated empirically by learning to track moving targets from raw measurements in a sensor network setting and in an RFID tracking experiment.
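    To make the setup concrete, a generic unsupervised-regression objective with a linear-dynamics prior on the low-dimensional coordinates can be written as follows. This is only a schematic of the ingredients described (an invertible observation map and a prior on the manifold coordinates), not the cost functional actually derived in the paper.

```latex
% Schematic only: an objective coupling an invertible map g from observations
% x_t to latent coordinates y_t = g(x_t) with a linear-dynamics prior;
% the paper's actual cost functional differs in detail.
\min_{g \,\in\, \mathcal{G}_{\mathrm{inv}}} \;
\sum_{t} \big\| g(x_{t+1}) - A\, g(x_t) \big\|^{2}
\;+\; \lambda\, \Omega(g)
```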