33 research outputs found

    Algorithms for Estimating Trends in Global Temperature Volatility

    Trends in terrestrial temperature variability are perhaps more relevant for species viability than trends in mean temperature. In this paper, we develop methodology for estimating such trends using multi-resolution climate data from polar-orbiting weather satellites. We derive two novel algorithms for computation that are tailored to dense, gridded observations over both space and time. We evaluate our methods on a simulation that mimics the features of these data and on a large, publicly available, global temperature dataset, with the eventual goal of tracking trends in cloud-reflectance temperature variability. Comment: Published in AAAI-1
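    The abstract does not spell out the two algorithms, so the following is only a minimal sketch of the general idea, assuming a moving-window standard deviation as the volatility measure and an ordinary least-squares slope as the trend estimate; the window length and the synthetic series are invented for illustration.

    ```python
    import numpy as np

    def volatility_trend(temps, window=30):
        """Slope of a least-squares line fit to the moving standard
        deviation of a temperature series (one grid cell)."""
        # Moving-window standard deviation as a crude volatility proxy.
        vol = np.array([temps[i:i + window].std()
                        for i in range(len(temps) - window + 1)])
        t = np.arange(len(vol))
        slope, _ = np.polyfit(t, vol, 1)  # returns [slope, intercept]
        return slope

    # Synthetic series whose noise level grows over time, so the
    # estimated volatility trend should come out positive.
    rng = np.random.default_rng(0)
    n = 1000
    temps = 15 + rng.normal(0.0, 1.0 + 0.001 * np.arange(n))
    print(volatility_trend(temps))
    ```

    On gridded satellite data one would run such an estimator per grid cell; the paper's contribution is doing this efficiently at scale over space and time, which this sketch does not attempt.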

    Collection and classification of services and their context

    SOA provides new means for interoperability of business logic and flexible integration of independent systems by introducing and promoting Web Services. Since its introduction in the previous decade, it has gained considerable traction in industry and among researchers. However, the idea faces several problems, one of the first being how service consumers find Web Services. The initial design of SOA placed a service registry between consumers and providers, but in practice this registry was not accepted by industry, and service providers do not register their services. Much SOA research assumes that such a registry exists, yet a repository of services is a prerequisite for that research. The Internet holds many Web Services, published every day by entities and individuals such as companies, public institutions, universities, and private developers. Because general-purpose search engines index all kinds of information, it is difficult for service consumers to use them to find desired services quickly and to restrict search results to Web Services. Vertical search engines specialized in Web Services have been proposed, as has the notion of brokerage to assist consumers in finding and choosing their desired services. A central requirement of both solutions is a repository of Web Services.

    In this thesis we explore methodologies for finding services and building this repository. We survey and harvest three main types of service descriptions: WSDL, WADL, and Web pages describing RESTful services. To do so, we extract data from previously known repositories, query search engines, and use Web crawlers. To make finding compatible Web Services in the brokerage more effective and faster, whether for service composition or for suggesting services in response to requests, the high-level functionality of each service must be determined. Because there is no structured support for specifying such functionality, the services must be classified into a set of abstract categories. We therefore investigate automatic classification of the harvested service descriptions, employing a wide range of Machine Learning and Signal Processing algorithms and techniques to find the highest precision achievable, within the scope of this thesis, for each type of service description. In addition, we show the importance and effect of contextual information on the classification of service descriptions and demonstrate that it improves precision; to this end, we gather and store contextual information related to the service descriptions from their sources. The result of this effort is a repository of classified service descriptions.
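    The thesis evaluates many classifiers without naming one in this abstract, so the sketch below is only a generic stand-in for classifying harvested service descriptions into abstract categories: TF-IDF features feeding a linear model via scikit-learn. The toy descriptions and category labels are invented.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy corpus standing in for harvested WSDL/WADL/REST descriptions.
    docs = [
        "getWeatherForecast returns temperature and humidity for a city",
        "currency exchange rate conversion between USD and EUR",
        "send SMS text message to a phone number",
        "stock quote lookup by ticker symbol",
    ]
    labels = ["weather", "finance", "messaging", "finance"]

    # TF-IDF features feeding a linear classifier.
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(docs, labels)

    print(clf.predict(["exchange USD to EUR at the current rate"]))
    ```

    Contextual information, which the thesis shows improves precision, could be appended to each document as extra text features in this setup.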

    Prevalence of Common Aeroallergens in Patients with Allergic Rhinitis in Gorgan, North of Iran, Based on Skin Prick Test Reactivity

    Background: Allergic rhinitis is one of the most common types of rhinitis, and allergen avoidance is the most important way of preventing the disease. The present study was carried out to determine the frequency of common aeroallergens among patients with allergic rhinitis in Gorgan city using skin prick test (SPT) reactivity.
    Materials and Methods: In this cross-sectional study, 270 patients referred to the Asthma and Allergy Center in Gorgan city, Iran, were enrolled. The diagnosis of allergic rhinitis was confirmed by an asthma and allergy specialist. A questionnaire covering demographic data and patient history was completed, and a skin prick test with standard allergen extracts, histamine, and physiologic serum was performed on each patient. The data were analyzed using SPSS software, version 16.0.
    Results: The 270 patients (113 males and 157 females) had perennial allergic rhinitis (PAR), seasonal allergic rhinitis (SAR), or mixed allergic rhinitis (MAR) (n=166, 54, and 47, respectively). The most common aeroallergen was the house dust mite Dermatophagoides pteronyssinus (43.7%); other common allergens were weeds (40.7%), Dermatophagoides farinae (40.4%), grasses (32.5%), beetles (30%), trees (22.5%), and molds (16.3%). There was a significant relationship between the prevalence of allergy to grasses and gender (P=0.016), and between weeds and age (
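    As a hedged illustration of the kind of significance test the abstract reports (the exact SPSS procedure is not stated), the sketch below runs a chi-square test of independence on a hypothetical gender-by-reactivity table; the row totals match the reported 113 males and 157 females, but the cell split is invented.

    ```python
    from scipy.stats import chi2_contingency

    # Hypothetical 2x2 table: rows = gender, columns = SPT reactivity
    # to grasses (positive, negative). Cell counts are invented.
    table = [[45, 68],     # males:   positive, negative
             [43, 114]]    # females: positive, negative

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
    ```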

    Learning to Allocate Limited Time to Decisions with Different Expected Outcomes

    The goal of this article is to investigate how human participants allocate their limited time to decisions with different properties. We report the results of two behavioral experiments. In each trial, the participant had to accumulate noisy information to make a decision, receiving positive rewards for correct decisions and negative rewards for incorrect ones. The stimulus was designed so that decisions based on more accumulated information were more accurate but took longer. The total outcome a participant could achieve during the experiment's limited time therefore depended on her "decision threshold", the amount of information she required before making a decision. In the first experiment, two types of trials, hard and easy, were randomly intermixed; crucially, the hard trials carried smaller positive and negative rewards than the easy trials. A cue presented at the beginning of each trial indicated the type of the upcoming trial. The optimal strategy was to adopt a small decision threshold for hard trials, yet several participants did not learn this simple strategy. We then investigated how participants adjusted their decision threshold based on the feedback they received in each trial. To this end, we developed and compared 10 computational models for adjusting the decision threshold; the models differ in their assumptions about the shape of the decision thresholds and the way feedback is used to adjust them. Bayesian model comparison showed that a model with time-varying thresholds whose parameters are updated by a reinforcement learning algorithm is the most likely model. In the second experiment, the cues were not presented. We show that the optimal strategy is then to use a single time-decreasing decision threshold for all trials. The computational modeling showed that participants did not use this optimal strategy; instead, they attempted to detect the difficulty of each trial first and then set their decision threshold accordingly.
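    The winning model (time-varying thresholds updated by reinforcement learning) is not described in enough detail in this abstract to reproduce, so the following is a simplified stand-in: a random-walk evidence accumulator with a scalar threshold nudged by a reward-driven delta rule. All parameter values and the exact update rule are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def run_trial(threshold, drift=0.1, noise=1.0):
        """Accumulate noisy evidence until |evidence| crosses the
        threshold; crossing the positive bound counts as correct."""
        evidence, t = 0.0, 0
        while abs(evidence) < threshold:
            evidence += drift + noise * rng.normal()
            t += 1
        return evidence > 0, t

    # Reward-driven delta rule on a scalar threshold: errors raise it
    # (demand more evidence next time), correct responses lower it
    # slightly (favor speed, since session time is limited).
    threshold, lr = 2.0, 0.05
    for _ in range(500):
        correct, rt = run_trial(threshold)
        threshold = max(0.2, threshold + lr * (1.5 if not correct else -0.1))
    print(f"learned threshold: {threshold:.2f}")
    ```

    A higher threshold trades longer response times for accuracy, which is exactly the speed-accuracy trade-off the participants had to negotiate under a fixed session length.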