15 research outputs found

    Contexts for nonmonotonic RMSes

    A new kind of RMS, based on a close merge of the TMS and the ATMS, is proposed. It uses the TMS graph and interpretation together with the ATMS multiple-context labelling procedure. To overcome the problems that ATMS environments encounter in the presence of nonmonotonic inferences, a new kind of environment is defined, one able to take into account hypotheses that do not hold. These environments can inherit formulas that hold, as in the ATMS context lattice. The dependency graph can then be interpreted with respect to these environments, so every node can be labelled. Furthermore, this leads to considering several possible interpretations of a query.
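
    As an illustration of the extended environments described above, the following sketch (not the paper's actual construction) represents an environment by the set of hypotheses assumed to hold and the set assumed not to hold, with a subsumption test playing the role of inheritance in the ATMS context lattice.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Environment:
    """An ATMS-style environment extended for nonmonotonic inference:
    it records hypotheses assumed to hold (in_hyps) and hypotheses
    assumed NOT to hold (out_hyps). Purely illustrative."""
    in_hyps: frozenset
    out_hyps: frozenset

    def is_consistent(self) -> bool:
        # A hypothesis cannot be assumed both to hold and not to hold.
        return not (self.in_hyps & self.out_hyps)

    def subsumes(self, other: "Environment") -> bool:
        """A formula holding in this environment is inherited by any
        more specific environment, as in the ATMS context lattice."""
        return (self.in_hyps <= other.in_hyps) and (self.out_hyps <= other.out_hyps)

e1 = Environment(frozenset({"a"}), frozenset({"b"}))        # a holds, b assumed out
e2 = Environment(frozenset({"a", "c"}), frozenset({"b"}))   # a more specific context
print(e1.is_consistent(), e1.subsumes(e2))                  # True True
```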

    ์‹ฌ์ธตํ•™์Šต์„ ์ด์šฉํ•œ ์•ก์ฒด๊ณ„์˜ ์„ฑ์งˆ ์˜ˆ์ธก

    ํ•™์œ„๋…ผ๋ฌธ(๋ฐ•์‚ฌ)--์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› :์ž์—ฐ๊ณผํ•™๋Œ€ํ•™ ํ™”ํ•™๋ถ€,2020. 2. ์ •์—ฐ์ค€.์ตœ๊ทผ ๊ธฐ๊ณ„ํ•™์Šต ๊ธฐ์ˆ ์˜ ๊ธ‰๊ฒฉํ•œ ๋ฐœ์ „๊ณผ ์ด์˜ ํ™”ํ•™ ๋ถ„์•ผ์— ๋Œ€ํ•œ ์ ์šฉ์€ ๋‹ค์–‘ํ•œ ํ™”ํ•™์  ์„ฑ์งˆ์— ๋Œ€ํ•œ ๊ตฌ์กฐ-์„ฑ์งˆ ์ •๋Ÿ‰ ๊ด€๊ณ„๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ ์˜ˆ์ธก ๋ชจํ˜•์˜ ๊ฐœ๋ฐœ์„ ๊ฐ€์†ํ•˜๊ณ  ์žˆ๋‹ค. ์šฉ๋งคํ™” ์ž์œ  ์—๋„ˆ์ง€๋Š” ๊ทธ๋Ÿฌํ•œ ๊ธฐ๊ณ„ํ•™์Šต์˜ ์ ์šฉ ์˜ˆ์ค‘ ํ•˜๋‚˜์ด๋ฉฐ ๋‹ค์–‘ํ•œ ์šฉ๋งค ๋‚ด์˜ ํ™”ํ•™๋ฐ˜์‘์—์„œ ์ค‘์š”ํ•œ ์—ญํ• ์„ ํ•˜๋Š” ๊ทผ๋ณธ์  ์„ฑ์งˆ ์ค‘ ํ•˜๋‚˜์ด๋‹ค. ๋ณธ ์—ฐ๊ตฌ์—์„œ ์šฐ๋ฆฌ๋Š” ๋ชฉํ‘œ๋กœ ํ•˜๋Š” ์šฉ๋งคํ™” ์ž์œ  ์—๋„ˆ์ง€๋ฅผ ์›์ž๊ฐ„์˜ ์ƒํ˜ธ์ž‘์šฉ์œผ๋กœ๋ถ€ํ„ฐ ๊ตฌํ•  ์ˆ˜ ์žˆ๋Š” ์ƒˆ๋กœ์šด ์‹ฌ์ธตํ•™์Šต ๊ธฐ๋ฐ˜ ์šฉ๋งคํ™” ๋ชจํ˜•์„ ์†Œ๊ฐœํ•œ๋‹ค. ์ œ์•ˆ๋œ ์‹ฌ์ธตํ•™์Šต ๋ชจํ˜•์˜ ๊ณ„์‚ฐ ๊ณผ์ •์€ ์šฉ๋งค์™€ ์šฉ์งˆ ๋ถ„์ž์— ๋Œ€ํ•œ ๋ถ€ํ˜ธํ™” ํ•จ์ˆ˜๊ฐ€ ๊ฐ ์›์ž์™€ ๋ถ„์ž๋“ค์˜ ๊ตฌ์กฐ์  ์„ฑ์งˆ์— ๋Œ€ํ•œ ๋ฒกํ„ฐ ํ‘œํ˜„์„ ์ถ”์ถœํ•˜๋ฉฐ, ์ด๋ฅผ ํ† ๋Œ€๋กœ ์›์ž๊ฐ„ ์ƒํ˜ธ์ž‘์šฉ์„ ๋ณต์žกํ•œ ํผ์…‰ํŠธ๋ก  ์‹ ๊ฒฝ๋ง ๋Œ€์‹  ๋ฒกํ„ฐ๊ฐ„์˜ ๊ฐ„๋‹จํ•œ ๋‚ด์ ์œผ๋กœ ๊ตฌํ•  ์ˆ˜ ์žˆ๋‹ค. 952๊ฐ€์ง€์˜ ์œ ๊ธฐ์šฉ์งˆ๊ณผ 147๊ฐ€์ง€์˜ ์œ ๊ธฐ์šฉ๋งค๋ฅผ ํฌํ•จํ•˜๋Š” 6,493๊ฐ€์ง€์˜ ์‹คํ—˜์น˜๋ฅผ ํ† ๋Œ€๋กœ ๊ธฐ๊ณ„ํ•™์Šต ๋ชจํ˜•์˜ ๊ต์ฐจ ๊ฒ€์ฆ ์‹œํ—˜์„ ์‹ค์‹œํ•œ ๊ฒฐ๊ณผ, ํ‰๊ท  ์ ˆ๋Œ€ ์˜ค์ฐจ ๊ธฐ์ค€ 0.2 kcal/mol ์ˆ˜์ค€์œผ๋กœ ๋งค์šฐ ๋†’์€ ์ •ํ™•๋„๋ฅผ ๊ฐ€์ง„๋‹ค. ์Šค์บํด๋“œ-๊ธฐ๋ฐ˜ ๊ต์ฐจ ๊ฒ€์ฆ์˜ ๊ฒฐ๊ณผ ์—ญ์‹œ 0.6 kcal/mol ์ˆ˜์ค€์œผ๋กœ, ์™ธ์‚ฝ์œผ๋กœ ๋ถ„๋ฅ˜ํ•  ์ˆ˜ ์žˆ๋Š” ๋น„๊ต์  ์ƒˆ๋กœ์šด ๋ถ„์ž ๊ตฌ์กฐ์— ๋Œ€ํ•œ ์˜ˆ์ธก์— ๋Œ€ํ•ด์„œ๋„ ์šฐ์ˆ˜ํ•œ ์ •ํ™•๋„๋ฅผ ๋ณด์ธ๋‹ค. ๋˜ํ•œ, ์ œ์•ˆ๋œ ํŠน์ • ๊ธฐ๊ณ„ํ•™์Šต ๋ชจํ˜•์€ ๊ทธ ๊ตฌ์กฐ ์ƒ ํŠน์ • ์šฉ๋งค์— ํŠนํ™”๋˜์ง€ ์•Š์•˜๊ธฐ ๋•Œ๋ฌธ์— ๋†’์€ ์–‘๋„์„ฑ์„ ๊ฐ€์ง€๋ฉฐ ํ•™์Šต์— ์ด์šฉํ•  ๋ฐ์ดํ„ฐ์˜ ์ˆ˜๋ฅผ ๋Š˜์ด๋Š” ๋ฐ ์šฉ์ดํ•˜๋‹ค. ์›์ž๊ฐ„ ์ƒํ˜ธ์ž‘์šฉ์— ๋Œ€ํ•œ ๋ถ„์„์„ ํ†ตํ•ด ์ œ์•ˆ๋œ ์‹ฌ์ธตํ•™์Šต ๋ชจํ˜• ์šฉ๋งคํ™” ์ž์œ  ์—๋„ˆ์ง€์— ๋Œ€ํ•œ ๊ทธ๋ฃน-๊ธฐ์—ฌ๋„๋ฅผ ์ž˜ ์žฌํ˜„ํ•  ์ˆ˜ ์žˆ์Œ์„ ์•Œ ์ˆ˜ ์žˆ์œผ๋ฉฐ, ๊ธฐ๊ณ„ํ•™์Šต์„ ํ†ตํ•ด ๋‹จ์ˆœํžˆ ๋ชฉํ‘œ๋กœ ํ•˜๋Š” ์„ฑ์งˆ๋งŒ์„ ์˜ˆ์ธกํ•˜๋Š” ๊ฒƒ์„ ๋„˜์–ด ๋”์šฑ ์ƒ์„ธํ•œ ๋ฌผ๋ฆฌํ™”ํ•™์  ์ดํ•ด๋ฅผ ํ•˜๋Š” ๊ฒƒ์ด ๊ฐ€๋Šฅํ•  ๊ฒƒ์ด๋ผ ๊ธฐ๋Œ€ํ•  ์ˆ˜ ์žˆ๋‹ค.Recent advances in machine learning technologies and their chemical applications lead to the developments of diverse structure-property relationship based prediction models for various chemical properties; the free energy of solvation is one of them and plays a dominant role as a fundamental measure of solvation chemistry. Here, we introduce a novel machine learning-based solvation model, which calculates the target solvation free energy from pairwise atomistic interactions. The novelty of our proposed solvation model involves rather simple architecture: two encoding function extracts vector representations of the atomic and the molecular features from the given chemical structure, while the inner product between two atomistic features calculates their interactions, instead of black-boxed perceptron networks. The cross-validation result on 6,493 experimental measurements for 952 organic solutes and 147 organic solvents achieves an outstanding performance, which is 0.2 kcal/mol in MUE. The scaffold-based split method exhibits 0.6 kcal/mol, which shows that the proposed model guarantees reasonable accuracy even for extrapolated cases. Moreover, the proposed model shows an excellent transferability for enlarging training data due to its solvent-non-specific nature. 
Analysis of the atomistic interaction map shows there is a great potential that our proposed model reproduces group contributions on the solvation energy, which makes us believe that the proposed model not only provides the predicted target property, but also gives us more detailed physicochemical insights.1. Introduction 1 2. Delfos: Deep Learning Model for Prediction of Solvation Free Energies in Generic Organic Solvents 7 2.1. Methods 7 2.1.1. Embedding of Chemical Contexts 7 2.1.2. Encoder-Predictor Network 9 2.2. Results and Discussions 13 2.2.1. Computational Setup and Results 13 2.2.2. Transferability of the Model for New Compounds 17 2.2.3. Visualization of Attention Mechanism 26 3. Group Contribution Method for the Solvation Energy Estimation with Vector Representations of Atom 29 3.1. Model Description 29 3.1.1. Word Embedding 29 3.1.2. Network Architecture 33 3.2. Results and Discussions 39 3.2.1. Computational Details 39 3.2.2. Prediction Accuracy 42 3.2.3. Model Transferability 44 3.2.4. Group Contributions of Solvation Energy 49 4. Empirical Structure-Property Relationship Model for Liquid Transport Properties 55 5. Concluding Remarks 61 A. Analyzing Kinetic Trapping as a First-Order Dynamical Phase Transition in the Ensemble of Stochastic Trajectories 65 A1. Introduction 65 A2. Theory 68 A3. Lattice Gas Model 70 A4. Mathematical Model 73 A5. Dynamical Phase Transitions 75 A6. Conclusion 82 B. Reaction-Path Thermodynamics of the Michaelis-Menten Kinetics 85 B1. Introduction 85 B2. Reaction Path Thermodynamics 88 B3. Fixed Observation Time 94 B4. Conclusions 101Docto
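
    The inner-product interaction described above can be illustrated with a minimal sketch. The encoder below is a random stand-in for the learned encoding functions, and summing the pairwise terms into a single energy is an assumption made for illustration; this is not the thesis's actual Delfos or group-contribution architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_atoms(num_atoms: int, dim: int = 16) -> np.ndarray:
    """Stand-in for a learned encoder: one feature vector per atom.
    In the real model these vectors would come from a trained network."""
    return rng.normal(size=(num_atoms, dim))

def solvation_energy(solute_feats: np.ndarray, solvent_feats: np.ndarray) -> float:
    """Sum of pairwise atomistic interactions, each computed as an inner
    product between a solute-atom vector and a solvent-atom vector."""
    interaction_map = solute_feats @ solvent_feats.T   # shape (n_solute, n_solvent)
    return float(interaction_map.sum())

solute = encode_atoms(9)    # hypothetical ethanol-sized solute
solvent = encode_atoms(3)   # hypothetical water-sized solvent
print(solvation_energy(solute, solvent))
```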

    Revisiting the Current Issues in Multilevel Structural Equation Modeling (MSEM): The Application of Sampling Weights and the Test of Measurement Invariance in MSEM

    Multilevel structural equation modeling (MSEM) has been widely used throughout the applied social and behavioral sciences. This dissertation revisited current issues in MSEM, including the application of sampling weights and the test of measurement invariance. The impact of using sampling weights on testing multilevel mediation effects in large-scale, complex survey data was evaluated in Study 1, which compared design-based, weighted design-based, model-based, and weighted model-based approaches under a noninformative sampling design. First, the model-based approaches produced unbiased indirect-effect estimates and smaller standard errors. Second, ignoring sampling weights led to substantial bias in the design-based approaches. Finally, in the model-based approaches, weighted parameter estimates and standard errors differed moderately from unweighted results. The model-based approaches were therefore recommended for testing multilevel mediation effects in large-scale, complex survey data, and researchers were encouraged always to apply sampling weights in analysis; the advantages of doing so in the model-based approach were less obvious when cluster sizes were large, and particularly when the ICC was small. Evaluating various goodness-of-fit indices for testing measurement invariance has been a focus over the past decade, and Study 2 extended this investigation to MSEM. ICC and between-group difference accounted for a large proportion of the variance in model fit change. Among the five fit indices investigated (χ², CFI, RMSEA, SRMR, and TLI), ΔCFI and ΔSRMR in the level-specific approach gave results identical to those of the standard approach. ΔSRMR_B (the between-level ΔSRMR) appeared to be the most sensitive of all the criteria to noninvariant factor loadings, and it performed equally well in detecting a lack of intercept invariance when the between-group difference was large. ΔRMSEA was less sensitive. Fractional changes in ΔCFI and ΔTLI indicated that neither was sensitive, regardless of whether the level-specific or the standard approach was used. Δχ² was able to detect noninvariant intercepts when the between-group difference was large, but detected noninvariant factor loadings only when both the ICC and the between-group difference were large. In conclusion, the level-specific ΔSRMR_B was recommended as the primary index for examining between-level factor-loading and intercept invariance in MSEM, with Δχ² as a supplementary index.
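
    As a rough illustration of how such Δ-fit-index screening is typically coded, the sketch below compares the fit of a configural and a constrained model. The cutoff values are common rules of thumb from the invariance literature, not results from Study 2, and the fit values are hypothetical.

```python
# Illustrative screen for measurement invariance based on changes in fit
# indices between a configural (unconstrained) and a constrained model.

def invariance_flags(fit_configural: dict, fit_constrained: dict,
                     cutoffs: dict = None) -> dict:
    """Return (change, within-cutoff?) for each fit index."""
    if cutoffs is None:
        # Conventional rules of thumb, used here only for illustration.
        cutoffs = {"cfi": 0.010, "rmsea": 0.015, "srmr": 0.030}
    flags = {}
    for index, limit in cutoffs.items():
        delta = abs(fit_constrained[index] - fit_configural[index])
        flags[f"delta_{index}"] = (round(delta, 4), delta <= limit)
    return flags

# Hypothetical fit values for two nested models.
configural = {"cfi": 0.972, "rmsea": 0.041, "srmr": 0.036}
constrained = {"cfi": 0.958, "rmsea": 0.049, "srmr": 0.061}
print(invariance_flags(configural, constrained))
```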

    Value Measurement for New Product Category: a Conjoint Approach to Eliciting Value Structure

    The ability to measure value from the customer's point of view is central to the determination of market offerings: customers will only buy the equivalent of perceived value, and companies can only offer benefits that cost less to provide than customers are willing to pay. Conjoint analysis is the most popular individual-level value measurement method for determining the relative impact of product or service attributes on preferences and other dependent variables. This research focuses on how value measurement can be made more accurate and more reliable, first by measuring the relative influence of selected methodological variations on predictive performance and on the stability of the value structure, and second by grouping customers with similar value structures into segments that respond to product stimuli in a similar manner. The influences of the type of attributes included in the conjoint task, the factorial design used to construct the product profiles, the type and form of model, the time of measurement, and the type of cluster-based segmentation method are evaluated. Data were gathered with a questionnaire that controlled for methodological variations, using a notebook computer as the measurement object; one repeated measurement was taken. The study was conducted in two phases. In Phase I, the influences of methodological variations on predictive accuracy and on the respective value structures were examined. In Phase II, different cluster-based segmentation methods--hierarchical clustering (HIC), non-hierarchical clustering (NHC), and fuzzy c-means clustering (FUC)--and the corresponding conjoint models were evaluated for their predictive performance and compared with individual-level conjoint models. Results show that, across a variety of design parameters, the best models are traditional individual-level, main-effects-only conjoint models; neither modeling of interactions nor segment-level conjoint models improved prediction. The best segment-level conjoint models were obtained with a fuzzy clustering method, while the worst were obtained with k-means and the fuzziest clustering approach. In conclusion, conjoint analysis proves to be a reliable method for measuring individual customer value. For improving predictive accuracy, it appears more rewarding to apply repeated measures, or to gather additional data about the respondent, than to attempt improvements through methodological variations within a single measurement.
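
    As a minimal sketch of the individual-level, main-effects-only conjoint models that performed best here, the code below estimates one respondent's part-worths by dummy-coded least squares. The attributes, levels, and ratings are invented for illustration and are not data from this study.

```python
import numpy as np

# Hypothetical notebook-computer profiles: each row is (RAM level, price level),
# with levels coded 0, 1, 2; ratings are one respondent's preference scores.
profiles = np.array([
    [0, 0], [0, 1], [0, 2],
    [1, 0], [1, 1], [1, 2],
])
ratings = np.array([3.0, 2.0, 1.0, 6.0, 5.0, 3.5])

def dummy_code(col, n_levels):
    """Drop-first dummy coding for one attribute column."""
    return np.column_stack([(col == lvl).astype(float) for lvl in range(1, n_levels)])

X = np.column_stack([
    np.ones(len(profiles)),            # intercept
    dummy_code(profiles[:, 0], 2),     # RAM: 2 levels -> 1 dummy
    dummy_code(profiles[:, 1], 3),     # price: 3 levels -> 2 dummies
])

# Main-effects-only part-worths for this respondent via ordinary least squares.
part_worths, *_ = np.linalg.lstsq(X, ratings, rcond=None)
print(dict(zip(["intercept", "ram_high", "price_mid", "price_high"],
               part_worths.round(2))))
```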

    Ranking to Learn and Learning to Rank: On the Role of Ranking in Pattern Recognition Applications

    The last decade has seen a revolution in the theory and application of machine learning and pattern recognition. Through these advancements, variable ranking has emerged as an active and growing research area, and it is now beginning to be applied to many new problems. The rationale is that many pattern recognition problems are by nature ranking problems. The main objective of a ranking algorithm is to sort objects according to some criterion, so that the most relevant items appear early in the produced result list. Ranking methods can be analyzed from two methodological perspectives: ranking to learn and learning to rank. The former studies methods and techniques for sorting objects in order to improve the accuracy of a machine learning model. Enhancing a model's performance can be challenging: in pattern classification tasks, for example, different data representations can complicate and hide the explanatory factors of variation behind the data. In particular, hand-crafted features contain many cues that are either redundant or irrelevant, which reduces the overall accuracy of the classifier. In such cases feature selection is used; by producing ranked lists of features, it helps filter out the unwanted information. Moreover, in real-time systems (e.g., visual trackers), ranking approaches are used as optimization procedures that improve the robustness of systems dealing with highly variable image streams that change over time. Conversely, learning to rank is necessary for constructing ranking models in information retrieval, biometric authentication, re-identification, and recommender systems. In this context, the ranking model's purpose is to sort objects according to their degree of relevance, importance, or preference as defined by the specific application. Comment: European PhD thesis. arXiv admin note: text overlap with arXiv:1601.06615, arXiv:1505.06821, arXiv:1704.02665 by other authors.
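
    A minimal sketch of the "ranking to learn" use of feature selection mentioned above: score each feature, produce a ranked list, and keep only the top of it. The mutual-information criterion and the synthetic data are illustrative choices, not the methods proposed in the thesis.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

# Synthetic classification data: 20 features, only 5 of which are informative.
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           n_redundant=5, random_state=0)

# Score each feature, produce a ranked list, and keep the top k.
scores = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(scores)[::-1]          # feature indices, best first
top_k = ranking[:5]
print("ranked features:", ranking.tolist())
print("selected subset:", top_k.tolist())
X_selected = X[:, top_k]                    # filtered design matrix for a classifier
```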

    Nonclassicality detection and communication bounds in quantum networks

    Quantum information investigates the possibility of enhancing our ability to process and transmit information by directly exploiting quantum mechanical laws. When searching for improvement opportunities, one typically starts by assessing the range of outcomes attainable classically, and then investigates to what extent control over the quantum features of the system can help and what the best achievable performance is. In this thesis we provide examples of these aspects in linear optics, quantum metrology, and quantum communication. We start by providing a criterion that certifies when the outcome of a linear optical evolution cannot be explained by the classical wave-like theory of light. We do so by identifying a tight lower bound on the amount of correlation that can be detected among output intensities when classical electrodynamics is used to describe the fields. Rather than simply detecting nonclassicality, we then turn to its quantification. In particular, we consider characterising the amount of squeezing encoded on selected quantum probes by an unknown external device, without prior information on the direction of application. We identify the single-mode Gaussian probes leading to the largest average precision in noiseless and noisy conditions, and discuss the advantages arising from the use of correlated two-mode probes. Finally, we improve current bounds on the ultimate performance attainable in a quantum communication scenario. Specifically, we bound the number of maximally entangled qubits, or private bits, shared by two parties after a communication protocol over a quantum network, without restrictions on their classical communication. As in previous investigations, our approach is based on evaluating the maximum amount of entanglement that could be generated by the channels in the network, but it allows the entanglement measure to be changed on a channel-by-channel basis. Examples where this is advantageous are discussed.
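
    A toy sketch of the cut-based bounding idea: if every channel in a network carries a weight given by some entanglement-based bound (possibly a different measure per channel), then the entanglement or key two end nodes can share per network use is limited by the minimum-weight cut separating them. The graph, weights, and node names below are invented for illustration and are not results from the thesis.

```python
import networkx as nx

# Toy repeater network: edge weights stand for per-channel entanglement bounds
# (values invented; a different entanglement measure could be used per channel).
links = [("Alice", "r1", 1.8), ("Alice", "r2", 0.9),
         ("r1", "r2", 0.5), ("r1", "Bob", 1.2), ("r2", "Bob", 1.0)]

G = nx.DiGraph()
for u, v, w in links:          # channels usable in either direction
    G.add_edge(u, v, capacity=w)
    G.add_edge(v, u, capacity=w)

# With free classical communication, the entanglement (or key) the end nodes can
# share per network use is limited by the minimum cut of the per-channel bounds.
bound = nx.minimum_cut_value(G, "Alice", "Bob", capacity="capacity")
print("upper bound on end-to-end rate:", bound)
```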

    Applications of state dependent models in finance

    The State-Dependent Model (SDM) prescribes specific types of nonlinearity, with linearity as a special case, and can identify structural breaks within a time series. In a simulation study of various time series models, the SDM technique was able to capture the true type of linearity or nonlinearity in the data. This thesis makes one of the first attempts to apply the SDM to business, economic, and financial data. Its application to business cycle indicators suggests the presence of significant nonlinearity in most industrial production sectors, but the results are inconclusive as to whether the nonlinearity is symmetric or asymmetric. The SDM was also used to test Purchasing Power Parity (PPP). The study found that the real exchange rates (against the US dollar) of the Pound, Euro, and Yen are globally mean reverting with ESTAR characteristics, that the Brazilian Real follows a random walk, and that PPP holds consistently for GBP/USD and JPY/USD. Additional analysis indicated that the higher the level of uncertainty, the stronger the mean reversion in these real exchange rates, and that uncertainty events produce instantaneous shocks in real exchange rates before mean reversion takes place. Finally, the forecasting performance of the SDM models was investigated and compared with linear ARIMA, ETS, and Neural Network Autoregressive models. Using two sets of real data, the study found that the SDM models possess superior long-term forecasting ability for industrial production and Japanese tourism data.
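
    A minimal sketch of the ESTAR behaviour referred to above: the process is locally close to a random walk near equilibrium but mean reverts for large deviations. The specification and parameter values are illustrative, not those estimated in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_estar(n=1000, beta=-0.5, gamma=2.0, sigma=0.02):
    """Simulate y_t = y_{t-1} + beta*y_{t-1}*(1 - exp(-gamma*y_{t-1}**2)) + e_t.
    The transition term is ~0 near equilibrium (locally a random walk) and
    approaches 1 for large |y|, producing global mean reversion."""
    y = np.zeros(n)
    for t in range(1, n):
        transition = 1.0 - np.exp(-gamma * y[t - 1] ** 2)
        y[t] = y[t - 1] + beta * y[t - 1] * transition + sigma * rng.normal()
    return y

y = simulate_estar()
print(f"sample mean: {y.mean():.4f}, sample std: {y.std():.4f}")
```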