
    Darbuotojo sveikatai padarytos žalos atlyginimo sistema (The system of compensation for damage done to an employee's health).

    Compensation for damage done to health is not simply compensational in nature; it is also an important guarantee of the social rights of the injured person. The peculiarities of the legal regulation of compensation for damage done to an employee's health are determined by the nature of that damage. Damage done to health may be compensated in several ways, regulated by norms of both public and private law, and such methods of compensation are usually set out in separate legal acts. In the Republic of Lithuania, the compensation of such damage is regulated by the Law on Social Insurance of Occupational Accidents and Occupational Diseases, the Interim Law on Compensation of Damage due to Occupational Accidents and Occupational Diseases, the Labour Code and the Civil Code. The damage done to the health of the employee may be compensated through public social insurance against occupational accidents and occupational diseases. However, such social insurance does not cover all of the damage done to the employee; it reimburses only the lost earnings. Compensation of damage in the form of social insurance also differs from the material liability of the guilty person in the conditions under which such damage is compensated. For these reasons, by differentiating the legal regulation of damage done to the employee's health, it is possible to deviate from the compensational nature of the redress of damage and to infringe the principle of complete compensation of damage. The article analyses problems of the interaction of the various methods of compensating such damage and proposes possible solutions. By way of systematic analysis, a reasoned conclusion is drawn that the legal provisions, enacted in different legal acts and providing for different ways of compensating damage done to the health of the employee, form a common (general) system of compensation for such damage. This system requires the different methods of compensation to be applied subsidiarily: the question of compensation should first be resolved under the norms of public law, through social insurance against occupational accidents and occupational diseases, and only then may the part of the damage left uncompensated be redressed under the rules of the employer's material liability or of civil liability.

    Feature selection for modular GA-based classification

    Genetic algorithms (GAs) have been used as conventional methods for classifiers to adaptively evolve solutions for classification problems. Feature selection plays an important role in finding relevant features for classification. In this paper, feature selection is explored with modular GA-based classification. A new feature selection technique, the Relative Importance Factor (RIF), is proposed to find less relevant features in the input domain of each class module. By removing these features, the aim is to reduce both the classification error and the dimensionality of classification problems. Benchmark classification data sets are used to evaluate the proposed approach. The experimental results show that RIF can be used to find less relevant features and helps achieve a lower classification error with a reduced feature-space dimension.
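
    The RIF measure itself is defined in the paper; as a rough, hedged illustration of the general idea only (score the features seen by one class module, drop the least relevant ones, and compare classification error before and after), a sketch along the following lines could be used. Mutual information as the relevance score, the 20% removal cut-off and the scikit-learn classifier standing in for a GA-evolved module are all assumptions of this sketch, not the paper's method.

```python
# Hedged sketch: drop the least relevant features of one class module and
# compare classification error before and after. Mutual information stands in
# for the paper's Relative Importance Factor (RIF); this is NOT the RIF itself.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier  # stand-in for a GA-evolved module

X, y = load_wine(return_X_y=True)

# One-vs-rest target for a single "class module" (class 0 vs the rest).
y_module = (y == 0).astype(int)

# Score feature relevance for this module and keep the top 80% (assumed cut-off).
relevance = mutual_info_classif(X, y_module, random_state=0)
keep = relevance >= np.quantile(relevance, 0.2)

clf = MLPClassifier(max_iter=2000, random_state=0)
err_full = 1 - cross_val_score(clf, X, y_module, cv=5).mean()
err_reduced = 1 - cross_val_score(clf, X[:, keep], y_module, cv=5).mean()
print(f"error with all features:     {err_full:.3f}")
print(f"error with reduced features: {err_reduced:.3f}")
```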

    Selecting the most suitable classification algorithm for supporting assistive technology adoption for people with dementia: a multicriteria framework

    The number of people with dementia (PwD) is increasing dramatically. PwD exhibit impairments of reasoning, memory, and thought that require some form of self‐management intervention to support the completion of everyday activities while maintaining a level of independence. To address this need, efforts have been directed to the development of assistive technology solutions, which may provide an opportunity to alleviate the burden faced by PwD and their carers. Nevertheless, uptake of such solutions has been limited. It is therefore necessary to use classifiers to discriminate between adopters and nonadopters of these technologies in order to avoid cost overruns and potential negative effects on quality of life. As multiple classification algorithms have been developed, choosing the most suitable classifier has become a critical step in technology adoption. To select the most appropriate classifier, a set of criteria from various domains needs to be taken into account by decision makers. In addition, it is crucial to define the most appropriate multicriteria decision‐making approach for the modelling of technology adoption. Considering the above‐mentioned aspects, this paper presents the integration of a five‐phase methodology based on the Fuzzy Analytic Hierarchy Process and the Technique for Order of Preference by Similarity to Ideal Solution to determine the most suitable classifier for supporting assistive technology adoption studies. The Fuzzy Analytic Hierarchy Process is used to determine the relative weights of criteria and subcriteria under uncertainty, and the Technique for Order of Preference by Similarity to Ideal Solution is applied to rank the classifier alternatives. A case study considering a mobile‐based self‐management and reminding solution for PwD is described to validate the proposed approach. The results revealed that the best classifier was k‐nearest‐neighbour, with a closeness coefficient of 0.804, and that the most important criterion when selecting classifiers is scalability. The paper also discusses the strengths and weaknesses of each algorithm that should be addressed in future research.
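
    As context for the closeness coefficient reported above, the TOPSIS ranking step can be sketched as follows. The decision matrix, the criteria and the weights below are invented for illustration; in the paper the weights come from the Fuzzy AHP stage, which is not reproduced here.

```python
# Hedged sketch of the TOPSIS ranking step only (classical, non-fuzzy TOPSIS).
# Matrix values, criteria and weights are made up for illustration; the paper
# derives the weights with the Fuzzy Analytic Hierarchy Process.
import numpy as np

# Rows: candidate classifiers; columns: criteria (e.g. accuracy, scalability, speed).
alternatives = ["kNN", "SVM", "DecisionTree"]
D = np.array([[0.86, 0.90, 0.70],
              [0.88, 0.60, 0.55],
              [0.82, 0.75, 0.90]], dtype=float)
weights = np.array([0.5, 0.3, 0.2])        # assumed; not the paper's Fuzzy AHP weights
benefit = np.array([True, True, True])     # all criteria treated as "higher is better"

R = D / np.linalg.norm(D, axis=0)          # vector-normalise each criterion column
V = R * weights                            # weighted normalised decision matrix

ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_plus = np.linalg.norm(V - ideal, axis=1)        # distance to the ideal solution
d_minus = np.linalg.norm(V - anti_ideal, axis=1)  # distance to the anti-ideal solution
closeness = d_minus / (d_plus + d_minus)          # closeness coefficient in [0, 1]

for name, cc in sorted(zip(alternatives, closeness), key=lambda t: -t[1]):
    print(f"{name}: {cc:.3f}")
```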

    Resampling methods for parameter-free and robust feature selection with mutual information

    Combining the mutual information criterion with a forward feature selection strategy offers a good trade-off between the optimality of the selected feature subset and computation time. However, it requires setting the parameter(s) of the mutual information estimator and determining when to halt the forward procedure. These two choices are difficult to make because, as the dimensionality of the subset increases, the estimation of the mutual information becomes less and less reliable. This paper proposes to use resampling methods, namely K-fold cross-validation and the permutation test, to address both issues. The resampling methods bring information about the variance of the estimator, which can then be used to set the parameter automatically and to calculate a threshold at which to stop the forward procedure. The procedure is illustrated on a synthetic dataset as well as on real-world examples.
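
    As a hedged illustration of how a permutation test can supply a stopping threshold for mutual-information-based selection, the sketch below ranks features by their estimated MI with the labels and keeps only those above the 95th percentile of MI values obtained with permuted labels. The nearest-neighbour MI estimator from scikit-learn, the number of permutations and the percentile are assumptions; the paper's estimator, its K-fold parameter tuning and its exact test statistic may differ.

```python
# Hedged sketch: MI-ranked forward pass with a permutation-test stopping rule.
# The threshold is the 95th percentile of MI values under the "no dependence"
# null obtained by shuffling the labels.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)

# Null distribution: MI of every feature with shuffled labels, repeated 20 times.
null_mi = np.concatenate([
    mutual_info_classif(X, rng.permutation(y), random_state=0)
    for _ in range(20)
])
threshold = np.percentile(null_mi, 95)

# Greedy pass: add features in decreasing order of MI until the MI drops
# below the permutation threshold.
mi = mutual_info_classif(X, y, random_state=0)
order = np.argsort(mi)[::-1]
selected = [int(j) for j in order if mi[j] > threshold]
print(f"threshold = {threshold:.4f}; selected {len(selected)} of {X.shape[1]} features")
```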

    An Integrated Approach to Analysis of Phytoplankton Images


    Principal Component Analysis Coupled with Artificial Neural Networks—A Combined Technique Classifying Small Molecular Structures Using a Concatenated Spectral Database

    In this paper we present several expert systems that predict the class identity of the modeled compounds, based on a preprocessed spectral database. The expert systems were built using Artificial Neural Networks (ANN) and are designed to predict whether an unknown compound has the toxicological activity of amphetamines (stimulant and hallucinogen) or whether it is a nonamphetamine. In attempts to circumvent the laws controlling drugs of abuse, new chemical structures are very frequently introduced on the black market. They are obtained by slightly modifying the controlled molecular structures, adding or changing substituents at various positions on the banned molecules. As a result, no substance similar to those forming a prohibited class may be used nowadays, even if it has not been specifically listed. Therefore, reliable, fast and accessible systems capable of modeling and then identifying similarities at the molecular level are highly needed for epidemiological, clinical, and forensic purposes. In order to obtain the expert systems, we have preprocessed a concatenated spectral database representing the GC-FTIR (gas chromatography-Fourier transform infrared spectrometry) and GC-MS (gas chromatography-mass spectrometry) spectra of 103 forensic compounds. The database was used as input for a Principal Component Analysis (PCA). The scores of the forensic compounds on the main principal components (PCs) were then used as inputs for the ANN systems. We have built eight PC-ANN systems (principal component analysis coupled with an artificial neural network) with different numbers of input variables: 15, 16, 17, 18, 19, 20, 21 and 22 PCs. The best expert system was found to be the ANN network built with 18 PCs, which account for an explained variance of 77%. This expert system has the best sensitivity (a rate of classification C = 100% and a rate of true positives TP = 100%), as well as a good selectivity (a rate of true negatives TN = 92.77%). A comparative analysis of the validation results of all expert systems is presented, and the input variables with the highest discrimination power are discussed.
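
    A minimal, hedged sketch of the PC-ANN idea (project the data onto the first 18 principal components, then train a neural network on the scores) is given below. The digits dataset, the scaling step and the network size are assumptions for illustration; the paper's inputs were concatenated GC-FTIR and GC-MS spectra of 103 forensic compounds.

```python
# Hedged sketch of PC-ANN: PCA scores on the first 18 PCs feed a small MLP.
# Dataset, scaler and network size are stand-ins, not the paper's setup.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=18),                  # scores on the first 18 PCs
    MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)

pca = model.named_steps["pca"]
print(f"explained variance of 18 PCs: {pca.explained_variance_ratio_.sum():.2%}")
print(f"test accuracy: {model.score(X_te, y_te):.3f}")
```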

    Automated machine learning for studying the trade-off between predictive accuracy and interpretability

    Automated Machine Learning (Auto-ML) methods search for the best classification algorithm and its best hyper-parameter settings for each input dataset. Auto-ML methods normally maximize only predictive accuracy, ignoring the classification model’s interpretability – an important criterion in many applications. Hence, we propose a novel approach, based on Auto-ML, to investigate the trade-off between the predictive accuracy and the interpretability of classification-model representations. The experiments used the Auto-WEKA tool to investigate this trade-off. We distinguish between white box (interpretable) model representations and two other types of model representations: black box (non-interpretable) and grey box (partly interpretable). We consider as white box the models based on the following six interpretable knowledge representations: decision trees, If-Then classification rules, decision tables, Bayesian network classifiers, nearest neighbours and logistic regression. The experiments used 16 datasets and two runtime limits per Auto-WEKA run: 5 h and 20 h. Overall, the best white box model was more accurate than the best non-white box model in 4 of the 16 datasets in the 5-hour runs, and in 7 of the 16 datasets in the 20-hour runs. However, the predictive accuracy differences between the best white box and best non-white box models were often very small. If we accept a predictive accuracy loss of 1% in order to benefit from the interpretability of a white box model representation, we would prefer the best white box model in 8 of the 16 datasets in the 5-hour runs, and in 10 of the 16 datasets in the 20-hour runs.
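
    The 1% acceptance rule can be illustrated with a small, hedged sketch: train one white box model and one non-white box model and prefer the white box whenever its cross-validated accuracy is within one percentage point of the best. The dataset and the two fixed algorithms are assumptions; the paper's experiments search over many algorithms and hyper-parameters with Auto-WEKA.

```python
# Hedged sketch of the "accept a 1% accuracy loss for interpretability" rule.
# This does not reproduce Auto-WEKA's search; it only illustrates the decision.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

acc_white = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5).mean()
acc_black = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()

tolerance = 0.01  # accept up to a 1 percentage point accuracy loss
if acc_white >= acc_black - tolerance:
    chosen = "white box (decision tree)"
else:
    chosen = "non-white box (random forest)"
print(f"white box: {acc_white:.3f}, non-white box: {acc_black:.3f} -> prefer {chosen}")
```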

    An experimental study of the intrinsic stability of random forest variable importance measures

    BACKGROUND: The stability of Variable Importance Measures (VIMs) based on random forests has recently received increased attention. Despite the extensive attention paid to the traditional stability under data perturbations or parameter variations, few studies include the influences coming from the intrinsic randomness in generating VIMs, i.e. bagging, randomization and permutation. To address these influences, in this paper we introduce a new concept of intrinsic stability of VIMs, which is defined as the self-consistency among feature rankings in repeated runs of VIMs without data perturbations and parameter variations. Two widely used VIMs, i.e., Mean Decrease Accuracy (MDA) and Mean Decrease Gini (MDG), are comprehensively investigated. The motivation of this study is two-fold. First, we empirically verify the prevalence of intrinsic stability of VIMs over many real-world datasets to highlight that the instability of VIMs does not originate exclusively from data perturbations or parameter variations, but also stems from the intrinsic randomness of VIMs. Second, through Spearman and Pearson tests we comprehensively investigate how different factors influence the intrinsic stability. RESULTS: The experiments are carried out on 19 benchmark datasets with diverse characteristics, including 10 high-dimensional and small-sample gene expression datasets. Experimental results demonstrate the prevalence of intrinsic stability of VIMs. Spearman and Pearson tests on the correlations between intrinsic stability and different factors show that #feature (number of features) and #sample (sample size) have a coupling effect on the intrinsic stability. The synthetic indicator, #feature/#sample, shows both a negative monotonic correlation and a negative linear correlation with the intrinsic stability, while OOB accuracy has monotonic correlations with intrinsic stability. This indicates that high-dimensional, small-sample and high-complexity datasets may suffer more from intrinsic instability of VIMs. Furthermore, with respect to the parameter settings of random forest, a large number of trees is preferred. No significant correlations can be seen between intrinsic stability and other factors. Finally, the magnitude of intrinsic stability is always smaller than that of traditional stability. CONCLUSION: First, the prevalence of intrinsic stability of VIMs demonstrates that the instability of VIMs not only comes from data perturbations or parameter variations, but also stems from the intrinsic randomness of VIMs. This finding gives a better understanding of VIM stability, and may help reduce the instability of VIMs. Second, by investigating the potential factors of intrinsic stability, users would be more aware of the risks and hence more careful when using VIMs, especially on high-dimensional, small-sample and high-complexity datasets.
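
    A hedged sketch of what "intrinsic stability" means in practice is given below: the same random forest is refit on the same, unperturbed data with different random seeds only, and the agreement between the resulting Mean Decrease Gini rankings is measured with Spearman correlation. The dataset, the number of repeated runs and the use of mean pairwise Spearman correlation as the stability score are assumptions; the paper's exact stability index may be defined differently.

```python
# Hedged sketch: intrinsic stability as the agreement of MDG feature rankings
# across repeated runs with identical data and parameters (only the seed varies).
import itertools
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

importances = []
for seed in range(10):                           # repeated runs, no data perturbation
    rf = RandomForestClassifier(n_estimators=200, random_state=seed)
    rf.fit(X, y)
    importances.append(rf.feature_importances_)  # Mean Decrease Gini (MDG)

# Pairwise Spearman rank correlation between the importance vectors of all runs.
corrs = [spearmanr(a, b)[0] for a, b in itertools.combinations(importances, 2)]
print(f"mean pairwise Spearman correlation over {len(corrs)} pairs: {np.mean(corrs):.3f}")
```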

    Explaining Support Vector Machines: A Color Based Nomogram.

    PROBLEM SETTING: Support vector machines (SVMs) are very popular tools for classification, regression and other problems. Due to the large choice of kernels they can be applied with, a large variety of data can be analysed using these tools. Machine learning owes much of its popularity to the good performance of the resulting models. However, interpreting the models is far from obvious, especially when non-linear kernels are used. Hence, the methods tend to be used as black boxes. As a consequence, the use of SVMs is less supported in areas where interpretability is important and where people are held responsible for the decisions made by models. OBJECTIVE: In this work, we investigate whether SVMs using linear, polynomial and RBF kernels can be explained such that interpretations for model-based decisions can be provided. We further indicate when SVMs can be explained and in which situations interpretation of SVMs is (hitherto) not possible. Here, explainability is defined as the ability to produce the final decision as a sum of contributions, each depending on a single input variable or on at most two input variables. RESULTS: Our experiments on simulated and real-life data show that the explainability of an SVM depends on the chosen parameter values (degree of the polynomial kernel, width of the RBF kernel and regularization constant). When several combinations of parameter values yield the same cross-validation performance, combinations with a lower polynomial degree or a larger kernel width have a higher chance of being explainable. CONCLUSIONS: This work summarizes SVM classifiers obtained with linear, polynomial and RBF kernels in a single plot. Linear and polynomial kernels up to the second degree are represented exactly. For other kernels an indication of the reliability of the approximation is presented. The complete methodology is available as an R package, and two apps and a movie are provided to illustrate the possibilities offered by the method.
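
    For the linear-kernel case the "sum of contributions" view is exact and easy to sketch: the decision value of a linear SVM is the intercept plus a sum of per-feature terms w_i * x_i, which is the building block such a nomogram visualises. The sketch below shows only this decomposition on an assumed dataset; the paper's colour-based plot, its R package and the approximations needed for polynomial and RBF kernels are not reproduced.

```python
# Hedged sketch: per-feature contributions of a linear-kernel SVM.
# decision_function(x) == intercept + sum_i w_i * x_i, so each w_i * x_i is
# the contribution of feature i to the decision for sample x.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

svm = SVC(kernel="linear", C=1.0).fit(X, y)
w, b = svm.coef_.ravel(), svm.intercept_[0]

x = X[0]                        # one sample to explain
contrib = w * x                 # per-feature contributions
print("decision function value:  ", svm.decision_function(x.reshape(1, -1))[0])
print("intercept + contributions:", b + contrib.sum())
print("three largest |contributions| at features:",
      np.argsort(np.abs(contrib))[::-1][:3])
```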

    Towards emotion recognition for virtual environments: an evaluation of EEG features on a benchmark dataset

    One of the challenges in virtual environments is the difficulty users have in interacting with these increasingly complex systems. Ultimately, endowing machines with the ability to perceive users' emotions will enable a more intuitive and reliable interaction. Consequently, using the electroencephalogram (EEG) as a bio-signal sensor, the affective state of a user can be modelled and subsequently utilised in order to achieve a system that can recognise and react to the user's emotions. This paper investigates features extracted from electroencephalogram signals for the purpose of affective state modelling based on Russell's Circumplex Model. Investigations are presented that aim to provide the foundation for future work in modelling user affect to enhance the interaction experience in virtual environments. The DEAP dataset was used within this work, along with a Support Vector Machine and a Random Forest, which yielded reasonable classification accuracies for Valence and Arousal using feature vectors based on statistical measurements, band power in the EEG frequency bands, and Higher Order Crossings of the EEG signal.
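
    One of the feature types mentioned above, band power, can be sketched as follows: estimate the power spectral density of an EEG segment and average it within conventional frequency bands, then feed the resulting vectors to an SVM. The synthetic signals, the theta/alpha/beta band edges and the 128 Hz sampling rate are assumptions of this sketch; with DEAP one would use the recorded EEG channels and the valence/arousal labels instead.

```python
# Hedged sketch: band-power features from Welch PSD estimates, classified with
# an RBF SVM. Signals and labels are synthetic stand-ins for DEAP trials.
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

fs = 128                                                       # assumed sampling rate (Hz)
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # assumed band edges

def band_powers(signal):
    """Mean PSD within each frequency band for one 1-D EEG segment."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands.values()]

rng = np.random.default_rng(0)
trials = rng.standard_normal((200, fs * 8))  # 200 synthetic 8-second "trials"
labels = rng.integers(0, 2, size=200)        # e.g. low/high valence

X = np.array([band_powers(t) for t in trials])
acc = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean()
print(f"cross-validated accuracy on synthetic data: {acc:.3f}")
```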