Group Life Insurance: Its Legal Aspects
Background: Female representation on company boards is an ongoing debate, both nationally and internationally. In November 2013 the European Parliament voted in favour of a proposal that company boards should consist of at least 40 percent women by 2020. Jens Spendrup, chairman of the Confederation of Swedish Enterprise, stated in an interview on Swedish Radio in February 2014 that there are not enough qualified women to recruit to company boards, a statement that attracted considerable media attention. Listed companies in Sweden have only 22 percent women on their boards, and given Jens Spendrup's statement they should have difficulty reaching 40 percent by 2020. Nevertheless, the wholly state-owned companies have shown that it is possible, with a female representation of 50 percent. The question, then, is what the private companies are doing wrong. Aim: The study aims to investigate the recruitment process within listed companies and wholly state-owned companies in Sweden to see whether it affects the representation of women on corporate boards. The study intends to explain why female representation is so low in the private sector relative to the state sector. Methodology: The study is qualitative in nature; empirical data were primarily collected through interviews with representatives who have insight into the recruitment process in each sector. Theory and empirical data were collected alternately, which implies an iterative approach. Conclusion: We have distinguished organizational differences in the recruitment process that are crucial for female representation. Time and resources have been identified as key parameters, and age as well as experience affects the selection of candidates. We also found that normative regulation does not work in the private sector, and there is therefore a need for mandatory regulation.
Negation detection in Swedish clinical text: An adaption of NegEx to Swedish
Background: Most methods for negation detection in clinical text have been developed for English text, and there is a need for evaluating the feasibility of adapting these methods to other languages. A Swedish adaptation of the English rule-based negation detection system NegEx, which detects negations through the use of trigger phrases, was therefore evaluated. Results: The Swedish adaptation of NegEx showed a precision of 75.2% and a recall of 81.9% when evaluated on 558 manually classified sentences containing negation triggers, and a negative predictive value of 96.5% when evaluated on 342 sentences not containing negation triggers. Conclusions: The precision was significantly lower for the Swedish adaptation than published results for the English version, but since many negated propositions were identified through a limited set of trigger phrases, it could nevertheless be concluded that the same trigger-phrase approach is possible in a Swedish context, even though it needs to be further developed. Availability: The triggers used for the evaluation of the Swedish adaptation of NegEx are available at http://people.dsv.su.se/~mariask/resources/triggers.txt and can be used together with the original NegEx program for negation detection in Swedish clinical text.
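The trigger-phrase approach used by NegEx can be illustrated with a minimal sketch. This is not the actual NegEx implementation: the trigger words and the five-token scope window below are placeholder assumptions (the real Swedish triggers are in the linked file).

```python
# Minimal sketch of NegEx-style trigger-phrase negation detection.
# TRIGGERS and the 5-token scope window are illustrative assumptions,
# not the actual NegEx configuration.

TRIGGERS = {"no", "not", "without", "denies"}  # placeholder triggers
SCOPE = 5  # number of tokens after a trigger considered negated

def negated_terms(sentence, targets):
    """Return the subset of `targets` that falls within the scope
    of a negation trigger in `sentence`."""
    tokens = sentence.lower().split()
    negated = set()
    for i, tok in enumerate(tokens):
        if tok in TRIGGERS:
            for t in tokens[i + 1 : i + 1 + SCOPE]:
                if t in targets:
                    negated.add(t)
    return negated
```

For example, `negated_terms("patient denies fever and chills", {"fever", "cough"})` returns `{"fever"}`: only findings inside a trigger's window are marked negated.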
The Impact of Part-of-Speech Filtering on Generation of a Swedish-Japanese Dictionary Using English as Pivot Language
Proceedings of the 18th Nordic Conference of Computational Linguistics NODALIDA 2011.
Editors: Bolette Sandford Pedersen, Gunta NeĆĄpore and Inguna SkadiƆa.
NEALT Proceedings Series, Vol. 11 (2011), 98-105.
© 2011 The editors and contributors. Published by the Northern European Association for Language Technology (NEALT), http://omilia.uio.no/nealt.
Electronically published at Tartu University Library (Estonia), http://hdl.handle.net/10062/16955.
Digital pulse-shape discrimination of fast neutrons and gamma rays
Discrimination of the detection of fast neutrons and gamma rays in a liquid
scintillator detector has been investigated using digital pulse-processing
techniques. An experimental setup with a 252Cf source, a BC-501 liquid
scintillator detector, and a BaF2 detector was used to collect waveforms with a
100 Ms/s, 14 bit sampling ADC. Three identical ADCs were combined to increase
the sampling frequency to 300 Ms/s. Four different digital pulse-shape analysis
algorithms were developed and compared to each other and to data obtained with
an analogue neutron-gamma discrimination unit. Two of the digital algorithms
were based on the charge comparison method, while the analogue unit and the
other two digital algorithms were based on the zero-crossover method. Two
different figure-of-merit parameters, which quantify the neutron-gamma
discrimination properties, were evaluated for all four digital algorithms and
for the analogue data set. All of the digital algorithms gave similar or better
figure-of-merit values than what was obtained with the analogue setup. A
detailed study of the discrimination properties as a function of sampling
frequency and bit resolution of the ADC was performed. It was shown that a
sampling ADC with a bit resolution of 12 bits and a sampling frequency of 100
Ms/s is adequate for achieving an optimal neutron-gamma discrimination for
pulses having a dynamic range for deposited neutron energies of 0.3-12 MeV. An
investigation of the influence of the sampling frequency on the time resolution
was made. An FWHM of 1.7 ns was obtained at 100 Ms/s.
Comment: 26 pages, 14 figures, submitted to Nuclear Instruments and Methods in Physics Research
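The charge-comparison method and the figure-of-merit can be sketched in a few lines. The gate windows, the synthetic test pulses, and the Gaussian FWHM approximation below are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

# Sketch of the charge-comparison method: the fraction of charge in a
# delayed ("slow") gate relative to the total charge separates neutrons
# (which produce more slow scintillation light) from gamma rays.
# Gate positions are illustrative assumptions.

def psd_ratio(pulse, total_gate=(0, 60), slow_gate=(15, 60)):
    """Slow-to-total charge ratio of a sampled pulse (numpy array)."""
    total = pulse[total_gate[0]:total_gate[1]].sum()
    slow = pulse[slow_gate[0]:slow_gate[1]].sum()
    return slow / total

def figure_of_merit(ratios_n, ratios_g):
    """FOM = peak separation / sum of FWHMs, assuming roughly Gaussian
    ratio distributions (FWHM ~ 2.355 * sigma)."""
    sep = abs(np.mean(ratios_n) - np.mean(ratios_g))
    fwhm = 2.355 * (np.std(ratios_n) + np.std(ratios_g))
    return sep / fwhm
```

With a synthetic neutron pulse carrying a slower decay component, `psd_ratio` comes out larger than for a purely fast gamma-like pulse, which is the separation the FOM quantifies.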
Pre-stressed Geodesic Gridshell
Timber gridshells can cover large spaces with minimal material. However, with long-term creep deformations, small cross sections and high elasticity, there are potential stability issues. Historically, pre-stressing systems have been shown to prevent instability modes in unstable structures. In this project we investigate the benefits of pre-stressing a geodesic elastic bending-active gridshell serving as a lecture pavilion. Digital analysis and physical tests are interactively combined to study and implement various modelling and analysis techniques, pre-stress configurations and connection details. It is found that an internal pre-stressing system can significantly increase the stability of the structure in terms of eigenfrequencies.
Detection of stance and sentiment modifiers in political blogs
The automatic detection of seven types of modifiers was studied: Certainty, Uncertainty, Hypotheticality, Prediction, Recommendation, Concession/Contrast and Source. A classifier aimed at detecting local cue words that signal the categories was the most successful method for five of the categories. For Prediction and Hypotheticality, however, better results were obtained with a classifier trained on tokens and bigrams present in the entire sentence. Unsupervised cluster features were shown to be useful for the categories Source and Uncertainty when a subset of the available training data was used. However, when all 2,095 sentences that had been actively selected and manually annotated were used as training data, the cluster features had a very limited effect. Some of the classification errors made by the models could be avoided by extending the training data set, while other features and feature representations, as well as the incorporation of pragmatic knowledge, would be required for other error types.
Investigating Rumor News Using Agreement-Aware Search
Recent years have witnessed a widespread increase of rumor news generated by
humans and machines. Therefore, tools for investigating rumor news have become
an urgent necessity. One useful function of such tools is to see ways a
specific topic or event is represented by presenting different points of view
from multiple sources.
In this paper, we propose Maester, a novel agreement-aware search framework
for investigating rumor news. Given an investigative question, Maester
retrieves articles related to that question and assigns and displays top
articles from the agree, disagree, and discuss categories. Splitting the
results into these three categories gives the user a holistic view of the
investigative question. We build Maester on the following two key
observations: (1) relatedness can commonly be determined by keywords and
entities occurring in both questions and articles, and (2) the level of
agreement between the investigative question and the related news article can
often be decided by a few key sentences. Accordingly, we use gradient boosting
tree models with keyword/entity matching features for relatedness detection,
and leverage a recurrent neural network to infer the level of agreement. Our
experiments on the Fake News Challenge (FNC) dataset demonstrate up to an order
of magnitude improvement of Maester over the original FNC winning solution, for
agreement-aware search.
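The relatedness signal in observation (1) can be sketched as a single keyword-overlap feature of the kind a gradient boosting model could consume. The tokenization and stopword list below are illustrative assumptions, not Maester's actual feature set.

```python
# Sketch of a keyword-overlap relatedness feature. In a system like
# Maester this score would be one of several keyword/entity matching
# features fed to a gradient boosting tree model; the stopword list and
# whitespace tokenization here are illustrative assumptions.

STOPWORDS = {"the", "a", "an", "of", "in", "is", "are", "to", "and"}

def keyword_overlap(question, article):
    """Jaccard overlap of content words between a question and an article."""
    q = {w for w in question.lower().split() if w not in STOPWORDS}
    d = {w for w in article.lower().split() if w not in STOPWORDS}
    if not q or not d:
        return 0.0
    return len(q & d) / len(q | d)
```

A question and an article about the same event share most content words and score high; an unrelated article scores near zero, which is the separation the relatedness classifier exploits.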
LNCS
We present two algorithmic approaches for synthesizing linear hybrid automata from experimental data. Unlike previous approaches, our algorithms work without a template and generate an automaton with nondeterministic guards and invariants, and with an arbitrary number and topology of modes. They thus construct a succinct model from the data and provide formal guarantees. In particular, (1) the generated automaton can reproduce the data up to a specified tolerance and (2) the automaton is tight, given the first guarantee. Our first approach encodes the synthesis problem as a logical formula in the theory of linear arithmetic, which can then be solved by an SMT solver. This approach minimizes the number of modes in the resulting model but is only feasible for limited data sets. To address scalability, we propose a second approach that does not enforce finding a minimal model. The algorithm constructs an initial automaton and then iteratively extends the automaton based on processing new data. The algorithm is therefore well suited for online and synthesis-in-the-loop applications. The core of the algorithm is a membership query that checks whether, within the specified tolerance, a given data set can result from the execution of a given automaton. We solve this membership problem for linear hybrid automata by repeated reachability computations. We demonstrate the effectiveness of the algorithm on synthetic data sets and on cardiac-cell measurements.
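The membership query at the core of the second algorithm can be illustrated in a much-simplified form. The real check uses reachability computations over hybrid automata with nondeterministic guards; the sketch below only tests whether a sampled trace stays within tolerance of one fixed piecewise-linear trajectory, and the segment representation is an assumption of this illustration.

```python
# Toy membership check: does every sample (t, x) stay within eps of a
# piecewise-linear trajectory? Segments are (t_start, slope, intercept),
# sorted by t_start, each valid until the next segment starts. This only
# illustrates the tolerance test; the paper's membership query reasons
# over all executions of a hybrid automaton via reachability.

def within_tolerance(samples, segments, eps):
    for t, x in samples:
        seg = None
        for t0, a, b in segments:  # last segment with t_start <= t is active
            if t >= t0:
                seg = (a, b)
        if seg is None:
            return False  # sample before the trajectory starts
        a, b = seg
        if abs(x - (a * t + b)) > eps:
            return False
    return True
```

For instance, a trace following x = t up to t = 2 and then holding at x = 2 is accepted within eps = 0.1 as long as no sample deviates by more than the tolerance.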
Negation Scope Delimitation in Clinical Text Using Three Approaches: NegEx, PyConTextNLP and SynNeg
Negation detection is a key component in clinical information extraction systems, as health record text contains reasoning in which the physician excludes different diagnoses by negating them. Many systems for negation detection rely on negation cues (e.g. not), but only a few studies have investigated whether the syntactic structure of the sentences can be used for determining the scope of these cues. In this paper we have compared three different systems for negation detection in Swedish clinical text (NegEx, PyConTextNLP and SynNeg), which take different approaches to determining the scope of negation cues. NegEx uses the distance between the cue and the disease, PyConTextNLP relies on a list of conjunctions limiting the scope of a cue, and in SynNeg the boundaries of the sentence units, provided by a syntactic parser, limit the scope of the cues. The three systems produced similar results, detecting negation with an F-score of around 80%, but using a parser had advantages when handling longer, complex sentences or short sentences with contradictory statements.
Annotating speaker stance in discourse: the Brexit Blog Corpus
The aim of this study is to explore the possibility of identifying speaker stance in discourse, to provide an analytical resource for it, and to evaluate the level of agreement across speakers. We also explore to what extent language users agree about what kinds of stances are expressed in natural language use, or whether their interpretations diverge. To perform this task, a comprehensive cognitive-functional framework of ten stance categories was developed based on previous work on speaker stance in the literature. A corpus of opinionated texts was compiled, the Brexit Blog Corpus (BBC). An analytical protocol and interface (Active Learning and Visual Analytics) for the annotations was set up, and the data were independently annotated by two annotators. The annotation procedure, the annotation agreements and the co-occurrence of more than one stance in the utterances are described and discussed. The careful, analytical annotation process returned satisfactory inter- and intra-annotation agreement scores, resulting in a gold standard corpus, the final version of the BBC.
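Chance-corrected inter-annotator agreement of the kind reported above is commonly measured with Cohen's kappa. The abstract does not state which statistic the study used, so the following minimal sketch is an illustrative assumption rather than the study's actual computation.

```python
from collections import Counter

# Minimal Cohen's kappa sketch for two annotators' label sequences:
# observed agreement corrected by the agreement expected from each
# annotator's label distribution alone.

def cohens_kappa(a, b):
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[l] * cb[l] for l in set(a) | set(b)) / (n * n)
    if expected == 1:
        return 1.0  # both annotators used a single identical label
    return (observed - expected) / (1 - expected)
```

For two annotators who agree on 3 of 4 items with label distributions {yes: 2, no: 2} and {yes: 3, no: 1}, the observed agreement is 0.75, the expected agreement is 0.5, and kappa is 0.5.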