Constraining New Physics with a Positive or Negative Signal of Neutrino-less Double Beta Decay
We investigate numerically how accurately one could constrain the strengths
of different short-range contributions to neutrino-less double beta decay in
effective field theory. Depending on the outcome of near-future experiments
yielding information on the neutrino masses, the corresponding bounds or
estimates can be stronger or weaker. A particularly interesting case, resulting
in strong bounds, would be a positive signal of neutrino-less double beta decay
that is consistent with complementary information from neutrino oscillation
experiments, kinematical determinations of the neutrino mass, and measurements
of the sum of light neutrino masses from cosmological observations. The keys to
more robust bounds are improvements of the knowledge of the nuclear physics
involved and better experimental accuracy. Comment: 23 pages, 3 figures. Minor changes. Matches version published in JHE
Deep Kernels for Optimizing Locomotion Controllers
Sample efficiency is important when optimizing parameters of locomotion
controllers, since hardware experiments are time consuming and expensive.
Bayesian Optimization, a sample-efficient optimization framework, has recently
been widely applied to address this problem, but further improvements in sample
efficiency are needed for practical applicability to real-world robots and
high-dimensional controllers. To address this, prior work has proposed using
domain expertise for constructing custom distance metrics for locomotion. In
this work we show how to learn such a distance metric automatically. We use a
neural network to learn an informed distance metric from data obtained in
high-fidelity simulations. We conduct experiments on two different controllers
and robot architectures. First, we demonstrate improvement in sample efficiency
when optimizing a 5-dimensional controller on the ATRIAS robot hardware. We
then conduct simulation experiments to optimize a 16-dimensional controller for
a 7-link robot model and obtain significant improvements even when optimizing
in perturbed environments. This demonstrates that our approach is able to
enhance sample efficiency for two different controllers and is hence a fitting
candidate for further experiments on hardware in the future. Comment: (Rika Antonova and Akshara Rai contributed equally)
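The abstract above describes learning a distance metric with a neural network and using it inside a sample-efficient optimizer. A minimal sketch of that idea is a "deep kernel": an RBF kernel evaluated on learned features rather than raw controller parameters. The network shape, weights, and names here are hypothetical illustrations, not the authors' actual model.

```python
import numpy as np

def feature_map(x, W1, b1, W2, b2):
    """Hypothetical two-layer network mapping controller parameters to a
    learned feature space (weights would be trained on simulation data)."""
    h = np.tanh(x @ W1 + b1)
    return np.tanh(h @ W2 + b2)

def deep_kernel(x1, x2, net_params, lengthscale=1.0):
    """RBF kernel on learned features: similarity for Bayesian Optimization."""
    z1 = feature_map(x1, *net_params)
    z2 = feature_map(x2, *net_params)
    d2 = np.sum((z1 - z2) ** 2)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

# Toy usage with random (untrained) weights for a 5-dimensional controller.
rng = np.random.default_rng(0)
net_params = (rng.standard_normal((5, 8)), np.zeros(8),
              rng.standard_normal((8, 3)), np.zeros(3))
x_a, x_b = rng.standard_normal(5), rng.standard_normal(5)
k = deep_kernel(x_a, x_b, net_params)
assert 0.0 < k <= 1.0  # a valid RBF-type similarity
```

In a full Bayesian Optimization loop this kernel would replace the default distance on raw parameters inside the Gaussian process surrogate.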
Multi Detector Fusion of Dynamic TOA Estimation using Kalman Filter
In this paper, we propose fusion of dynamic TOA (time of arrival) from
multiple non-coherent detectors like energy detectors operating at sub-Nyquist
rate through Kalman filtering. We also show that by using multiple of these
energy detectors, we can achieve the performance of a digital matched filter
implementation in the AWGN (additive white Gaussian noise) setting. We derive
an analytical expression for the number of energy detectors needed to achieve
matched filter performance and demonstrate the validity of our analysis in
simulation. Results indicate that the number of energy detectors needed
is high at low SNRs and converges to a constant number as the SNR
increases. We also study the performance of the proposed strategy using the IEEE
802.15.4a CM1 channel model and show in simulation that two sub-Nyquist
detectors are sufficient to match the performance of a digital matched filter.
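The fusion step described above can be sketched with a standard linear Kalman filter whose state is the TOA and its drift rate, updated sequentially with each detector's estimate. All numerical values and the constant-rate dynamics model are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def kalman_fuse_toa(measurements, q=1e-4, r=0.01, dt=1.0):
    """Fuse noisy TOA estimates from several non-coherent detectors.
    measurements: array of shape (steps, n_detectors); each detector is
    modeled as observing the TOA directly with variance r."""
    x = np.array([measurements[0].mean(), 0.0])  # state: [toa, toa_rate]
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-rate dynamics
    H = np.array([[1.0, 0.0]])
    Q = q * np.eye(2)
    fused = []
    for z_row in measurements:
        x, P = F @ x, F @ P @ F.T + Q            # predict
        for z in z_row:                          # sequential update per detector
            S = H @ P @ H.T + r
            K = (P @ H.T) / S
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
        fused.append(x[0])
    return np.array(fused)

# Usage: four detectors tracking a slowly drifting TOA.
rng = np.random.default_rng(1)
true_toa = 10 + 0.05 * np.arange(50)
meas = true_toa[:, None] + rng.normal(0, 0.1, (50, 4))
est = kalman_fuse_toa(meas)
```

The fused track is smoother than any single detector's raw estimates, which is the intuition behind approaching matched-filter performance as detectors are added.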
On the Reliability of LTE Random Access: Performance Bounds for Machine-to-Machine Burst Resolution Time
Random Access Channel (RACH) has been identified as one of the major
bottlenecks for accommodating a massive number of machine-to-machine (M2M) users
in LTE networks, especially for the case of burst arrival of connection
requests. As a consequence, the burst resolution problem has sparked a large
number of works in the area, analyzing and optimizing the average performance
of RACH. However, an understanding of the probabilistic performance
limits of RACH is still missing. To address this limitation, in this paper we
investigate the reliability of RACH with access class barring (ACB). We model
RACH as a queuing system, and apply stochastic network calculus to derive
probabilistic performance bounds for burst resolution time, i.e., the worst
case time it takes to connect a burst of M2M devices to the base station. We
illustrate the accuracy of the proposed methodology and its potential
applications in performance assessment and system dimensioning. Comment: Presented at IEEE International Conference on Communications (ICC), 201
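The burst-resolution quantity analyzed above can be illustrated with a toy discrete-event simulation of contention-based random access with access class barring: in each RACH slot a backlogged device passes the barring check with some probability and picks a random preamble, and a preamble chosen by exactly one device succeeds. This is a simplified sketch for intuition, not the paper's stochastic network calculus model; the parameter values are assumptions.

```python
import random

def burst_resolution_time(n_devices, n_preambles=54, p_acb=0.5,
                          max_slots=10000, seed=0):
    """Slots needed to resolve a burst of n_devices under a simplified
    slotted RACH with access class barring (ACB)."""
    rng = random.Random(seed)
    backlog = n_devices
    for slot in range(1, max_slots + 1):
        choices = {}
        for _ in range(backlog):
            if rng.random() < p_acb:        # barring check
                p = rng.randrange(n_preambles)
                choices[p] = choices.get(p, 0) + 1
        # Only preambles picked by exactly one device succeed (no collision).
        backlog -= sum(1 for c in choices.values() if c == 1)
        if backlog == 0:
            return slot
    return max_slots

t = burst_resolution_time(100)
```

Running many such simulations over random seeds yields an empirical distribution of the burst resolution time, whose tail is what the paper's probabilistic bounds characterize analytically.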
The BFOQ Defense: Title VII's Concession to Gender Discrimination
Should the BFOQ exception still exist? Because permitting discrimination under Title VII seems fundamentally contrary to the anti-discrimination purpose of the statute, this article questions whether the BFOQ defense is consistent with the aims of Title VII or whether, in actuality, the defense undermines the Act's effectiveness by providing a loophole for employers to participate in the discriminatory practices Title VII seeks to forbid.
Closing the Book on Jusen: An Account of the Bad Loan Crisis and a New Chapter for Securitization in Japan
University business incubators (UBIs) are organizations that provide new startup companies with a support environment. However, opinions are split on UBIs' contributions to startups and the regional economy and, consequently, on how UBI performance should be assessed. According to the resource-based view (RBV), a company's competitive advantage results from the various resources the company has access to. The biotechnology industry is characterized by high research intensity, weak entrepreneurial and managerial skills of the entrepreneur, huge capital requirements, and long product development and approval processes. Previous research has shown that these characteristics imply certain challenges for new biotech ventures. In this study, these industry-specific characteristics and challenges were believed to affect what constitutes successful bio-incubation and how bio-incubators' performance should be assessed. The purpose of this report is, thus, to examine how bio-incubator performance can, and should, be assessed. An existing framework for assessing UBI performance is used as a basis for semistructured interviews with 18 incubator managers in order to examine which performance indicators are perceived as robust for assessing bio-incubator performance. The findings show that the value contributions of bio-incubators mainly include space and network provision, support services, and coaching. The perceived value contributions, in combination with the perceived challenges, imply that it is particularly appropriate to assess bio-incubator performance in terms of Job Creation, Economy Enhancement, Access to Funds, and the Incubator Offer and Internal Environment. However, Job Creation and Economy Enhancement are closely related and are therefore suggested to be merged into a single performance indicator.
Hardware and Services, on the other hand, seems less relevant for assessing bio-incubator performance, as it depends on the incubator's strategy. The study concludes that there are additional ways of assessing bio-incubator performance, such as shortened time to graduation, links with universities, and the flexibility of the incubator. Further research may include the entrepreneurs' point of view or use the approach of this study to examine incubator performance in other high-technology industries.
A Listing of Current Books
We investigate cooperative strategies for relay-aided multi-source multi-destination wireless networks with backhaul support. Each source multicasts information to all destinations using a shared relay. We study cooperative strategies based on different network coding (NC) schemes, namely finite field NC (FNC), linear NC (LNC), and lattice coding. To further exploit the backhaul connection, we also propose NC-based beamforming (NBF). We measure performance in terms of achievable rates over Gaussian channels and observe significant gains over a benchmark scheme. The benefit of using the backhaul is also clearly demonstrated in most scenarios.
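The simplest instance of the finite field NC (FNC) scheme mentioned above is XOR coding over GF(2): the relay broadcasts the XOR of the two source packets, and any destination that already holds one packet recovers the other. This toy sketch illustrates the principle only, not the paper's rate analysis.

```python
def xor_relay(pkt_x: bytes, pkt_y: bytes) -> bytes:
    """Relay output: bitwise XOR of two equal-length packets (NC over GF(2))."""
    return bytes(a ^ b for a, b in zip(pkt_x, pkt_y))

pkt_a, pkt_b = b"source-A", b"source-B"
coded = xor_relay(pkt_a, pkt_b)

# A destination holding pkt_a recovers pkt_b from the coded broadcast,
# and vice versa: XOR is its own inverse in GF(2).
assert xor_relay(coded, pkt_a) == pkt_b
assert xor_relay(coded, pkt_b) == pkt_a
```

One relay transmission thus serves both destinations at once, which is the source of the coding gain over plain forwarding.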
Sales and Title and the Proposed Code
Electric powertrain faults that occur during normal driving can affect the dynamic behaviour of the vehicle and might result in significant course deviations. The severity depends both on the characteristics of the fault itself and on how sensitively the vehicle reacts to this type of fault. In this work, a sensitivity study is conducted on the effects of vehicle design parameters, such as geometries and tyre characteristics, and fault characteristics. The vehicle specifications are based on three different parameter sets representing a small city car, a medium-sized sedan and a large passenger car. The evaluation criteria cover the main motions of the vehicle, i.e. longitudinal velocity difference, lateral offset, and side slip angle on the rear axle as an indicator of directional stability. A design of experiments approach is applied and the influence on the course deviation is analysed for each studied parameter separately and for all first-order combinations. Vehicle parameters of high sensitivity have been found for each criterion. The mass factor is highly relevant for all three motions, while the additional factors wheel base, track width, yaw inertia and vehicle velocity mainly influence the lateral and yaw motions. Changes in the tyre parameters are in general less significant than the vehicle parameters. Among the tyre parameters, the stiffness factor of the tyres on the rear axle has the greatest influence, with a stiffer tyre reducing the course deviation. The fault amplitude is an important fault parameter, together with the fault starting gradient and the number of wheels with a fault. In this study, it was found that a larger vehicle representing an SUV is more sensitive to these types of faults. To conclude, an electric powertrain fault can cause significant course deviations for all three vehicle types studied.
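The design of experiments approach mentioned above can be sketched with a two-level full factorial: each factor is varied between a low (-1) and high (+1) level, and a main effect is the difference in mean response between the two levels. The toy response function and factor names below are illustrative stand-ins for the vehicle model, not the study's actual factors or values.

```python
import itertools
import numpy as np

def main_effects(response, factor_names):
    """Main effects from a two-level full-factorial experiment.
    `response(levels)` maps a tuple of +/-1 factor levels to a scalar
    criterion (e.g. lateral offset)."""
    k = len(factor_names)
    runs = list(itertools.product([-1, 1], repeat=k))
    y = np.array([response(r) for r in runs])
    effects = {}
    for j, name in enumerate(factor_names):
        levels = np.array([r[j] for r in runs])
        # mean response at the high level minus mean at the low level
        effects[name] = y[levels == 1].mean() - y[levels == -1].mean()
    return effects

# Hypothetical response: course deviation dominated by 'mass',
# with a smaller 'track_width' contribution and a weak interaction.
toy = lambda r: 3.0 * r[0] + 0.5 * r[1] + 0.1 * r[0] * r[1]
eff = main_effects(toy, ["mass", "track_width"])
```

Ranking factors by the magnitude of their main effects is how such a study identifies, for example, mass as highly relevant for all three vehicle motions.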
Liking to be in America: Puerto Rico's Quest for Difference in the United States
The interaction between wind turbines in simple wind farm layouts is investigated with the purpose of observing the influence of the wake loss phenomenon on the energy production of downwind turbines. Following an intensive exploration stage covering wind farm aerodynamics and wake modelling, several test cases are designed to represent various wind farm configurations consisting of different numbers of wind turbines. These cases are simulated using the DNV GL WindFarmer software, which provides the opportunity to perform simulations with two different wake modelling techniques, namely Modified PARK and Eddy Viscosity. Various terrain and ambient turbulence intensity conditions are considered in the test cases. Three different turbine types with different hub heights, rotor diameters and power-thrust coefficients are also used in order to observe the effect of turbine characteristics on wake formation. Besides WindFarmer, the WAsP and MATLAB tools are used in some simulation stages to generate input data such as wind and terrain conditions or farm layout configurations, and to process the data obtained at the end of the test cases. Simulations executed in the presence of a predominant wind direction from a narrow direction bin indicate that, even though there is no significant interaction between turbines placed abreast, successive turbine rows affect each other strongly due to the wake region of upwind turbines. It is observed that the downwind spacing between turbine rows required to recover the wake deficit to a certain level changes depending on terrain and ambient turbulence intensity conditions together with turbine characteristics.
For instance, increasing the surface roughness length (or ambient turbulence intensity) of a given site while keeping all other parameters constant can provide up to a 20% (or 30%) decrease in the downstream distance required to reduce the wake loss to the 5% level in a simple tandem layout consisting of two wind turbines. Further test cases are executed with various numbers of wind turbines in different configurations to observe the effect of partial, full and multiple wake regions on total farm efficiency. The results obtained from these cases are used to compare several farm layouts and evaluate their advantages and drawbacks.
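The roughness effect described above can be illustrated with the classic Jensen (PARK-type) wake model, in which the centerline velocity deficit decays with downstream distance at a rate set by a wake-decay constant k that grows with surface roughness and ambient turbulence. The parameter values below are generic illustrations, not the thesis's simulated cases.

```python
import math

def jensen_deficit(x, ct=0.8, rotor_d=100.0, k=0.05):
    """Fractional centerline velocity deficit at downstream distance x (m)
    per the Jensen wake model: deficit = (1 - sqrt(1 - Ct)) * (r0/(r0 + k*x))^2."""
    r0 = rotor_d / 2.0
    return (1.0 - math.sqrt(1.0 - ct)) * (r0 / (r0 + k * x)) ** 2

def distance_to_deficit(target=0.05, k=0.05, ct=0.8, rotor_d=100.0):
    """Downstream distance where the centerline deficit falls to `target`
    (closed form obtained by inverting jensen_deficit)."""
    r0 = rotor_d / 2.0
    a = 1.0 - math.sqrt(1.0 - ct)
    return r0 * (math.sqrt(a / target) - 1.0) / k

# A larger wake-decay constant k (rougher terrain / higher ambient
# turbulence) shortens the distance needed to recover to a 5% deficit:
d_smooth = distance_to_deficit(k=0.05)
d_rough = distance_to_deficit(k=0.075)
```

Comparing d_rough with d_smooth reproduces, qualitatively, the abstract's observation that rougher, more turbulent sites need shorter downwind spacing to reach a given wake-loss level.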
Are We Insane? The Quest for Proportionality in the Discovery Rules of the Federal Rules of Civil Procedure
Atrial fibrillation is a common heart arrhythmia characterized by a missing or irregular contraction of the atria. The disease is a risk factor for other, more serious diseases, and the total medical costs to society are extensive. It would therefore be beneficial to improve and optimize the prevention and detection of the disease. Pulse palpation and heart auscultation can facilitate the detection of atrial fibrillation clinically, but the diagnosis is generally confirmed by an ECG examination. Today there are several algorithms that detect atrial fibrillation by analysing an ECG. A common method is to study the heart rate variability (HRV) and, through different types of statistical calculations, find episodes of atrial fibrillation that deviate from normal sinus rhythm. Two algorithms for detection of atrial fibrillation have been evaluated in Matlab. One is based on the coefficient of variation (CV) and the other uses a logistic regression model. Training and testing of the algorithms were done with data from the Physionet MIT database. Several steps of signal processing were used to remove different types of noise and artefacts before the data could be used. In testing, the CV algorithm achieved a sensitivity of 91.38%, a specificity of 93.93% and an accuracy of 92.92%, while the logistic regression algorithm achieved a sensitivity of 97.23%, a specificity of 93.79% and an accuracy of 95.39%. The logistic regression algorithm performed better and was chosen for implementation in Java, where it achieved a sensitivity of 97.31%, a specificity of 93.47% and an accuracy of 95.25%.
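The coefficient-of-variation approach described above can be sketched in a few lines: compute the CV (standard deviation over mean) of a window of RR intervals and flag windows where irregularity exceeds a threshold. The threshold and the sample RR sequences below are illustrative assumptions, not the thesis's tuned values.

```python
import statistics

def af_flag(rr_intervals, cv_threshold=0.1):
    """Flag a window of RR intervals (seconds) as possible atrial
    fibrillation when the coefficient of variation (std/mean) exceeds
    a threshold; 0.1 is an illustrative cutoff, not the thesis value."""
    mean_rr = statistics.fmean(rr_intervals)
    cv = statistics.stdev(rr_intervals) / mean_rr
    return cv > cv_threshold

regular = [0.80, 0.82, 0.79, 0.81, 0.80, 0.80]    # sinus-rhythm-like
irregular = [0.55, 1.10, 0.70, 0.95, 0.50, 1.20]  # AF-like irregularity
```

Sliding this test over successive RR windows yields the kind of episode detection that the thesis benchmarks against the logistic regression classifier.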