Software project economics: A roadmap
The objective of this paper is to consider research progress in the field of software project economics with a view to identifying important challenges and promising research directions. I argue that this is an important sub-discipline, since it underpins any cost-benefit analysis used to justify the resourcing, or otherwise, of a software project. To accomplish this I conducted a bibliometric analysis of peer-reviewed research articles to identify major areas of activity. My results indicate that the primary goal of more accurate cost prediction systems remains largely unachieved. However, there are a number of new and promising avenues of research, including: combining results from primary studies, integrating multiple predictions, and placing greater emphasis upon the human aspects of prediction tasks. I conclude that the field is likely to remain very challenging due to the people-centric nature of software engineering, since it is in essence a design task. Nevertheless, the need for good economic models will grow rather than diminish as software becomes increasingly ubiquitous.
Linear mixed models with endogenous covariates: modeling sequential treatment effects with application to a mobile health study
Mobile health is a rapidly developing field in which behavioral treatments
are delivered to individuals via wearables or smartphones to facilitate
health-related behavior change. Micro-randomized trials (MRT) are an
experimental design for developing mobile health interventions. In an MRT the
treatments are randomized numerous times for each individual over the course of the
trial. Along with assessing treatment effects, behavioral scientists aim to
understand between-person heterogeneity in the treatment effect. A natural
approach is the familiar linear mixed model. However, directly applying linear
mixed models is problematic because potential moderators of the treatment
effect are frequently endogenous---that is, may depend on prior treatment. We
discuss model interpretation and biases that arise in the absence of additional
assumptions when endogenous covariates are included in a linear mixed model. In
particular, when there are endogenous covariates, the coefficients no longer
have the customary marginal interpretation. However, these coefficients still
have a conditional-on-the-random-effect interpretation. We provide an
additional assumption that, if true, allows scientists to use standard software
to fit linear mixed models with endogenous covariates and to obtain
person-specific predictions of effects. As an illustration, we assess the
effect of activity suggestions in the HeartSteps MRT and analyze the
between-person heterogeneity in the treatment effect.
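A minimal simulation sketch of the setup described above, using my own illustrative names and parameter values (not from the paper): treatment is micro-randomized repeatedly within person, a covariate depends on the prior treatment (making it endogenous), and a linear mixed model with a random treatment slope is fit with standard software (here `statsmodels`). As the abstract notes, the fitted coefficients then carry a conditional-on-the-random-effects interpretation rather than the customary marginal one.

```python
# Hypothetical sketch: micro-randomized treatments with an endogenous
# moderator (prev, the previous treatment) in a linear mixed model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_people, n_times = 30, 20
rows = []
for i in range(n_people):
    b_i = rng.normal(0, 0.5)      # person-specific deviation in treatment effect
    prev = 0.0
    for t in range(n_times):
        a = rng.integers(0, 2)    # treatment randomized at each decision time
        # outcome depends on current treatment and the endogenous covariate
        y = 1.0 + (0.5 + b_i) * a + 0.3 * prev + rng.normal(0, 1)
        rows.append(dict(person=i, y=y, a=a, prev=prev))
        prev = float(a)           # covariate is a function of prior treatment
df = pd.DataFrame(rows)

# Random intercept and random slope for treatment; with endogenous `prev`
# the fixed-effect coefficients are conditional on the random effects.
model = smf.mixedlm("y ~ a + prev", df, groups="person", re_formula="~a")
fit = model.fit()
print(fit.params["a"])  # estimated treatment-effect coefficient
```

Person-specific effect predictions can then be formed from the estimated fixed effect plus each person's predicted random slope (`fit.random_effects`).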
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks hold substantial potential for supporting a broad
range of compelling applications in both military and civilian fields, where
users are able to enjoy high-rate, low-latency, low-cost and
reliable information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting big
data analytics, efficient parameter estimation and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), Internet of things (IoT), machine to machine networks
(M2M), and so on. This article aims to assist readers in clarifying the
motivation and methodology of the various ML algorithms, so that they can be
invoked for hitherto unexplored services and scenarios of future wireless
networks.
How much baseline correction do we need in ERP research? Extended GLM model can replace baseline correction while lifting its limits
Baseline correction plays an important role in past and current
methodological debates in ERP research (e.g. the Tanner v. Maess debate in
Journal of Neuroscience Methods), serving as a potential alternative to strong
highpass filtering. However, the very assumptions that underlie traditional
baseline correction also undermine it, making it statistically unnecessary,
even undesirable, and a source of reduced signal-to-noise ratio. Including the
baseline interval
as a predictor in a GLM-based statistical approach allows the data to determine
how much baseline correction is needed, including both full traditional and no
baseline correction as subcases, while reducing the amount of variance in the
residual error term and thus potentially increasing statistical power.
Physiological Gaussian Process Priors for the Hemodynamics in fMRI Analysis
Background: Inference from fMRI data faces the challenge that the hemodynamic
system that relates neural activity to the observed BOLD fMRI signal is
unknown.
New Method: We propose a new Bayesian model for task fMRI data with the
following features: (i) joint estimation of brain activity and the underlying
hemodynamics, (ii) the hemodynamics is modeled nonparametrically with a
Gaussian process (GP) prior guided by physiological information and (iii) the
predicted BOLD is not necessarily generated by a linear time-invariant (LTI)
system. We place a GP prior directly on the predicted BOLD response, rather
than on the hemodynamic response function as in previous literature. This
allows us to incorporate physiological information via the GP prior mean in a
flexible way, and simultaneously gives us the nonparametric flexibility of the
GP.
Results: Results on simulated data show that the proposed model is able to
discriminate between active and non-active voxels even when the GP prior
deviates from the true hemodynamics. Our model finds time-varying dynamics when
applied to real fMRI data.
Comparison with Existing Method(s): The proposed model is better at detecting
activity in simulated data than standard models, without inflating the false
positive rate. When applied to real fMRI data, our GP model in several cases
finds brain activity where previously proposed LTI models do not.
Conclusions: We have proposed a new non-linear model for the hemodynamics in
task fMRI that is able to detect active voxels and opens the opportunity to
ask new kinds of questions related to hemodynamics.