1,878 research outputs found

    A Literature Review on Intelligent Services Applied to Distance Learning

    Distance learning has assumed a relevant role in the educational scenario. The use of Virtual Learning Environments contributes to obtaining a substantial amount of educational data. The analyzed data generate knowledge that institutions use to assist managers and professors in strategic planning and teaching. The discovery of students' behaviors enables a wide variety of intelligent services for assisting in the learning process. This article presents a literature review to identify the intelligent services applied in distance learning. The research covers the period from January 2010 to May 2021. The initial search found 1316 articles, among which 51 were selected for further study. Of the selected articles, 33% (17/51) focus on learning systems, 35% (18/51) propose recommendation systems, 26% (13/51) approach predictive systems or models, and 6% (3/51) use assessment tools. This review allowed for the observation that the principal services offered are recommendation systems and learning systems. In these services, the analysis of student profiles stands out to identify patterns of behavior, detect low performance, and identify probabilities of dropout from courses.

    Comparing time series with machine learning-based prediction approaches for violation management in cloud SLAs

    In cloud computing, service level agreements (SLAs) are legal agreements between a service provider and a consumer that contain a list of obligations and commitments which must be satisfied by both parties during the transaction. From a service provider's perspective, a violation of such a commitment leads to penalties in terms of money and reputation and thus has to be effectively managed. In the literature, this problem has been studied under the domain of cloud service management. One aspect required to manage cloud services after the formation of SLAs is to predict the future Quality of Service (QoS) of cloud parameters to ascertain whether they lead to violations. Various approaches in the literature perform this task using different prediction methods; however, none of them studies the accuracy of each. Such a study is important because the results of each prediction approach vary according to the pattern of the input data, and an incorrect choice of prediction algorithm could lead to service violations and penalties. In this paper, we test and report the accuracy of time series and machine learning-based prediction approaches. In each category, we test many different techniques and rank them according to their accuracy in predicting future QoS. Our analysis helps the cloud service provider to choose an appropriate prediction approach (whether time series or machine learning based) and, further, to utilize the best method depending on input data patterns to obtain an accurate prediction result and better manage their SLAs to avoid violation penalties.
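The ranking procedure this abstract describes — scoring several predictors on the same QoS series and ordering them by one-step-ahead accuracy — can be sketched as follows. The two predictors (a persistence baseline and simple exponential smoothing), the MAE metric, and the sample latency series are illustrative assumptions, not the paper's actual techniques.

```python
# Sketch: rank one-step-ahead predictors of a QoS metric by accuracy.

def exp_smoothing_forecasts(series, alpha=0.5):
    """One-step-ahead forecasts via simple exponential smoothing."""
    level = series[0]
    forecasts = []
    for x in series[1:]:
        forecasts.append(level)                  # forecast before seeing x
        level = alpha * x + (1 - alpha) * level  # then update the level
    return forecasts

def naive_forecasts(series):
    """Persistence baseline: next value = last observed value."""
    return series[:-1]

def mae(actual, predicted):
    """Mean absolute error between paired values."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(predicted)

def rank_predictors(series, predictors):
    """Score each predictor against the observed series; best (lowest MAE) first."""
    scores = {name: mae(series[1:], f(series)) for name, f in predictors.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])

# Illustrative response-time series (ms); real QoS logs would be used instead.
qos = [102, 99, 105, 110, 108, 115, 120, 118, 125, 130]
ranking = rank_predictors(qos, {
    "naive": naive_forecasts,
    "exp_smoothing": exp_smoothing_forecasts,
})
```

With a trending series like this one, the persistence baseline tends to lag less than heavy smoothing — which is exactly the kind of data-pattern dependence the paper argues makes the comparison necessary.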

    Decision support for build-to-order supply chain management through multiobjective optimization

    This is the post-print version of the final paper published in International Journal of Production Economics. The published article is available from the link below. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. Copyright @ 2010 Elsevier B.V. This paper aims to identify the gaps in decision-making support based on multiobjective optimization (MOO) for build-to-order supply chain management (BTO-SCM). To this end, it reviews the literature available on modelling build-to-order supply chains (BTO-SC) with a focus on adopting MOO techniques as a decision support tool. The literature has been classified based on the nature of the decisions in different parts of the supply chain, and the key decision areas across a typical BTO-SC are discussed in detail. Available software packages suitable for supporting decision making in BTO supply chains are also identified and their related solutions are outlined. The gap between the modelling and optimization techniques developed in the literature and the decision support needed in practice is highlighted. Future research directions to better exploit the decision support capabilities of MOO are proposed. These include: reformulation of the extant optimization models with a MOO perspective, development of decision support for interfaces not involving manufacturers, development of scenarios around service-based objectives, development of efficient solution tools, considering the interests of each supply chain party as a separate objective to account for fair treatment of their requirements, and applying the existing methodologies to real-life data sets. Brunel Research Initiative and Enterprise Fund (BRIEF).
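As a minimal illustration of the MOO decision support the review surveys, the sketch below filters candidate supply-chain configurations down to their Pareto front over two competing objectives; the configurations, costs, and lead times are invented for illustration, not drawn from the paper.

```python
# Sketch: Pareto filtering of candidate configurations, minimizing
# every objective. A point is dominated if some other point is at
# least as good in all objectives and not identical.

def pareto_front(points):
    """Return the non-dominated points (all objectives minimized)."""
    front = []
    for p in points:
        dominated = any(
            all(q[i] <= p[i] for i in range(len(p))) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# (cost, lead_time_days) for hypothetical build-to-order configurations
configs = [(100, 12), (120, 8), (90, 15), (130, 7), (110, 12)]
front = pareto_front(configs)
```

A decision maker then trades off along the front (e.g., paying more for shorter lead times) rather than weighing objectives into a single score up front.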

    A phased approach to distribution network optimization given incremental supply chain change

    Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division; in conjunction with the Leaders for Global Operations Program at MIT, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 59-60). This thesis addresses the question of how to optimize a distribution network when the supply chain has undergone an incremental change. A case study is presented for Company A, a major global biotechnology company that recently acquired a new manufacturing facility in Ireland. Company A already has international operations throughout Europe and the rest of the world through its network of 3rd-party logistics providers, wholesalers, and distributors, as well as its own Benelux-based international distribution center. It now seeks to optimize its current network by taking into consideration the possibility of distributing product directly out of Ireland and by potentially outsourcing some of the distribution currently sourced from its Benelux facility. The thesis uses a phased approach to optimizing the network in order to tackle the common enterprise challenges of 1) building consensus around the solution and 2) simultaneously learning about the problem while attempting to solve it in order to meet a compressed project schedule. Through a number of simplifications, the thesis reduces the problem scope to a level that both enables the use of this phased approach and provides for a less complex and less time-intensive analysis manageable within the given time frame. The unique characteristics of the biotechnology industry drive the analysis to closely study the direct effects of, and potential risks to, availability and lead time of the various distribution options while trading off distribution, packaging, inventory, and capital expenditure costs.
The recommendations resulting from the analysis described in this thesis are used to inform Company A's future distribution strategy regarding additional warehousing capacities, the continued use of the Benelux facility, and potential strategic partnerships with 3rd-party logistics service providers. by Patrick Riechel. S.M.; M.B.A.

    High temporal resolution refractivity retrieval from radar phase measurements

    Knowledge of the spatial and temporal variability of near-surface water vapor is of great importance for successfully modelling reliable radio communication systems and forecasting atmospheric phenomena such as convective initiation and boundary layer processes. However, most current methods of measuring atmospheric moisture variations hardly provide the temporal and spatial resolutions required for the detection of such atmospheric processes. Recently, considering the high correlation between refractivity variations and water vapor pressure variations at warm temperatures, and the good temporal and spatial resolution that weather radars provide, the measurement of refractivity with radar became of interest. It was first proposed to estimate refractivity variations from radar phase measurements of returns from ground-based stationary targets. For that, it was assumed that the backscattering from ground targets is stationary and that the vertical gradient of the refractivity could be neglected. Initial experiments showed good results over flat terrain when the radar and target heights are similar. However, the need to consider the non-zero vertical gradient of the refractivity over hilly terrain is clear. To date, the proposed methods rely on a previous estimation of the refractivity gradient in order to correct the measured phases before the refractivity estimation. In this paper, joint estimation of the refractivity variations at the radar height and the refractivity vertical gradient variations using scan-to-scan phase measurement variations is proposed. To reduce the noisiness of the estimates, a least squares method is used. Importantly, to apply this algorithm, it is not necessary to modify the radar scanning mode. For the purpose of this study, radar data obtained during the Refractivity Experiment for H2O Research and Collaborative Operational Technology Transfer (REFRACTT_2006), held in northeastern Colorado (USA), are used.
The refractivity estimates obtained show good performance of the proposed algorithm compared to the refractivity derived from two automatic weather stations located close to the radar, demonstrating the possibility of radar-based refractivity estimation in hilly terrain and a non-homogeneous atmosphere with high spatial resolution. Ministerio de Economía y Competitividad | Ref. TEC2014-55735-C3-3-R; Xunta de Galicia | Ref. GRC2015/01
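A joint least-squares fit of this kind can be sketched under an assumed linear phase model, dphi_k ≈ c0·r_k·(dN + dG·h_k), where dN is the refractivity change at radar height, dG the change in its vertical gradient, r_k the range, and h_k the mean beam height for target k. The constant c0 and the geometry below are illustrative, not the paper's exact formulation.

```python
# Sketch: solve the 2x2 normal equations for (dN, dG) given
# scan-to-scan phase changes at several fixed ground targets.

def estimate_dn_dg(ranges, heights, dphi, c0):
    """Least-squares fit of dphi_k = c0*r_k*(dN + dG*h_k)."""
    a = [c0 * r for r in ranges]                       # column multiplying dN
    b = [c0 * r * h for r, h in zip(ranges, heights)]  # column multiplying dG
    saa = sum(x * x for x in a)
    sab = sum(x * y for x, y in zip(a, b))
    sbb = sum(y * y for y in b)
    say = sum(x * y for x, y in zip(a, dphi))
    sby = sum(x * y for x, y in zip(b, dphi))
    det = saa * sbb - sab * sab   # nonzero when target heights differ
    dn = (sbb * say - sab * sby) / det
    dg = (saa * sby - sab * say) / det
    return dn, dg
```

Using many targets in one fit is what reduces the noisiness of the estimates, as the abstract notes; with noiseless synthetic phases the fit recovers the true (dN, dG) exactly.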

    CIRA annual report 2005-2006


    Massive MIMO transmission techniques

    The next generation of mobile communication systems must support astounding increases in data traffic, higher data rates, and lower latency, among other requirements. These requirements should be met while assuring energy efficiency for mobile devices and base stations. Several technologies are being proposed for 5G, but a consensus is beginning to emerge. Most likely, the core 5G technologies will include massive MIMO (Multiple Input Multiple Output) and beamforming schemes operating in the millimeter-wave spectrum. Once the millimeter-wave propagation difficulties are overcome, the full potential of massive MIMO structures can be tapped. The present work proposes a new transmission system with bi-dimensional antenna arrays working at millimeter-wave frequencies, where the multiple antenna configurations can be used to obtain very high gain and directive transmission in point-to-point communications. A combination of beamforming with a constellation shaping scheme is proposed that enables good user isolation and protection against eavesdropping, while simultaneously assuring power-efficient amplification of multi-level constellations.
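The high-gain, directive transmission that large arrays provide can be illustrated with a textbook array factor: conjugate phase weights steer the beam so element contributions add coherently at the target angle and largely cancel elsewhere. This is a generic uniform linear array sketch, not the paper's bi-dimensional constellation-shaping scheme.

```python
import cmath
import math

def array_factor(n, steer_deg, obs_deg):
    """Normalized array-factor magnitude of an n-element uniform linear
    array with half-wavelength spacing, steered to steer_deg and
    observed at obs_deg (angles from broadside)."""
    k_d = math.pi  # (2*pi/wavelength) * (wavelength/2) element spacing
    psi_s = k_d * math.sin(math.radians(steer_deg))  # steering phase/element
    psi_o = k_d * math.sin(math.radians(obs_deg))    # observed phase/element
    total = sum(cmath.exp(1j * m * (psi_o - psi_s)) for m in range(n))
    return abs(total) / n

peak = array_factor(16, 30.0, 30.0)  # coherent sum at the steered angle
side = array_factor(16, 30.0, 0.0)   # far off-beam response
```

The peak normalized response at the steered angle is 1 regardless of n, while the beam narrows as n grows — the mechanism behind both the high gain and the eavesdropping protection of highly directive millimeter-wave links.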

    Sensitivities of Explicit Hail Predictions and Convective Scale Ensemble Forecasting to Microphysics Parameterizations and Ensemble Data Assimilation Configurations

    The explicit prediction of deep, moist convection is challenging because small model and initial condition errors rapidly grow and degrade forecast skill. Microphysics schemes employed by convection-allowing models represent a substantial source of model error because microphysical processes are poorly understood and simplifying assumptions must be made to keep simulations and forecasts computationally practical. Although data assimilation systems decrease initial condition errors, analysis and forecast skill are sensitive to the experiment design. This dissertation evaluates data assimilation and ensemble forecast system performance at convection-allowing/convection-resolving resolutions when the forecast models employ different multi-moment microphysics parameterization schemes and the data assimilation configurations are varied. We address these related issues through detailed case studies that provide insights into optimizing the configuration of convection-allowing model forecasts. First, high-resolution hail size forecasts are made for a severe hail event on 19 May 2013 using the Advanced Regional Prediction System (ARPS). Forecasts using the National Severe Storms Laboratory (NSSL) variable density rimed ice double-moment microphysics scheme (referred to as NSSL) exhibit more skill than those using the Milbrandt and Yau double-moment (MY2) or triple-moment (MY3) schemes when verified against radar-derived hail size estimates. Although all three schemes predict severe surface hail coverage with moderate to high skill, MY2 and MY3 forecasts overpredict the maximum hail size. The NSSL scheme uses its two variable density rimed ice categories to generate large, dense hail through the wet growth of graupel. Both the MY2 and MY3 schemes predict hail to be smaller above the 0 °C isotherm because the category is primarily composed of small frozen raindrops; in the melting layer the hail quickly grows because the rimed ice accretes excessive water.
MY2 and MY3 forecasts predict the largest hail sizes to be smaller when the accreted water is eliminated beneath the 0 °C isotherm. To improve hailstorm forecast initial conditions, CAPS Ensemble Kalman filter (EnKF) analyses are generated for the 8 May 2017 Colorado severe hail event using either the MY2 or the NSSL scheme in the forecast model, and the results of the EnKF analyses are evaluated. With each microphysics scheme, two experiments are conducted in which reflectivity (Z) observations update either (1) only the hydrometeor mixing ratios or (2) all hydrometeor fields. Experiments that update only hydrometeor mixing ratios can create unreliable ensemble error covariances, which increases analysis error. Despite improving initial condition estimates, experiments that update all hydrometeor fields underestimate surface hail size, which suggests that additional constraint from observations is needed during data assimilation. Correlation patterns between observation prior estimates (e.g., Z) and model state variables are evaluated to determine the impact of hail growth assumptions in the MY and NSSL schemes on the forecast error covariances between microphysical and thermodynamic variables. For the MY2 scheme, Z is negatively correlated with updraft intensity because strong updrafts produce abundant, small hail aloft. The NSSL scheme predicts storm updrafts to produce fewer but larger hailstones aloft, which causes Z and updraft intensity to be positively correlated. Hail production processes also alter the background error covariances for in-cloud air temperature and hydrometeor species. This study documents the strong sensitivity of ensemble data assimilation results for hailstorms to the parameterization of microphysical processes, and the need to reduce microphysics parameterization uncertainties.
To improve data assimilation configurations for potential operational implementation, EnKF data assimilation experiments based on the operational GSI system employed by the Center for Analysis and Prediction of Storms (CAPS) realtime Spring Forecast Experiments are performed, followed by 6-hour forecasts for a mesoscale convective system (MCS) event on 28-29 May 2017. Experiments are run to evaluate the sensitivity of forecast skill to the configurations of the data assimilation system. Configurations examined include the ensemble initialization and covariance inflation, as well as radar observation data thinning, covariance localization radii, observation error settings, and data assimilation frequency. Spin-up ensemble forecast surface temperatures are most skillful when the initial ensemble mean is centered on the most recent NAM analysis, causing forecasts to predict a strong MCS. Experiments that assimilate radar observations every 5 minutes are better at placing high Z values near observed storms but exhibit a substantial initial decrease in forecast skill because of widespread spurious convection. Ensembles that assimilate more observations with less thinning of data, or that use a larger horizontal covariance localization radius for radar data, overpredict the coverage of high Z values due to enhanced spurious convection. Both parameters have modestly positive impacts on forecast skill during the first forecast hour that are quickly lost due to the growth of forecast error. Forecast skill is less sensitive to the ensemble spread inflation factors and observation errors tested in this study. These results provide guidance towards optimizing the GSI EnKF system configuration; for this study, the data assimilation configuration employed by the 2019 CAPS Spring Forecast Experiment produces the most skillful forecasts while remaining viable for realtime use.
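The covariance-driven update at the heart of these EnKF experiments can be sketched for a single observation: the sample covariance between a state variable and the observation prior forms the Kalman gain that spreads the innovation across the ensemble. This toy version omits observation perturbations, localization, and inflation, and the variable names are illustrative.

```python
# Sketch: single-observation ensemble Kalman filter update. The sign of
# the state/observation-prior covariance determines whether a high
# reflectivity observation raises or lowers the state estimate -- the
# mechanism behind the MY2 vs. NSSL correlation differences discussed
# above.

def enkf_update(state_ens, obs_prior_ens, obs, obs_err_var):
    """Update each ensemble member of one state variable given one
    observation, its prior ensemble estimates, and its error variance."""
    n = len(state_ens)
    xm = sum(state_ens) / n
    ym = sum(obs_prior_ens) / n
    cov_xy = sum((x - xm) * (y - ym)
                 for x, y in zip(state_ens, obs_prior_ens)) / (n - 1)
    var_y = sum((y - ym) ** 2 for y in obs_prior_ens) / (n - 1)
    gain = cov_xy / (var_y + obs_err_var)  # Kalman gain for this state variable
    return [x + gain * (obs - y) for x, y in zip(state_ens, obs_prior_ens)]
```

With a positive covariance (as NSSL predicts for Z and updraft intensity), an observation above the prior mean pulls the state upward; with MY2's negative covariance the same observation would push it downward, which is why the scheme choice changes the analyses.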