Limousine Service Management: Capacity Planning with Predictive Analytics and Optimization
The limousine service in luxury hotels is an integral component of the whole
customer journey in the hospitality industry. One of the largest hotels in
Singapore manages a fleet of both in-house and outsourced vehicles around the
clock, serving 9000 trips per month on average. The need for vehicles may scale
up rapidly, especially during special events and festive periods in the
country. The excess demand is met by having additional outsourced vehicles on
standby, incurring millions of dollars of additional expenses per year for the
hotel. Determining the required number of limousines by hour of the day is a
challenging service capacity planning problem. This paper introduces a recent
transformational journey to manage this problem at the hotel, delivering up to
S\$3.2 million in savings per year alongside an improved service level. The
approach builds on widely available open-source statistical and spreadsheet
optimization tools, along with robotic process automation, to optimize the
schedule of the fleet of limousines and drivers, and to support decision-making
by planners and controllers, driving sustained business value.
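The abstract does not disclose the hotel's actual model, but the hourly capacity-planning step it describes can be sketched with a simple newsvendor-style buffer: forecast mean demand per hour, add a safety margin for the desired service level, and round up to whole vehicles. All numbers and the `trips_per_vehicle` parameter below are invented for illustration.

```python
import math

def vehicles_needed(mean_trips, std_trips, trips_per_vehicle, service_z=1.65):
    """Size the fleet for one hour: forecast mean plus a safety margin
    scaled by the desired service level (z = 1.65 ~ 95% coverage)."""
    demand = mean_trips + service_z * std_trips
    return math.ceil(demand / trips_per_vehicle)

# Hypothetical hourly forecast: hour -> (mean trips, std dev of trips)
hourly_forecast = {8: (35, 6), 9: (48, 9), 10: (30, 5)}

# Vehicles to schedule per hour, assuming 4 trips per vehicle per hour
schedule = {h: vehicles_needed(m, s, trips_per_vehicle=4)
            for h, (m, s) in hourly_forecast.items()}
print(schedule)  # {8: 12, 9: 16, 10: 10}
```

In practice the forecast would come from a statistical model and the rounding step from a spreadsheet optimizer, as the abstract suggests, but the buffer-then-round structure is the same.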
The Family of MapReduce and Large Scale Data Processing Systems
In the last two decades, the continuous increase of computational power has
produced an overwhelming flow of data which has called for a paradigm shift in
the computing architecture and large scale data processing mechanisms.
MapReduce is a simple and powerful programming model that enables easy
development of scalable parallel applications to process vast amounts of data
on large clusters of commodity machines. It isolates the application from the
details of running a distributed program such as issues on data distribution,
scheduling, and fault tolerance. However, the original implementation of the
MapReduce framework had some limitations that many follow-up research efforts
have tackled since its introduction. This article provides a comprehensive
survey of a family of approaches and mechanisms for large-scale data
processing that build on the original idea of the MapReduce framework and are
currently gaining considerable
momentum in both research and industrial communities. We also cover a set of
introduced systems that have been implemented to provide declarative
programming interfaces on top of the MapReduce framework. In addition, we
review several large scale data processing systems that resemble some of the
ideas of the MapReduce framework for different purposes and application
scenarios. Finally, we discuss some of the future research directions for
implementing the next generation of MapReduce-like solutions.
Comment: arXiv admin note: text overlap with arXiv:1105.4252 by other author
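The programming model the survey builds on can be illustrated with the canonical word-count example: a user supplies only a map function and a reduce function, while the framework handles grouping intermediate pairs by key. The single-process sketch below shows the data flow only; real frameworks distribute these phases across a cluster.

```python
from collections import defaultdict
from itertools import chain

# Map phase: emit (word, 1) for every word in a document.
def map_fn(doc):
    return [(w, 1) for w in doc.split()]

# Shuffle phase (done by the framework): group values by key.
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: combine all values for one key into a final result.
def reduce_fn(word, counts):
    return word, sum(counts)

docs = ["map reduce", "map map"]
mapped = chain.from_iterable(map_fn(d) for d in docs)
result = dict(reduce_fn(k, v) for k, v in shuffle(mapped).items())
print(result)  # {'map': 3, 'reduce': 1}
```

The isolation the abstract mentions is visible here: `map_fn` and `reduce_fn` contain all the application logic, and everything else belongs to the runtime.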
Improving the Representation and Conversion of Mathematical Formulae by Considering their Textual Context
Mathematical formulae represent complex semantic information in a concise
form. Especially in Science, Technology, Engineering, and Mathematics,
mathematical formulae are crucial to communicate information, e.g., in
scientific papers, and to perform computations using computer algebra systems.
Enabling computers to access the information encoded in mathematical formulae
requires machine-readable formats that can represent both the presentation and
content, i.e., the semantics, of formulae. Exchanging such information between
systems additionally requires conversion methods for mathematical
representation formats. We analyze how the semantic enrichment of formulae
improves the format conversion process and show that considering the textual
context of formulae reduces the error rate of such conversions. Our main
contributions are: (1) providing an openly available benchmark dataset for the
mathematical format conversion task consisting of a newly created test
collection, an extensive, manually curated gold standard and task-specific
evaluation metrics; (2) performing a quantitative evaluation of
state-of-the-art tools for mathematical format conversions; (3) presenting a
new approach that considers the textual context of formulae to reduce the error
rate for mathematical format conversions. Our benchmark dataset facilitates
future research on mathematical format conversions as well as research on many
problems in mathematical information retrieval. Because we annotated and linked
all components of formulae, e.g., identifiers, operators and other entities, to
Wikidata entries, the gold standard can, for instance, be used to train methods
for formula concept discovery and recognition. Such methods can then be applied
to improve mathematical information retrieval systems, e.g., for semantic
formula search, recommendation of mathematical content, or detection of
mathematical plagiarism.
Comment: 10 pages, 4 figures
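The paper's core idea, using the textual context of a formula to resolve the meaning of its identifiers, can be sketched as a toy disambiguator. The concept inventory and cue words below are invented for illustration and are far simpler than the Wikidata-linked gold standard the authors describe.

```python
# Toy concept inventory: identifier -> {candidate concept: context cue words}.
# Invented for illustration; the paper links real concepts to Wikidata.
CONCEPTS = {
    "E": {
        "energy": ["mass", "relativity", "joule"],
        "expected value": ["probability", "random", "distribution"],
    },
}

def disambiguate(identifier, context):
    """Pick the candidate concept whose cue words overlap the context most."""
    words = set(context.lower().split())
    best, best_score = None, 0
    for concept, cues in CONCEPTS.get(identifier, {}).items():
        score = sum(1 for cue in cues if cue in words)
        if score > best_score:
            best, best_score = concept, score
    return best

print(disambiguate("E", "the expected value of a random variable"))
# -> 'expected value'
```

Resolving an identifier this way is what lets a converter choose the correct content-markup element (e.g., energy vs. expectation) when translating a presentation-only formula.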
Collaborative-demographic hybrid for financial product recommendation
Internship Report presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics
Due to the increased availability of mature data mining and analysis technologies supporting CRM
processes, several financial institutions are striving to leverage customer data and integrate insights
regarding customer behaviour, needs, and preferences into their marketing approach. As decision
support systems assisting marketing and commercial efforts, Recommender Systems applied to the
financial domain have been gaining increased attention. This thesis studies a Collaborative-
Demographic Hybrid Recommendation System, applied to the financial services sector, based on real
data provided by a Portuguese private commercial bank. This work establishes a framework to support
account managers’ advice on which financial product is most suitable for each of the bank’s corporate
clients. The recommendation problem is further developed by conducting a performance comparison
for both multi-output regression and multiclass classification prediction approaches. Experimental
results indicate that multiclass architectures are better suited for the prediction task, outperforming
alternative multi-output regression models on the evaluation metrics considered. Moreover, a multiclass
Feed-Forward Neural Network, combined with Recursive Feature Elimination, is identified as the
top-performing algorithm, yielding a 10-fold cross-validated F1 measure of 83.16%, with
corresponding Precision and Recall values of 84.34% and 85.29%, respectively. Overall, this study
provides important contributions for positioning the bank’s commercial efforts around customers’
future requirements. By allowing for a better understanding of customers’ needs and preferences, the
proposed Recommender allows for more personalized and targeted marketing contacts, leading to
higher conversion rates, corporate profitability, and customer satisfaction and loyalty.
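The bank's data is private, but the reported pipeline, Recursive Feature Elimination feeding a multiclass feed-forward network, can be sketched on synthetic data. Note one practical detail the abstract leaves implicit: RFE needs an estimator that exposes feature weights, so a linear model typically ranks the features before the neural network is fit. All dataset shapes and hyperparameters below are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the bank data: 6 financial-product classes.
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           n_classes=6, random_state=0)

# RFE ranks features with a linear model; the MLP then trains on the
# selected subset, mirroring the RFE + feed-forward-network combination.
pipe = make_pipeline(
    RFE(LogisticRegression(max_iter=1000), n_features_to_select=10),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0),
)

# Macro-averaged F1 via cross-validation, as in the thesis's evaluation.
scores = cross_val_score(pipe, X, y, cv=5, scoring="f1_macro")
print(round(scores.mean(), 3))
```

The synthetic score is meaningless as a benchmark; the point is the pipeline shape, not the number.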
Towards New Scenarios for Analysis of Emissions, Climate Change, Impacts, and Response Strategies
This report summarizes the findings and recommendations from the IPCC Expert Meeting on New Scenarios in Noordwijkerhout, The Netherlands, 19-21 September 2007. This report is the culmination of the combined efforts of the New Scenarios Steering Committee, an author team composed primarily of members of the research community, and numerous other meeting participants and external reviewers who provided extensive comments during the expert review process.
Overcoming Challenges in Predictive Modeling of Laser-Plasma Interaction Scenarios. The Sinuous Route from Advanced Machine Learning to Deep Learning
The interaction of ultrashort and intense laser pulses with solid targets and dense plasmas is a rapidly developing area of physics, mostly due to significant advancements in laser technology. There is, thus, a growing interest in diagnosing as accurately as possible the numerous phenomena related to the absorption and reflection of laser radiation. At the same time, envisaged experiments demand increasingly accurate simulation software. As laser-plasma interaction modeling transitions from computationally intensive to data-intensive problems, the traditional codes employed so far are starting to show their limitations. It is in this context that predictive modeling of laser-plasma interaction experiments is bound to reshape the definition of simulation software. This chapter focuses on an entire class of predictive systems incorporating big data, advanced machine learning algorithms, and deep learning, with improved accuracy and speed. Making use of terabytes of already available information (literature as well as simulation and experimental data), these systems enable the discovery and understanding of various physical phenomena occurring during interaction, allowing researchers to set up controlled experiments at optimal parameters. A comparative discussion of laser-plasma interaction predictive systems in terms of challenges, advantages, bottlenecks, performance, and suitability is ultimately provided.
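The predictive systems the chapter surveys typically act as surrogate models: regressors trained on expensive simulation runs that can then be evaluated cheaply across parameter space. The sketch below uses an entirely invented analytic stand-in for simulation data (laser intensity and pulse duration mapped to an absorption fraction); it illustrates the surrogate workflow, not any real physics.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Invented stand-in for a batch of simulation runs:
# inputs are (peak intensity [W/cm^2], pulse duration [fs]),
# output is a synthetic "absorption fraction".
X = rng.uniform([1e18, 10], [1e21, 500], size=(200, 2))
y = 0.3 + 0.4 * np.tanh(np.log10(X[:, 0]) - 19) + 0.0002 * X[:, 1]

# Train the surrogate on the expensive runs; afterwards it can be
# queried at new parameter points in microseconds.
surrogate = GradientBoostingRegressor(random_state=0).fit(X, y)
pred = surrogate.predict([[5e19, 100]])
print(float(pred[0]))
```

A trained surrogate like this is what enables the parameter-optimization loop the chapter describes, since scanning thousands of candidate experimental settings through a full simulation code would be infeasible.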