5,383 research outputs found
A Look Back To Look Forward: New Patterns In The Supply/Demand Equation In The Lodging Industry
In his dialogue entitled "A Look Back to Look Forward: New Patterns In The Supply/Demand Equation In The Lodging Industry," Albert J. Gomes, Senior Principal, Pannell Kerr Forster, Washington, D.C., intends for the reader to know the following: "Factors which influence the lodging industry in the United States are changing that industry as far as where hotels are being located, what clientele is being served, and what services are being provided at different facilities. The author charts these changes and makes predictions for the future."
Gomes initially alludes to the evolution of transportation - the human, animal, mechanical progression - and how those changes, in the last 100 years or so, have had a significant impact on the hotel industry.
"A look back to look forward treats the past as prologue. American hoteliers are in for some startling changes in their business," Gomes says. "The man who said that the three most important determinants for the success of a hotel were 'location, location, location' did a lot of good only in the short run."
Gomes wants to make you aware of the existence of what he calls "locational obsolescence." "Locational obsolescence is a fact of life, and at least in the United States bears a direct correlation to evolutionary changes in transportation technology," he says. "…the primary business of the hospitality industry is to serve travelers or people who are being transported," Gomes expands the point.
Tied to the transportation element, the author also points out an interesting distinction between hotels and motels. In addressing "…what clientele is being served, and what services are being provided at different facilities," Gomes suggests that the transportation factor influences these constituents as well.
Also coupled with this discussion are oil prices and shifts in transportation habits, with reference to airline travel as an ever-increasing mode of travel, capturing much of the inter-city travel market. Gomes refers to airline deregulation as an impetus. The point is that the market is fluid rather than static, and [successful] hospitality properties need to be cognizant of market dynamics and able to adjust to the variables in their marketplace. Gomes provides many facts and figures to bolster his assertions.
Interestingly and perceptively, at the time of this writing, Gomes alludes to America's deteriorating road and bridge network. As of right now, in 2009, this is a major issue.
Gomes rounds out this study by comparing European hospitality trends to those in the U.S.
Intelligent Automation in Supply Chain Optimization
Intelligent automation (IA) is transforming supply chain management by integrating advanced technologies such as artificial intelligence (AI), machine learning (ML), robotics, and the Internet of Things (IoT). This paper explores how IA optimizes supply chain processes, enhances operational efficiency, and drives strategic decision-making. By analyzing the impact of intelligent automation on various supply chain functions, including inventory management, logistics, and demand forecasting, this research highlights the critical role of IA in achieving agility and responsiveness in increasingly competitive markets. Additionally, the paper discusses the challenges organizations face in implementing IA solutions and provides insights into best practices for successful integration. The findings underscore the importance of leveraging intelligent automation as a key driver of supply chain optimization in today's digital landscape.
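The abstract stays at survey level; to make the inventory-management function concrete, here is a minimal sketch of the kind of rule such systems automate: a classic reorder point with safety stock. The function names and the numbers are our own illustration, not drawn from the paper.

```python
import math

def reorder_point(mean_daily_demand, demand_std, lead_time_days, z=1.65):
    """Classic reorder-point rule: average demand over the lead time
    plus safety stock for roughly a 95% service level (z = 1.65)."""
    safety_stock = z * demand_std * math.sqrt(lead_time_days)
    return mean_daily_demand * lead_time_days + safety_stock

def should_reorder(on_hand, on_order, rop):
    # Automated trigger: reorder when the inventory position
    # (stock on hand plus stock already ordered) falls below the ROP.
    return (on_hand + on_order) < rop

rop = reorder_point(mean_daily_demand=40, demand_std=12, lead_time_days=5)
print(f"Reorder point: {rop:.1f} units")
print("Place order now?", should_reorder(on_hand=150, on_order=30, rop=rop))
```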
Look At Me, No Replay! SurpriseNet: Anomaly Detection Inspired Class Incremental Learning
Continual learning aims to create artificial neural networks capable of
accumulating knowledge and skills through incremental training on a sequence of
tasks. The main challenge of continual learning is catastrophic interference,
wherein new knowledge overrides or interferes with past knowledge, leading to
forgetting. An associated issue is the problem of learning "cross-task
knowledge," where models fail to acquire and retain knowledge that helps
differentiate classes across task boundaries. A common solution to both
problems is "replay," where a limited buffer of past instances is utilized to
learn cross-task knowledge and mitigate catastrophic interference. However, a
notable drawback of these methods is their tendency to overfit the limited
replay buffer. In contrast, our proposed solution, SurpriseNet, addresses
catastrophic interference by employing a parameter isolation method and
learning cross-task knowledge using an auto-encoder inspired by anomaly
detection. SurpriseNet is applicable to both structured and unstructured data,
as it does not rely on image-specific inductive biases. We have conducted
empirical experiments demonstrating the strengths of SurpriseNet on various
traditional vision continual-learning benchmarks, as well as on structured data
datasets. Source code made available at https://doi.org/10.5281/zenodo.8247906
and https://github.com/tachyonicClock/SurpriseNet-CIKM-2
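As a rough illustration of the anomaly-detection-style task inference the abstract describes, the following PyTorch sketch keeps one small auto-encoder per task and routes a sample to the task whose auto-encoder reconstructs it best. The architecture, sizes, and routing rule are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class TaskAutoEncoder(nn.Module):
    """One small auto-encoder per task; reconstruction error acts as a
    'surprise' score, as in anomaly detection."""
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

def infer_task(x, autoencoders):
    # Route the sample to the task whose auto-encoder is least
    # "surprised", i.e. has the lowest reconstruction error.
    errors = [((ae(x) - x) ** 2).mean().item() for ae in autoencoders]
    return min(range(len(errors)), key=errors.__getitem__)

# Toy usage: two frozen task-specific auto-encoders, one test sample.
torch.manual_seed(0)
aes = [TaskAutoEncoder(dim=8).eval() for _ in range(2)]
with torch.no_grad():
    print("Predicted task:", infer_task(torch.randn(1, 8), aes))
```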
A hybrid method for quantum dynamics simulation
We propose a hybrid approach to simulate quantum many body dynamics by
combining Trotter based quantum algorithm with classical dynamic mode
decomposition. The interest often lies in estimating observables rather than
explicitly obtaining the wave function's form. Our method predicts observables
of a quantum state in the long time by using data from a set of short time
measurements from a quantum computer. We derive an upper bound for the global
error of our method for a fixed set of measurements. We apply
our method to quench dynamics in the Hubbard model and nearest-neighbor spin
systems and show that the observable properties can be predicted up to a
reasonable error by controlling the number of data points obtained from the
quantum measurements. Comment: 9 pages, 4 figures.
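The classical half of the pipeline, dynamic mode decomposition (DMD), admits a compact sketch: fit a low-rank linear operator to short-time observable snapshots, then propagate its eigendecomposition to later times. The NumPy sketch below runs on synthetic snapshot data; the observables, rank, and horizon are illustrative, not taken from the paper.

```python
import numpy as np

def dmd_extrapolate(snapshots, n_future, rank=2):
    """Fit exact DMD to a (dim x T) matrix of observable snapshots and
    extrapolate n_future steps beyond the last measurement."""
    X, Y = snapshots[:, :-1], snapshots[:, 1:]
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s      # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T / s @ W                 # exact DMD modes
    amps = np.linalg.lstsq(modes, snapshots[:, 0].astype(complex),
                           rcond=None)[0]           # modal amplitudes at t=0
    T = snapshots.shape[1]
    return np.array([(modes * eigvals**t) @ amps
                     for t in range(T, T + n_future)]).T.real

# Toy "short-time measurements": two decaying, oscillating observables.
t = np.arange(20) * 0.1
data = np.vstack([np.cos(2*t) * np.exp(-0.1*t),
                  np.sin(2*t) * np.exp(-0.1*t)])
print(dmd_extrapolate(data, n_future=5))  # predicted long-time values
```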
Evaluation of lidocaine and mepivacaine for inferior third molar surgery
Objective: The aim of this study was to compare 2% lidocaine and 2% mepivacaine with 1:100,000 epinephrine for postoperative pain control. Study design: A group of 35 patients of both genders, aged 13 to 27 years, each with two inferior third molars in similar positions to be extracted, was recruited. The cartridges were distributed to the patients according to a randomised pattern, with lidocaine in the control group and mepivacaine in the experimental group. Results: Results showed no significant association between the anesthetics and postoperative pain, pulp sensibility after one hour, gender, tooth position, or duration of the surgical procedure. Conclusions: Lidocaine and mepivacaine provide a similar duration of anesthesia, both are adequate for surgical procedures that last one hour, and there was no difference between the two anesthetics in relation to the severity of postoperative pain.
A Survey on Semi-Supervised Learning for Delayed Partially Labelled Data Streams
Unlabelled data appear in many domains and are particularly relevant to
streaming applications, where even though data is abundant, labelled data is
rare. To address the learning problems associated with such data, one can
ignore the unlabelled data and focus only on the labelled data (supervised
learning); use the labelled data and attempt to leverage the unlabelled data
(semi-supervised learning); or assume some labels will be available on request
(active learning). The first approach is the simplest, yet the amount of
labelled data available will limit the predictive performance. The second
relies on finding and exploiting the underlying characteristics of the data
distribution. The third depends on an external agent to provide the required
labels in a timely fashion. This survey pays special attention to methods that
leverage unlabelled data in a semi-supervised setting. We also discuss the
delayed labelling issue, which impacts both fully supervised and
semi-supervised methods. We propose a unified problem setting, discuss the
learning guarantees and existing methods, and explain the differences between
related problem settings. Finally, we review the current benchmarking practices
and propose adaptations to enhance them.
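Of the three options listed above, the semi-supervised one is this survey's focus and the easiest to show in code. Below is a minimal self-training sketch using scikit-learn, where unlabelled points are marked with -1; the dataset, label-hiding rate, and confidence threshold are illustrative choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=500, random_state=42)
y_partial = y.copy()
rng = np.random.default_rng(42)
y_partial[rng.random(len(y)) < 0.9] = -1   # hide 90% of the labels

# Self-training: fit on the labelled points, then repeatedly pseudo-label
# high-confidence unlabelled points (predicted prob. >= threshold) and refit.
model = SelfTrainingClassifier(LogisticRegression(), threshold=0.8)
model.fit(X, y_partial)
print("Accuracy on all data:", accuracy_score(y, model.predict(X)))
```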
Preferential survival in models of complex ad hoc networks
There has been a rich interplay in recent years between (i) empirical
investigations of real world dynamic networks, (ii) analytical modeling of the
microscopic mechanisms that drive the emergence of such networks, and (iii)
harnessing of these mechanisms to either manipulate existing networks, or
engineer new networks for specific tasks. We continue in this vein, and study
the deletion phenomenon in the web by following two different sets of web-sites
(each comprising more than 150,000 pages) over a one-year period. Empirical
data show that there is a significant deletion component in the underlying web
networks, but the deletion process is not uniform. This motivates us to
introduce a new mechanism of preferential survival (PS), where nodes are
removed according to a degree-dependent deletion kernel. We use the mean-field
rate equation approach to study a general dynamic model driven by Preferential
Attachment (PA), Double PA (DPA), and a tunable PS, where c nodes (c<1) are
deleted per node added to the network, and verify our predictions via
large-scale simulations. One of our results shows that, unlike in the case of
uniform deletion, the PS kernel when coupled with the standard PA mechanism,
can lead to heavy-tailed power law networks even in the presence of extreme
turnover in the network. Moreover, a weak DPA mechanism, coupled with PS, can
help make the network even more heavy-tailed, especially in the limit when
deletion and insertion rates are almost equal, and the overall network growth
is minimal. The dynamics reported in this work can be used to design and
engineer stable ad hoc networks and explain the stability of the power law
exponents observed in real-world networks. Comment: 9 pages, 6 figures.
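As a toy illustration of these dynamics, the following Python/networkx sketch grows a network by preferential attachment and applies a preferential-survival deletion step, removing on average c nodes per node added. The particular deletion kernel used here (weight 1/(degree+1), so low-degree nodes die more often) is an illustrative choice, not the paper's exact kernel.

```python
import random
import networkx as nx

def grow_with_ps(steps=2000, m=2, c=0.5, seed=1):
    """Preferential attachment (PA) growth plus a preferential-survival
    (PS) deletion step: c nodes deleted, on average, per node added."""
    rng = random.Random(seed)
    G = nx.complete_graph(m + 1)
    next_id = m + 1
    for _ in range(steps):
        # PA: the new node attaches to m distinct existing nodes,
        # chosen with probability proportional to degree.
        nodes, degs = zip(*G.degree())
        targets = set()
        while len(targets) < m:
            targets.add(rng.choices(nodes, weights=degs, k=1)[0])
        G.add_edges_from((next_id, t) for t in targets)
        next_id += 1
        # PS: with probability c, delete a node; survival improves with
        # degree, so the deletion weight decays as 1/(degree + 1).
        if rng.random() < c and G.number_of_nodes() > m + 1:
            nodes, degs = zip(*G.degree())
            victim = rng.choices(nodes,
                                 weights=[1 / (d + 1) for d in degs], k=1)[0]
            G.remove_node(victim)
    return G

G = grow_with_ps()
print(G.number_of_nodes(), "nodes; max degree",
      max(d for _, d in G.degree()))
```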
Lorentz-CPT violation, radiative corrections and finite temperature
In this work we investigate the radiatively induced Chern-Simons-like terms
in four-dimensions at zero and finite temperature. We use the approach of
rationalizing the fermion propagator up to the leading order in the
CPT-violating coupling. In this approach, we have shown that although
the coefficient of the Chern-Simons term can be found unambiguously in different
regularization schemes at zero or finite temperature, it remains undetermined.
We observe a correspondence among results obtained at finite and zero
temperature. Comment: To appear in JHEP, 10 pages, 1 eps figure, minor changes and
references added.
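For orientation, the object under discussion takes the following standard form in the Lorentz-CPT-violation literature (quoted from the general literature, with conventions that vary by author, not transcribed from this paper):

```latex
% CPT- and Lorentz-violating fermion sector with a constant axial vector b_mu:
\mathcal{L}_\psi = \bar\psi\left(i\gamma^\mu\partial_\mu - m
    - b_\mu\gamma^\mu\gamma_5 - e\gamma^\mu A_\mu\right)\psi
% Radiative corrections induce the four-dimensional Chern-Simons-like term
\mathcal{L}_{\mathrm{CS}}
  = \tfrac{1}{2}\,k_\mu\,\epsilon^{\mu\nu\lambda\rho}A_\nu\partial_\lambda A_\rho,
  \qquad k_\mu = C\,b_\mu,
% where the constant C is the coefficient whose regularization
% (in)dependence is analyzed at zero and finite temperature.
```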
Adaptive random forests for evolving data stream classification
Random forests is currently one of the most used machine learning algorithms in the non-streaming (batch) setting. This preference is attributable to its high learning performance and low demands with respect to input preparation and hyper-parameter tuning. However, in the challenging context of evolving data streams, there is no random forests algorithm that can be considered state-of-the-art in comparison to bagging and boosting based algorithms. In this work, we present the adaptive random forest (ARF) algorithm for classification of evolving data streams. In contrast to previous attempts to replicate random forests for data stream learning, ARF includes an effective resampling method and adaptive operators that can cope with different types of concept drift without complex optimizations for different data sets. We present experiments with a parallel implementation of ARF which shows no degradation in classification performance in comparison to a serial implementation, since trees and adaptive operators are independent of one another. Finally, we compare ARF with state-of-the-art algorithms in a traditional test-then-train evaluation and a novel delayed labelling evaluation, and show that ARF is accurate and uses a feasible amount of resources.
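The test-then-train (prequential) evaluation mentioned above is simple to sketch: each arriving instance is used first for testing, then for training. The loop below uses an incremental scikit-learn model as a stand-in for ARF, which itself ships in stream-learning libraries such as MOA and river; the synthetic stream and model choice are illustrative.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic binary stream

model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])
correct = 0
for i, (x, label) in enumerate(zip(X, y)):
    x = x.reshape(1, -1)
    if i > 0:                   # test first on the unseen instance...
        correct += int(model.predict(x)[0] == label)
    model.partial_fit(x, [label], classes=classes)  # ...then train on it
print("Prequential accuracy:", correct / (len(X) - 1))
```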