20,118 research outputs found
Actionable knowledge discovery: methodologies and frameworks
University of Technology, Sydney. Faculty of Engineering and Information Technology. Most data mining algorithms and tools stop at the mining and delivery of patterns that satisfy expected technical interestingness. Many patterns are mined, but business people either are not interested in them or do not know what follow-up actions to take to support their business decisions. This issue has seriously limited the widespread adoption of advanced data mining techniques for improving enterprise operational quality and productivity.
In this thesis, a formal and systematic view of actionable knowledge discovery (AKD for short) is proposed from the system and microeconomy perspectives. AKD is a closed-loop, optimization-based problem-solving process that runs from problem definition and framework/model design to actionable pattern discovery, and ultimately delivers operationalizable business rules that can be seamlessly associated or integrated with business processes and systems. To support AKD, corresponding methodologies, frameworks and tools are proposed, with real-world case studies, to address critical challenges facing traditional KDD and to cater for the crucially important factors surrounding real-life AKD.
First, a comprehensive survey and retrospective of existing data mining methodologies, and of the issues and challenges in actionable knowledge discovery, is presented.
Second, a practical data mining methodology, domain-driven data mining, is introduced.
Third, several frameworks are proposed to support domain-driven actionable knowledge discovery.
Fourth, case studies of domain-driven actionable pattern mining in stock markets and social security data demonstrate the usefulness and potential of the proposed approach.
In summary, this thesis explores in detail how domain-driven actionable knowledge discovery can be effectively and efficiently applied to discover and deliver knowledge that satisfies both technical and business concerns, and to support smart decision-making in the real world. The issues and techniques addressed in this thesis have the potential to advance research on critical KDD challenges, and to contribute to the paradigm shift from data-centered, technical-significance-oriented hidden pattern mining to domain-driven, balanced actionable knowledge discovery. The proposed methodologies and frameworks are flexible, general and effective, and can be extended and applied to mining real-life complex data for actionable knowledge.
Predictive User Modeling with Actionable Attributes
Different machine learning techniques have been proposed and used for modeling individual and group user needs, interests and preferences. In traditional predictive modeling, instances are described by observable variables, called attributes, and the goal is to learn a model for predicting the target variable for unseen instances. For example, for marketing purposes a company may profile a new user based on her observed web browsing behavior, referral keywords or other relevant information. In many real-world applications the values of some attributes are not only observable but can be actively set by a decision maker. Furthermore, in some such applications the decision maker is interested not only in generating accurate predictions but in maximizing the probability of a desired outcome. For example, a direct marketing manager can choose which type of special offer to send to a client (an actionable attribute), hoping that the right choice will result in a positive response with higher probability. We study how to learn to choose the value of an actionable attribute in order to maximize the probability of a desired outcome in predictive modeling. We emphasize that not all instances are equally sensitive to changes in actions; an accurate choice of action is critical for instances on the borderline (e.g. users who do not hold a strong opinion one way or the other). We formulate three supervised learning approaches for learning to select the value of an actionable attribute at the instance level. We also introduce a focused training procedure that puts more emphasis on the situations where varying the action is most likely to take effect. A proof-of-concept experimental validation on two real-world case studies, in the web analytics and e-learning domains, highlights the potential of the proposed approaches.
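The instance-level action selection described above can be sketched as follows. This is a minimal illustration, not the paper's method: the customer segments, offer names, and the frequency-count "model" are hypothetical stand-ins for whatever probabilistic classifier the proposed approaches would actually train.

```python
# Sketch: choosing the value of an actionable attribute to maximize the
# estimated probability of a desired outcome. Training records are
# (segment, action, outcome) triples; the "model" is a simple
# frequency estimate of P(positive | segment, action).

def learn_action_model(records):
    """Estimate P(positive | segment, action) from observed triples."""
    counts = {}
    for segment, action, outcome in records:
        pos, total = counts.get((segment, action), (0, 0))
        counts[(segment, action)] = (pos + outcome, total + 1)
    return {key: pos / total for key, (pos, total) in counts.items()}

def choose_action(model, segment, candidate_actions):
    """Pick the action with the highest estimated success probability."""
    return max(candidate_actions, key=lambda a: model.get((segment, a), 0.0))

# Toy training data: (customer segment, offer type, responded?)
history = [
    ("young", "discount", 1), ("young", "discount", 1), ("young", "coupon", 0),
    ("senior", "discount", 0), ("senior", "coupon", 1), ("senior", "coupon", 1),
]
model = learn_action_model(history)
print(choose_action(model, "young", ["discount", "coupon"]))   # discount
print(choose_action(model, "senior", ["discount", "coupon"]))  # coupon
```

In practice the conditional probabilities would come from a learned classifier evaluated once per candidate action value, but the decision rule, an argmax over actions, is the same.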
Actionable Supply Chain Management Insights for 2016 and Beyond
The summit World Class Supply Chain 2016: Critical to Prosperity contributed to addressing a need that the Supply Chain Management (SCM) field's current discourse has deemed critical: the need for more academia-industry collaboration to develop the field's body of actionable knowledge. Held on May 4th, 2016 in Milton, Ontario, the summit addressed that need in a way that proved both effective and distinctive in the Canadian SCM environment. The summit, convened in partnership between Wilfrid Laurier University's Lazaridis School of Business & Economics and CN Rail, focused on building actionable SCM knowledge around three core questions: What are the most significant SCM issues to be confronted now and beyond 2016? What SCM practices are imperative now and beyond 2016? What are optimal ways of ensuring that (a) issues of interest to SCM practitioners inform the scholarly activities of research and teaching and (b) the knowledge generated from those scholarly activities reciprocally guides SCM practice?
These are important questions for supply chain professionals in their efforts to make sense of today's business environment, which is appropriately viewed as volatile, uncertain, complex, and ambiguous. The deliberations on these questions comprised two keynote presentations and three panel discussions, all designed to leverage the collective wisdom that comes from genuine peer-to-peer dialogue between SCM practitioners and SCM scholars.
Specifically, the structure aimed for a balanced blend of industry and academic input and for coverage of the SCM issues of greatest interest to attendees (as determined through a pre-summit survey of attendees). The structure produced impressively wide-ranging deliberations on the aforementioned questions. The essence of the summit's findings can be distilled into three messages: (1) given today's globally significant trends, such as changes in population demographics, four highly impactful levers that SCM executives must expertly handle to attain excellence are collaboration, information, technology, and talent; (2) government policy, especially for infrastructure, is a significant determinant of SCM excellence; (3) there is tremendous potential for mutually beneficial industry-academia knowledge co-creation and sharing aimed at research and student training.
This white paper reports on those findings as well as on the summit's success in realizing its vision of fostering mutually beneficial industry-academia dialogue. The paper also documents the matters that emerged as inadequately understood and that should therefore be targeted in the ongoing quest for a deeper understanding of actionable SCM insights. The deliberations throughout the day on May 4th, 2016 and the encouraging results from the pre-summit and post-summit surveys have provided much inspiration to undertake that quest enthusiastically. The undertaking will proceed through initiatives that include future research projects as well as next year's summit, World Class Supply Chain 2017.
Semantic Gateway as a Service architecture for IoT Interoperability
The Internet of Things (IoT) is set to become a substantial component of the future Internet. The IoT connects sensors and devices that record physical observations to applications and services of the Internet. As a successor to technologies such as RFID and Wireless Sensor Networks (WSN), the IoT has stumbled into vertical silos of proprietary systems, providing little or no interoperability with similar systems. As the IoT represents the future state of the Internet, an intelligent and scalable architecture is required to provide connectivity between these silos, enabling discovery of physical sensors and interpretation of messages between things. This paper proposes a gateway- and Semantic Web-enabled IoT architecture to provide interoperability between systems using established communication and data standards. The Semantic Gateway as Service (SGS) allows translation between messaging protocols such as XMPP, CoAP and MQTT via a multi-protocol proxy architecture. The use of broadly accepted specifications, such as the W3C Semantic Sensor Network (SSN) ontology, for semantic annotation of sensor data provides semantic interoperability between messages and supports semantic reasoning to obtain higher-level actionable knowledge from low-level sensor data.
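The core of the multi-protocol proxy idea can be sketched as below: messages arriving over different IoT protocols are normalized into one internal representation before semantic annotation and re-delivery. The field names and the `to_internal` mapping here are illustrative assumptions, not the actual SGS API.

```python
# Sketch of protocol normalization in an SGS-style gateway: protocol-specific
# payloads are mapped to a common observation dict. A real gateway would then
# annotate this dict with SSN ontology terms and forward it on.

def to_internal(protocol, message):
    """Normalize a protocol-specific payload into a common observation dict."""
    if protocol == "mqtt":
        # MQTT: the topic path carries the sensor id, the payload the reading
        return {"sensor": message["topic"].split("/")[-1],
                "value": float(message["payload"])}
    if protocol == "coap":
        # CoAP: the URI path identifies the observed resource
        return {"sensor": message["uri_path"].split("/")[-1],
                "value": float(message["body"])}
    raise ValueError(f"unsupported protocol: {protocol}")

obs = to_internal("mqtt", {"topic": "home/livingroom/temp1", "payload": "21.5"})
print(obs)  # {'sensor': 'temp1', 'value': 21.5}
```

Because every protocol handler emits the same internal shape, translation between any pair of protocols reduces to one inbound and one outbound mapping rather than a handler per protocol pair.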
Interpretable Predictions of Tree-based Ensembles via Actionable Feature Tweaking
Machine-learned models are often described as "black boxes". In many real-world applications, however, models may have to sacrifice predictive power in favour of human interpretability. When this is the case, feature engineering becomes a crucial task, requiring significant and time-consuming human effort. Whilst some features are inherently static, representing properties that cannot be influenced (e.g., the age of an individual), others capture characteristics that could be adjusted (e.g., the daily amount of carbohydrates consumed). Nonetheless, once a model is learned from the data, each prediction it makes on new instances is irreversible, since every instance is assumed to be a static point located in the chosen feature space. There are many circumstances, however, where it is important to understand (i) why a model outputs a certain prediction on a given instance, (ii) which adjustable features of that instance should be modified, and (iii) how to alter the prediction when the mutated instance is fed back to the model. In this paper, we present a technique that exploits the internals of a tree-based ensemble classifier to offer recommendations for transforming true negative instances into positively predicted ones. We demonstrate the validity of our approach with an online advertising application. First, we design a Random Forest classifier that effectively separates two types of ads: low-quality (negative) and high-quality (positive) ads (instances). Then, we introduce an algorithm that provides recommendations aiming to transform a low-quality ad (negative instance) into a high-quality one (positive instance). Finally, we evaluate our approach on a subset of the active inventory of a large ad network, Yahoo Gemini.
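The feature-tweaking idea can be sketched in miniature as follows. This is a toy, not the paper's algorithm: the actual method extracts the positive-leaf paths from a trained Random Forest, whereas here the paths, features, and the epsilon step are hypothetical inputs given explicitly.

```python
# Sketch: given the positive-leaf paths of a tree model (each path a list of
# (feature, op, threshold) conditions), find the tweaked instance closest to
# the original that satisfies some positive path, changing only adjustable
# features.

EPS = 0.1  # how far past a threshold to move a feature (assumption)

def satisfy(instance, path, adjustable):
    """Return a tweaked copy satisfying every condition on `path`, or None."""
    tweaked = dict(instance)
    for feature, op, thr in path:
        ok = tweaked[feature] <= thr if op == "<=" else tweaked[feature] > thr
        if not ok:
            if feature not in adjustable:
                return None  # would require changing a static feature
            tweaked[feature] = thr - EPS if op == "<=" else thr + EPS
    return tweaked

def cheapest_tweak(instance, positive_paths, adjustable):
    """Among all satisfiable positive paths, pick the smallest total change."""
    cost = lambda t: sum(abs(t[f] - instance[f]) for f in t)
    candidates = [t for p in positive_paths
                  if (t := satisfy(instance, p, adjustable)) is not None]
    return min(candidates, key=cost) if candidates else None

# Hypothetical low-quality ad and two positive-leaf paths
ad = {"keyword_density": 0.2, "load_time": 4.0, "domain_age": 1.0}
paths = [[("load_time", "<=", 2.0), ("keyword_density", ">", 0.1)],
         [("domain_age", ">", 5.0)]]
tweak = cheapest_tweak(ad, paths, adjustable={"load_time", "keyword_density"})
print(tweak)  # only load_time changes: the second path needs a static feature
```

The second path is rejected because `domain_age` is static, so the recommendation is the minimal move along the first path: reduce `load_time` just below its threshold.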
Towards the second order adaptation in the next generation remote patient management systems
Remote Patient Management (RPM) systems are expected to become increasingly important for chronic disease management, as they facilitate monitoring patients' vital signs at home and alerting caregivers when a patient's condition worsens. They also provide patients with educational content. RPM systems collect a great deal of data of different types about patients, providing an opportunity to personalize information services. In our recent work we highlighted the importance of using the available information for personalization and presented a possible next-generation RPM system that enables personalization of educational content and its delivery to patients. We introduced a generic methodology for personalization and emphasized the role of knowledge discovery (KDD). In this paper we focus on the necessity of second-order adaptation mechanisms in RPM systems to address the challenge of continuous on-line (re)learning of actionable patterns from patient data.
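The continuous on-line (re)learning the abstract calls for can be illustrated, in highly simplified form, by an alert rule that is re-estimated from a sliding window of recent readings rather than fixed once. The window size, z-score rule, and class names are illustrative assumptions, not the RPM system described in the paper, and this toy shows only first-order adaptation of a single rule.

```python
# Sketch: an alert threshold that is continuously re-learned from a sliding
# window of recent vital-sign readings, so the "pattern" (the baseline)
# adapts as new patient data arrives.

from collections import deque
from statistics import mean, stdev

class AdaptiveAlert:
    def __init__(self, window=20, z=2.0):
        self.readings = deque(maxlen=window)  # sliding window of recent values
        self.z = z

    def update(self, value):
        """Add a reading; report whether it is anomalous vs the current window."""
        alert = False
        if len(self.readings) >= 5:  # need a few readings before alerting
            mu, sd = mean(self.readings), stdev(self.readings)
            alert = sd > 0 and abs(value - mu) > self.z * sd
        self.readings.append(value)
        return alert

monitor = AdaptiveAlert()
signals = [72, 71, 73, 72, 74, 73, 72, 120]  # heart-rate stream; last value spikes
flags = [monitor.update(v) for v in signals]
print(flags[-1])  # True: the spike is flagged against the learned baseline
```

Second-order adaptation would go one step further and adjust the learning mechanism itself, e.g. the window size or the alerting rule, based on how well the first-order rule has been performing.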
Cancer Informatics for Cancer Centers (CI4CC): Building a Community Focused on Sharing Ideas and Best Practices to Improve Cancer Care and Patient Outcomes.
Cancer Informatics for Cancer Centers (CI4CC) is a grassroots, nonprofit 501(c)(3) organization intended to provide a focused national forum for the engagement of senior cancer informatics leaders, primarily aimed at academic cancer centers anywhere in the world but with a special emphasis on the 70 National Cancer Institute-funded cancer centers. Although each of the participating cancer centers is structured differently, and leaders' titles vary, we know firsthand that there are similarities in both the issues we face and the solutions we achieve. As a consortium, we have initiated a dedicated listserv, an open-initiatives program, and targeted biannual face-to-face meetings. These meetings are a place to review our priorities and initiatives, providing a forum for discussion of the strategic and pragmatic issues that we, as informatics leaders, individually face at our respective institutions and cancer centers. Here we provide a brief history of the CI4CC organization and meeting highlights from the latest CI4CC meeting, which took place in Napa, California from October 14-16, 2019. The focus of this meeting was "intersections between informatics, data science, and population science." We conclude with a discussion of "hot topics" on the horizon for cancer informatics.
Data Driven Data Mining to Domain Driven Data Mining
In the preceding decade, data mining has emerged as one of the most active areas in information technology. Traditional data mining is heavily dependent on the data itself and relies on data-oriented methodologies, so there is a universal need, in bridging the gap between academia and industry, to account for the domain-related matters surrounding real-life applications. Domain-driven data mining tries to build general principles, methodologies and techniques for modelling and reconciling wide-ranging domain-related factors, synthesizing ubiquitous intelligence around problem domains within the data mining process, and discovering knowledge that supports business decision-making.