Innovation and Employability in Knowledge Management Curriculum Design
During 2007/8, Southampton Solent University worked on a Leadership Foundation project focused on the utility of the multi-functional team approach as a vehicle for delivering innovation, in both strategic and operational terms, in higher education (HE). The Task-Orientated Multi-Functional Team Approach (TOMFTA) project took two significant undertakings at Southampton Solent as its key areas for investigation, one academic and one administrative in focus. The academic project was the development of an innovative degree programme in knowledge management (KM).
The new KM Honours degree programme is timely, both in its recognition of the increasing importance to organisations of knowledge as a commodity and in its adoption of a distinctive structure and pedagogy. The methodology for the KM curriculum design brings together student-centred and market-driven approaches: the programme is positioned around the interests of students and the requirements of employers, rather than just the capabilities of staff, while exploring ways in which courses can be delivered more flexibly, e.g. in accelerated and block modes, with level-differentiated activities, common cross-year content, and material that can be reused in short courses. To allow common content to be contextualised at multiple levels, a graduate skills strand is taught separately as part of the University’s business-facing education agenda.
The KM portfolio offers a programme of practically based courses integrating key themes in knowledge management, business, information distribution and media development. The courses develop problem-solving, communication, teamwork and other employability skills, as well as the domain skills needed to work with emerging information management technologies. The new courses are built on activities that focus on different aspects of KM, drawing on existing content as a knowledge base. This paper presents the ongoing development of the KM programme through the key aspects of its conception and design.
The use of Bayesian networks to determine software inspection process efficiency
Adherence to a defined process or standards is necessary to achieve satisfactory software quality. However, in order to judge whether practices are effective at achieving the required integrity of a software product, a measurement-based approach to the correctness of the software development is required. A defined and measurable process is a requirement for producing safe software productively. In this study the contribution of quality assurance to the software development process, and in particular the contribution that software inspections make to produce satisfactory software products, is addressed.
I have defined a new model of software inspection effectiveness, which uses a Bayesian Belief Network to combine both subjective and objective data to evaluate the probability of an effective software inspection. Its performance shows an improvement over the existing published models of inspection effectiveness. Those previous models made questionable assumptions about the distribution of errors and were essentially static: they could not make use of experience, either in terms of process improvement or of the increasing experience of the inspectors.
A sensitivity analysis of my model showed that it is consistent with the attributes that Michael Fagan considered important in his research into the software inspection method. The performance of my model shows that it is an improvement over published models and over a multiple logistic regression model formed using the same calibration data.
By applying my model of software inspection effectiveness before the inspection takes place, project managers will be able to make better use of the inspection resources available. Applying the model to data collected during the inspection will help in estimating the residual errors in a product; decisions can then be made about whether further investigation is required to identify errors. The modelling process has been used successfully in an industrial application.
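As a rough illustration of the kind of model described above, the sketch below builds a toy Bayesian network with the Python pgmpy library (an assumed tooling choice; the thesis does not prescribe an implementation). The node names, structure and probabilities are hypothetical placeholders standing in for the subjective and objective inputs the real model combines; the final query mirrors the idea of estimating the probability of an effective inspection before it takes place.

# Toy Bayesian network for inspection effectiveness (illustrative only).
# Structure and numbers are invented, not the author's calibrated model.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# One subjective input (inspector experience) and one objective input
# (preparation rate), both binary: 0 = low, 1 = high.
model = BayesianNetwork([("Experience", "Effective"),
                         ("PrepRate", "Effective")])

cpd_exp = TabularCPD("Experience", 2, [[0.4], [0.6]])
cpd_prep = TabularCPD("PrepRate", 2, [[0.5], [0.5]])
cpd_eff = TabularCPD(
    "Effective", 2,
    [[0.80, 0.55, 0.45, 0.15],   # P(not effective | Experience, PrepRate)
     [0.20, 0.45, 0.55, 0.85]],  # P(effective | Experience, PrepRate)
    evidence=["Experience", "PrepRate"], evidence_card=[2, 2])

model.add_cpds(cpd_exp, cpd_prep, cpd_eff)
assert model.check_model()

# Probability of an effective inspection, given what is known beforehand.
infer = VariableElimination(model)
print(infer.query(["Effective"], evidence={"Experience": 1, "PrepRate": 0}))

Evidence gathered during the inspection itself could be added to the query in the same way, which corresponds to the second use of the model described above.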
What Turkey expects from logistics outsourcing?
The economies of the world have become increasingly interdependent, and organizations have come under tremendous pressure to maximize productivity and profitability. Creating value through outsourcing has emerged as a popular competitive strategy for firms of all sizes in all types of industries. The aim of this research is to investigate the use of third-party logistics in Turkish companies from the users’ perspective, identifying the types of logistics services outsourced, the problems encountered in outsourcing these services, logistics costs, the decision makers in outsourcing logistics activities, and the information sources used in the decision-making process. A structured survey was selected as the tool for data collection. The field study involved face-to-face interviews with 204 of the top 500 companies, ranked by turnover, that are registered with industrial associations and chambers of commerce in Turkey. Moreover, a decision support system based on a Bayesian Causal Map is proposed for 3PLs in order to assist them in their service proposals for different sectors. This study is a first attempt to reveal and compare the outsourcing perceptions of companies in different sectors, and to expose firms’ underlying motives, as well as the respective importance of these motives, for outsourcing logistics activities in Turkey. The use of a Bayesian Causal Map based on the survey results provides an important guide for 3PL providers in choosing a suitable strategy and prioritizing their operational activities in different sectors so as to achieve a competitive advantage.
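The decision support idea rests on propagating beliefs through a causal map whose links are quantified probabilistically. The Python fragment below is a minimal, self-contained sketch of that propagation over an invented three-concept chain; the concepts, probabilities and chain structure are placeholders for illustration and are not taken from the survey.

# Belief propagation along a tiny causal chain, in the spirit of a Bayesian
# causal map. All concepts and numbers are hypothetical.
p_cost_pressure = 0.7                          # P(high cost pressure in a sector)
p_outsource_given = {True: 0.8, False: 0.3}    # P(outsource | cost pressure)
p_gain_given = {True: 0.6, False: 0.2}         # P(competitive gain | outsource)

# Law of total probability down the chain: pressure -> outsource -> gain.
p_outsource = (p_outsource_given[True] * p_cost_pressure
               + p_outsource_given[False] * (1 - p_cost_pressure))
p_gain = (p_gain_given[True] * p_outsource
          + p_gain_given[False] * (1 - p_outsource))

print(f"P(outsource) = {p_outsource:.2f}, P(competitive gain) = {p_gain:.2f}")

A full Bayesian causal map would repeat this kind of update over a larger graph elicited from the survey responses, sector by sector.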
Expert Elicitation for Reliable System Design
This paper reviews the role of expert judgement to support reliability
assessments within the systems engineering design process. Generic design
processes are described to give the context and a discussion is given about the
nature of the reliability assessments required in the different systems
engineering phases. It is argued that, as far as meeting reliability
requirements is concerned, the whole design process is more akin to a
statistical control process than to a straightforward statistical problem of
assessing an unknown distribution. This leads to features of the expert
judgement problem in the design context which are substantially different from
those seen, for example, in risk assessment. In particular, the role of experts
in problem structuring and in developing failure mitigation options is much
more prominent, and there is a need to take into account the reliability
potential for future mitigation measures downstream in the system life cycle.
An overview is given of the stakeholders typically involved in large scale
systems engineering design projects, and this is used to argue the need for
methods that expose potential judgemental biases in order to generate analyses
that can be said to provide rational consensus about uncertainties. Finally, a
number of key points are developed with the aim of moving toward a framework
that provides a holistic method for tracking reliability assessment through the
design process.
Published at http://dx.doi.org/10.1214/088342306000000510 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics. Commented on in arXiv:0708.0285, arXiv:0708.0287 and arXiv:0708.0288; rejoinder in arXiv:0708.0293.
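The paper is concerned with the role and structuring of expert judgement rather than with any single aggregation rule, but a concrete feel for combining elicited reliability judgements can be had from a standard linear opinion pool. The sketch below assumes three hypothetical experts, each giving a discrete distribution over candidate failure probabilities; the support points, distributions and weights are invented for illustration and do not come from the paper.

# Linear opinion pool over elicited failure-probability distributions.
# Experts, weights and numbers are illustrative assumptions.
import numpy as np

support = np.array([0.001, 0.005, 0.01, 0.05])   # candidate failure probabilities
expert_dists = np.array([
    [0.10, 0.30, 0.40, 0.20],   # expert A
    [0.05, 0.25, 0.50, 0.20],   # expert B
    [0.20, 0.40, 0.30, 0.10],   # expert C
])
weights = np.array([0.5, 0.3, 0.2])              # e.g. from calibration questions

pooled = weights @ expert_dists                  # weighted average of the distributions
pooled /= pooled.sum()                           # guard against rounding drift

print("pooled distribution:", pooled)
print("pooled mean failure probability:", float(pooled @ support))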
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in
building systems that learn and think like people. Many advances have come from
using deep neural networks trained end-to-end in tasks such as object
recognition, video games, and board games, achieving performance that equals or
even beats humans in some respects. Despite their biological inspiration and
performance achievements, these systems differ from human intelligence in
crucial ways. We review progress in cognitive science suggesting that truly
human-like learning and thinking machines will have to reach beyond current
engineering trends in both what they learn, and how they learn it.
Specifically, we argue that these machines should (a) build causal models of
the world that support explanation and understanding, rather than merely
solving pattern recognition problems; (b) ground learning in intuitive theories
of physics and psychology, to support and enrich the knowledge that is learned;
and (c) harness compositionality and learning-to-learn to rapidly acquire and
generalize knowledge to new tasks and situations. We suggest concrete
challenges and promising routes towards these goals that can combine the
strengths of recent neural network advances with more structured cognitive
models.
In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016): https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar
Improved risk analysis for large projects: Bayesian networks approach
Generally, risk is seen as an abstract concept which is difficult to measure. In this thesis,
we consider quantification in the broader sense by measuring risk in the context of large projects.
By improved risk measurement, it may be possible to identify and control risks in such a way that
the project is completed successfully in spite of the risks.
This thesis considers the trade-offs that may be made in project risk management,
specifically time, cost and quality. The main objective is to provide a model which addresses the
real problems and questions that project managers encounter, such as:
• If I can afford only minimal resources, how much quality is it possible to achieve?
• What resources do I need in order to achieve the highest quality possible?
• If I have limited resources and I want the highest quality, how much functionality do
I need to lose?
We propose the use of a causal risk framework that is an improvement on the traditional
modelling approaches, such as the risk register approach, and therefore contributes to better
decision making.
The approach is based on Bayesian Networks (BNs). BNs provide a framework for causal
modelling and offer a potential solution to some of the classical modelling problems. Researchers
have recently attempted to build BN models that incorporate relationships between time, cost,
quality, functionality and various process variables. This thesis analyses such BN models and as
part of a new validation study identifies their strengths and weaknesses. BNs have shown
considerable promise in addressing the aforementioned problems, but previous BN models have
not directly solved the trade-off problem. Major weaknesses are that they do not allow sensible
risk event measurement and they do not allow full trade-off analysis. The main hypothesis is that
it is possible to build BN models that overcome these limitations without compromising their
basic philosophy.
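To make the trade-off questions listed above concrete, the toy calculation below shows the two directions of query a calibrated model would support: conditioning on minimal resources to see what quality is achievable, and inverting via Bayes' rule to ask what resources high quality demands. The states and probability tables are invented placeholders, far coarser than the BN models the thesis analyses.

# Two-node toy model: Resources -> Quality. Numbers are illustrative only.
p_resources = {"minimal": 0.5, "adequate": 0.5}
p_quality_given = {
    ("minimal", "low"): 0.6, ("minimal", "high"): 0.4,
    ("adequate", "low"): 0.2, ("adequate", "high"): 0.8,
}

# "If I can afford only minimal resources, how much quality can I achieve?"
posterior_quality = {q: p_quality_given[("minimal", q)] for q in ("low", "high")}
print("P(Quality | Resources=minimal):", posterior_quality)

# "What resources do I need to achieve the highest quality possible?"
# Answered by Bayes' rule over the same tables.
p_high = sum(p_resources[r] * p_quality_given[(r, "high")] for r in p_resources)
posterior_resources = {r: p_resources[r] * p_quality_given[(r, "high")] / p_high
                       for r in p_resources}
print("P(Resources | Quality=high):", posterior_resources)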