Initiating organizational memories using ontology network analysis
One of the important problems in organizational memories is their initial set-up. It is difficult to choose the right information to include in an organizational memory, yet the right information is a prerequisite for maximizing the uptake and relevance of the memory's content. To tackle this problem, most developers adopt heavy-weight solutions that rely on faithful, continuous interaction with users to create and improve the memory's content. In this paper, we explore an automatic, light-weight solution drawn from one of the underlying ingredients of an organizational memory: ontologies. We have developed an ontology-based network analysis method, which we applied to the problem of identifying communities of practice in an organization. We use ontology-based network analysis as a means to provide content automatically for the initial set-up of an organizational memory.
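The community-identification idea above can be sketched as a small network analysis. This is an illustrative assumption, not the authors' method: here each person is annotated with ontology concepts, two people are linked when they share enough concepts, and communities of practice fall out as connected components.

```python
# Hedged sketch: identifying communities of practice from shared ontology
# concepts. The data, the min_shared threshold, and the component-based
# grouping are all illustrative assumptions, not the paper's algorithm.
from itertools import combinations

def communities(annotations, min_shared=2):
    """Link two people when they share >= min_shared ontology concepts;
    return the connected components of the resulting graph."""
    people = list(annotations)
    adj = {p: set() for p in people}
    for a, b in combinations(people, 2):
        if len(annotations[a] & annotations[b]) >= min_shared:
            adj[a].add(b)
            adj[b].add(a)
    # Connected components via depth-first search.
    seen, comps = set(), []
    for p in people:
        if p in seen:
            continue
        stack, comp = [p], set()
        while stack:
            q = stack.pop()
            if q in seen:
                continue
            seen.add(q)
            comp.add(q)
            stack.extend(adj[q] - seen)
        comps.append(comp)
    return comps

annotations = {
    "ana":  {"ontology", "memory", "network"},
    "ben":  {"ontology", "memory", "agents"},
    "carl": {"compilers", "hardware"},
}
print(communities(annotations))  # ana and ben share two concepts; carl is alone
```

A real system would extract the annotations from the ontology itself; the point of the sketch is that the analysis is light-weight enough to run automatically over existing organizational data.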
Size Matters: Microservices Research and Applications
In this chapter we offer an overview of microservices, providing the introductory information that a reader should know before continuing with this book. We introduce the idea of microservices and discuss some of the current research challenges and real-life software applications where the microservice paradigm plays a key role. We have identified a set of areas where both researchers and developers can propose new ideas and technical solutions.
Interstellar: Using Halide's Scheduling Language to Analyze DNN Accelerators
We show that DNN accelerator micro-architectures and their program mappings represent specific choices of loop order and hardware parallelism for computing the seven nested loops of DNNs, which enables us to create a formal taxonomy of all existing dense DNN accelerators. Surprisingly, the loop transformations needed to create these hardware variants can be precisely and concisely represented by Halide's scheduling language. By modifying the Halide compiler to generate hardware, we create a system that can fairly compare these prior accelerators. As long as proper loop blocking schemes are used, and the hardware can support mapping replicated loops, many different hardware dataflows yield similar energy efficiency with good performance. This is because loop blocking can ensure that most data references stay on-chip with good locality and that the processing units have high resource utilization. How resources are allocated, especially in the memory system, has a large impact on energy and performance. By optimizing hardware resource allocation while keeping throughput constant, we achieve up to 4.2X energy improvement for Convolutional Neural Networks (CNNs), and 1.6X and 1.8X improvements for Long Short-Term Memories (LSTMs) and multi-layer perceptrons (MLPs), respectively.
Comment: Published as a conference paper at ASPLOS 202
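The loop blocking (tiling) transformation the abstract credits with keeping data references on-chip can be sketched in plain Python. This is an illustrative assumption, not the paper's Halide-generated hardware: the tile size T stands in for the on-chip buffer capacity an accelerator would be sized around.

```python
# Hedged sketch: loop blocking on a matrix multiply. The tile size T is an
# illustrative parameter; in an accelerator it would be chosen so one tile
# of A, B, and C fits in the on-chip buffers.

def matmul_blocked(A, B, n, T=4):
    """n x n matrix multiply with T-sized loop blocking."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, T):                    # the outer loops select
        for jj in range(0, n, T):                # one tile at a time
            for kk in range(0, n, T):
                for i in range(ii, min(ii + T, n)):      # the inner loops
                    for j in range(jj, min(jj + T, n)):  # stay inside the
                        s = C[i][j]                      # tile, reusing
                        for k in range(kk, min(kk + T, n)):  # its data
                            s += A[i][k] * B[k][j]
                        C[i][j] = s
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
print(matmul_blocked(A, B, 2, T=1))  # → [[19.0, 22.0], [43.0, 50.0]]
```

Reordering or re-nesting these seven-deep loop structures without changing the result is exactly the kind of choice a scheduling language like Halide's makes explicit.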
From document management to knowledge management
Documents circulating in paper form are increasingly being substituted by their electronic equivalents in the modern office, so that any stored document can be retrieved whenever it is needed later on. The office worker is already burdened with information overload, so effective and efficient retrieval facilities become an important factor affecting worker productivity. The key thrust of this article is to analyse the benefits and importance of the interaction between document management and knowledge management. Information stored in text-based documents represents a valuable repository for both the individual worker and the enterprise as a whole, and it has to be tapped into as part of the knowledge generation process.
Keywords: document management, knowledge management, information and communication technologies
Major project team learning: examining building information modelling
The speed of technological advancement in software development drives the need for individual and team learning to exploit these developments for competitive advantage. Using a major long-term redevelopment as a case study, learning processes and project team learning are examined in the context of a voluntary approach to the adoption of BIM prior to 2016. The speed of adoption of BIM across a large redevelopment project covering several years is variable, and the differences in preparedness between team members from different organisations raise the question of how effective the project team can be in sharing learning and increasing the speed of adoption of BIM. The benefits of understanding the project environment as a formal learning context are recognised where teams are working in partnering arrangements, but the focus is usually on post-project review of what went wrong, with little time to critically evaluate other variables. Knowledge management has the potential to help understand, and then facilitate, greater participation amongst stakeholders in project team learning. The research team applied decision mapping and knowledge elicitation techniques to the Dundee Waterfront to identify key factors relevant to successful project management, enabling the Waterfront Project Team to understand current practice. The effectiveness of project team learning in relation to BIM within this long-term major redevelopment is influenced by positive motivational drivers for individuals to learn how to use and apply BIM, the level of organisational support for learning and professional development, and the project information and communication systems. In practice, the current approach to sharing knowledge within the project team indicates a fragmented approach to the adoption and application of BIM in managing construction projects.
Link prediction in very large directed graphs: Exploiting hierarchical properties in parallel
Link prediction is a link mining task that tries to find new edges within a given graph. Among the targets of link prediction are large directed graphs, which are frequent structures nowadays. The typical sparsity of large graphs demands high-precision predictions in order to obtain usable results, yet the size of those graphs only permits the execution of scalable algorithms. As a trade-off between those two problems, we recently proposed a link prediction algorithm for directed graphs that exploits hierarchical properties. The algorithm can be classified as a local score, which entails scalability. Unlike the rest of the local scores, our proposal assumes the existence of an underlying model for the data, which allows it to produce predictions with a higher precision. We test the validity of its hierarchical assumptions on two clearly hierarchical data sets, one of them based on RDF. Then we test it on a non-hierarchical data set based on Wikipedia to demonstrate its broad applicability. Given the computational complexity of link prediction in very large graphs, we also introduce some general recommendations useful for making link prediction an efficiently parallelizable problem.
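The locality property that makes such scores scalable can be sketched with a generic local score for directed graphs. This is an illustrative assumption, not the authors' hierarchical score: for a candidate edge u -> v it counts two-hop paths u -> w -> v, so each prediction reads only two adjacency sets.

```python
# Hedged sketch of a *local* link-prediction score for directed graphs.
# The names (out_adj, in_adj) and the common-neighbours rule are
# illustrative, not the paper's algorithm.

def local_score(out_adj, in_adj, u, v):
    """|successors(u) & predecessors(v)|: the number of paths u -> w -> v."""
    return len(out_adj.get(u, set()) & in_adj.get(v, set()))

def rank_candidates(out_adj, in_adj, candidates):
    """Order candidate edges by descending score."""
    return sorted(candidates,
                  key=lambda uv: local_score(out_adj, in_adj, *uv),
                  reverse=True)

# Tiny directed graph: a -> b, b -> c, a -> d, d -> c.
out_adj = {"a": {"b", "d"}, "b": {"c"}, "d": {"c"}}
in_adj = {"b": {"a"}, "d": {"a"}, "c": {"b", "d"}}
print(rank_candidates(out_adj, in_adj, [("b", "d"), ("a", "c")]))
# → [('a', 'c'), ('b', 'd')]: two paths support a -> c, none support b -> d
```

Because each candidate edge is scored independently from two adjacency lookups, candidates can be partitioned across workers with no shared state, which is what makes local scores straightforward to parallelize.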
Knowledge management support for enterprise distributed systems
The explosion of information and the increasing demands of semantic processing in web applications have pushed software systems to their limits. To address the problem we propose a semantics-based formal framework (ADP) that makes use of promising technologies to enable knowledge generation and retrieval. We argue that this approach is cost effective, as it reuses and builds on existing knowledge and structure. It is also a good starting point for creating an organisational memory and providing knowledge management functions.
Automatic Generation of Cognitive Theories using Genetic Programming
Cognitive neuroscience is the branch of neuroscience that studies the neural mechanisms underpinning cognition and develops theories explaining them. Within cognitive neuroscience, computational neuroscience focuses on modeling behavior, using theories expressed as computer programs. Up to now, computational theories have been formulated by neuroscientists. In this paper, we present a new approach to theory development in neuroscience: the automatic generation and testing of cognitive theories using genetic programming. Our approach evolves, from experimental data, cognitive theories that explain "the mental program" that subjects use to solve a specific task. As an example, we have focused on a typical neuroscience experiment, the delayed-match-to-sample (DMTS) task. The main goal of our approach is to develop a tool that neuroscientists can use to develop better cognitive theories.
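The generate-and-test loop the abstract describes can be sketched with a minimal genetic-programming run. Everything here is an illustrative assumption, not the authors' system: the task is symbolic regression rather than DMTS, candidate "theories" are expression trees, fitness is squared error against observed data, and evolution uses truncation selection plus subtree mutation.

```python
# Hedged sketch of genetic programming: evolve expression trees over x and
# the constant 1.0 to fit observed (x, y) data. Operators, population size,
# and mutation scheme are all illustrative choices.
import random

OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def rand_tree(depth=3):
    """Random expression tree over x and the constant 1.0."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", 1.0])
    return (random.choice(list(OPS)), rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, data):                  # lower is better
    total = 0.0
    for x, y in data:
        d = evaluate(tree, x) - y
        total += d * d
    return total

def mutate(tree):
    """Replace the current node with a fresh subtree, or descend into a child."""
    if not isinstance(tree, tuple) or random.random() < 0.5:
        return rand_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def evolve(data, pop_size=60, generations=40):
    pop = [rand_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, data))
        elite = pop[: pop_size // 4]      # truncation selection: keep best quarter
        pop = elite + [mutate(random.choice(elite))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=lambda t: fitness(t, data))

random.seed(0)
data = [(float(x), float(x * x + x)) for x in range(-3, 4)]  # "observations"
best = evolve(data)
print(best, fitness(best, data))
```

In the paper's setting the leaves and operators would be cognitive primitives and the fitness would compare a candidate program's behavior against subjects' responses in the DMTS task; the evolutionary loop itself is the same shape.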