
    Structuring visual exploratory analysis of skill demand

    The analysis of increasingly large and diverse data for meaningful interpretation and question answering is handicapped by human cognitive limitations. Consequently, semi-automatic abstraction of complex data within structured information spaces becomes increasingly important if its knowledge content is to support intuitive, exploratory discovery. Exploration of skill demand is an area where regularly updated, multi-dimensional data may be exploited to assess the capability of the workforce to manage the demands of the modern, technology- and data-driven economy. The knowledge derived may be employed by skilled practitioners in defining career pathways, to identify where, when and how to update their skillsets in line with advancing technology and changing work demands. This same knowledge may also be used to identify the combination of skills essential in recruiting for new roles. To address the challenges inherent in exploring the complex, heterogeneous, dynamic data that feeds into such applications, we investigate the use of an ontology to guide the structuring of the information space, allowing individuals and institutions to interactively explore and interpret the dynamic skill demand landscape for their specific needs. As a test case we consider the relatively new and highly dynamic field of Data Science, where insightful, exploratory data analysis and knowledge discovery are critical. We employ context-driven and task-centred scenarios to explore our research questions and to guide the iterative design, development and formative evaluation of our ontology-driven, visual exploratory discovery and analysis approach, measuring where it adds value to users' analytical activity. Our findings reinforce the potential of our approach and point to future paths to build on.
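
    A minimal sketch, in Python, of the kind of ontology-guided structuring described above; the skill hierarchy and demand counts are invented for illustration and are not taken from the study:

    # Hypothetical skill ontology and demand counts (e.g. from job postings).
    SKILL_ONTOLOGY = {
        "Data Science": {
            "Programming": ["Python", "R", "SQL"],
            "Machine Learning": ["Regression", "Clustering"],
            "Visualisation": ["Tableau", "D3"],
        }
    }
    DEMAND = {"Python": 120, "R": 45, "SQL": 90, "Regression": 60,
              "Clustering": 30, "Tableau": 25, "D3": 10}

    def rollup(node):
        """Aggregate leaf-level demand up the ontology, returning (total, tree)."""
        if isinstance(node, list):                       # leaf skills
            leaves = {skill: DEMAND.get(skill, 0) for skill in node}
            return sum(leaves.values()), leaves
        tree, total = {}, 0
        for branch, children in node.items():
            sub_total, sub_tree = rollup(children)
            tree[branch] = {"demand": sub_total, "children": sub_tree}
            total += sub_total
        return total, tree

    _, structured = rollup(SKILL_ONTOLOGY)
    # Drill down facet by facet, from capability area to individual skill:
    print(structured["Data Science"]["children"]["Programming"]["demand"])  # 255

    Structuring demand this way is what lets an interactive view roll broad capability areas up, or drill them down to individual skills.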

    The BPM ontology

    This chapter introduces the BPM ontology, which can be applied within the areas of process modelling, process engineering and process architecture: at the highest level, by providing the fundamental process concepts used to document corporate knowledge; at the lowest level, by structuring the process knowledge itself and defining its relations.
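
    As a rough illustration of the two levels described above, the following Python sketch separates fundamental process concepts from the relations that structure concrete process knowledge; the class and relation names are invented, not the chapter's actual ontology:

    from dataclasses import dataclass, field

    @dataclass
    class Activity:                  # lowest level: concrete process knowledge
        name: str
        performed_by: str            # relation to an organisational role

    @dataclass
    class Process:                   # highest level: fundamental process concept
        name: str
        activities: list[Activity] = field(default_factory=list)
        triggers: list["Process"] = field(default_factory=list)  # inter-process relation

    order = Process("Order handling", [Activity("Check stock", "Warehouse clerk")])
    billing = Process("Billing", [Activity("Issue invoice", "Accountant")])
    order.triggers.append(billing)   # corporate knowledge captured as relations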

    Building a Data Warehouse and Data Mining for a Strategic Advantage

    Technology is fundamentally changing the way companies do business. Consolidation, globalization, and deregulation have put increased pressure on managers to better understand their businesses and take them to the next level. Given today's fast-paced business environment, decision-making cycles have shortened, and managers need accurate, timely information in order to make quality decisions. A properly designed and populated data warehouse can provide the relevant data necessary to make good decisions. Significant advances in computer hardware and end-user software have made it easy to access, analyze, and display information at the desktop. The data that companies continue to collect from their current information systems provides a rich source of information about their customers and processes. Data mining programs are powerful tools that can interrogate the massive amounts of data contained in the data warehouse in order to uncover relationships. According to Kapstone (1995), to help business leaders and decision makers manage their companies effectively, companies need to make as much information as possible available and give decision makers the tools they need to explore it. By implementing a data warehouse and using data mining tools, companies can uncover relationships that can be used to achieve strategic advantages. First, I will explain data warehouses, why they are built, and how to build them. Second, I will cover data mining tools and the benefits companies are experiencing by using them. Finally, I will focus on the strategic advantages of building a data warehouse and extracting valuable data using sophisticated data mining tools.
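
    A minimal sketch of the warehouse-and-query pattern described above, using Python's built-in sqlite3; the star-schema layout and figures are invented for illustration:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE dim_customer (id INTEGER PRIMARY KEY, region TEXT);
        CREATE TABLE fact_sales (customer_id INTEGER, amount REAL);
        INSERT INTO dim_customer VALUES (1, 'North'), (2, 'South');
        INSERT INTO fact_sales VALUES (1, 120.0), (1, 80.0), (2, 45.0);
    """)

    # A typical decision-support question: revenue by region.
    query = """
        SELECT c.region, SUM(f.amount)
        FROM fact_sales f JOIN dim_customer c ON f.customer_id = c.id
        GROUP BY c.region
    """
    for region, revenue in con.execute(query):
        print(region, revenue)       # North 200.0 / South 45.0

    Data mining tools go a step beyond such predefined queries, searching the same fact and dimension tables for relationships that were not anticipated.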

    Segmentation & the Jobs-to-be-done theory: A Conceptual Approach to Explaining Product Failure

    Based on Christensen et al.'s research (2004, 2003, 2003), the jobs-to-be-done theory holds that (market) segmentation is a theory. The criticism expressed is that companies frequently build their market segments around attributes that are easy to measure, and merely observe consumers' behaviour when developing new products. The vast majority of new products fail within a short period after market entry. The jobs-to-be-done theory contends that it is more important to align R&D with the jobs consumers need to get done: jobs which facilitate their lives and for which they have historically searched for a solution. What the jobs-to-be-done theory offers is the identification of such jobs in need of solutions, which may lead to the creation of new markets or to the extension of existing ones that do not provide good-enough products. Scholars put much emphasis on the process of segmentation – targeting – positioning (STP) as an important tool for focusing organisational resources and capabilities on achieving a sustainable position in a challenging market environment. This specific theory therefore challenges what marketing theory considers an important strategy for market success. The proposition is that both approaches, the established STP strategy and the jobs-to-be-done theory, need to be carefully considered within strategic organisational decision making, especially for R&D and product strategy. While the traditional STP strategy seems salient for incremental product novelties, the jobs-to-be-done theory is suggested to offer assistance for more radical product developments. Organisations may thus face the dilemma of understanding where the borderline between incremental and more radical developments is to be drawn: where it is advantageous to rely on measuring consumer behaviour through classical market research and data, and at which point it is more promising to follow the propositions of the jobs-to-be-done theory for developing successful new products. This paper proposes that scholars' suggestions for a sound segmentation strategy be contrasted with the jobs-to-be-done theory, in the understanding that there are market needs for incremental improvements while, in parallel, different markets expect more radical solutions to get jobs done, for which existing products are not good enough. The paper concludes with propositions for framing both these macro markets and contrasting them against each other. Key words: jobs-to-be-done theory, segmentation, STP strategy, new product development.

    Applying Data Organizational Techniques to Enhance Air Force Learning

    The USAF and the DoD use traditional schoolhouses to educate and train personnel. The physical aspects of these schoolhouses limit throughput. One method to increase throughput is to shift towards an asynchronous learning environment in which students move through content at an individual pace. This research introduces a methodology for transforming a set of unstructured documents into an organized TM that students can use to orient themselves in a domain. The research identifies learning paths within the TM to create a directed KSAT. We apply this methodology in four case studies, each an education or training course. Using a graph comparison metric and the topic identification rates for the TMs, we tested a whitelisting algorithm that identifies topics with up to 81% accuracy, and leveraged a standalone LDA algorithm, as well as the same LDA algorithm augmented with ConceptNet, for topic naming. The research also produced a KSAT for all case studies and two modified KSATs. The research shows that TMs and KSATs can be created automatically with minimal user input. This methodology could help increase throughput in Air Force education and training pipelines.
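
    A minimal sketch of LDA-based topic identification of the kind the research leverages, using scikit-learn; the documents and parameters are invented, and the study's whitelisting and ConceptNet naming steps are not reproduced here:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [
        "aircraft engine maintenance inspection schedule",
        "engine inspection checklist maintenance log",
        "network security firewall intrusion detection",
        "intrusion detection network monitoring security",
    ]
    vec = CountVectorizer()
    X = vec.fit_transform(docs)                  # document-term counts
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

    # Name each topic by its three highest-weighted terms.
    terms = vec.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = [terms[i] for i in weights.argsort()[-3:][::-1]]
        print(f"topic {k}: {top}")               # e.g. maintenance vs. security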

    Role of Semantic web in the changing context of Enterprise Collaboration

    In order to compete with the global giants, enterprises are concentrating on their core competencies and collaborating with organizations that complement their skills and core activities. The current trend is to develop temporary alliances of independent enterprises, in which companies come together to share skills, core competencies and resources. However, knowledge sharing and communication among multidiscipline companies is a complex and challenging problem. In a collaborative environment, the meaning of knowledge is drastically affected by the context in which it is viewed and interpreted, necessitating the treatment of the structure as well as the semantics of the data stored in enterprise repositories. Keeping the present market and technological scenario in mind, this research aims to propose tools and techniques that can enable companies to assimilate distributed information resources and achieve their business goals.
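
    A minimal sketch of semantics-aware knowledge sharing of the kind this research targets, using rdflib; the namespaces and vocabulary terms are invented for illustration:

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import OWL, RDFS

    A = Namespace("http://enterprise-a.example/#")
    B = Namespace("http://enterprise-b.example/#")

    g = Graph()
    g.add((A.Gearbox, RDFS.label, Literal("gearbox")))
    g.add((B.Transmission, RDFS.label, Literal("transmission")))
    # The same real-world concept under two local vocabularies:
    g.add((A.Gearbox, OWL.sameAs, B.Transmission))

    # Either partner can now resolve the other's term to a shared concept.
    for s, _, o in g.triples((None, OWL.sameAs, None)):
        print(f"{s} denotes the same concept as {o}")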

    Graphical Database Architecture For Clinical Trials

    The general area of the research is health informatics. The research focuses on creating an innovative and novel solution to manage and analyze clinical trials data. It constructs a Graphical Database Architecture (GDA) for Clinical Trials (CT) using Neo4j as a robust, scalable and high-performance database. The purpose of the research project is to develop concepts and techniques based on the architecture to accelerate the processing time of clinical data navigation at lower cost. The research design uses a positivist approach to empirical research. The research is significant because it proposes a new approach to clinical trials through graph theory and designs a responsive structure for clinical data that can be deployed across the whole health informatics landscape. It uniquely contributes to the scholarly literature on Not only SQL (NoSQL) graph databases, mainly Neo4j in CT, for future research in clinical informatics. A prototype is created and examined to validate the concepts, taking advantage of Neo4j's high availability, scalability, and powerful graph query language (Cypher). This study finds that integrating search methodologies and information retrieval with the graphical database provides a solid starting point for managing, querying, and analyzing clinical trials data; furthermore, the design and development of a prototype demonstrate the conceptual model of this study. Likewise, the proposed clinical trials ontology (CTO) incorporates all data elements of a standard clinical study, facilitating a heuristic overview of the treatments, interventions, and outcome results of these studies.
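
    A minimal sketch of the kind of graph model and Cypher navigation the GDA proposes, using the official Python driver; the node labels, properties and credentials are invented, and a running Neo4j server is assumed:

    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687",
                                  auth=("neo4j", "password"))
    with driver.session() as session:
        # A trial, its intervention and its outcome as connected nodes.
        session.run("""
            MERGE (t:Trial {id: 'NCT0001'})
            MERGE (i:Intervention {name: 'Drug A'})
            MERGE (o:Outcome {measure: 'Survival'})
            MERGE (t)-[:TESTS]->(i)
            MERGE (t)-[:REPORTS]->(o)
        """)
        # Navigate the graph: which outcomes were reported for Drug A?
        result = session.run("""
            MATCH (i:Intervention {name: 'Drug A'})<-[:TESTS]-(t:Trial)
                  -[:REPORTS]->(o:Outcome)
            RETURN t.id AS trial, o.measure AS outcome
        """)
        for record in result:
            print(record["trial"], record["outcome"])
    driver.close()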

    Towards a unified methodology for supporting the integration of data sources for use in web applications

    Organisations are making increasing use of web applications and web-based systems as an integral part of providing services. Examples include personalised dynamic user content on a website, social media plug-ins and web-based mapping tools. For these types of applications to be fully functional and of maximum use to the user, they require the integration of data from multiple sources. The focus of this thesis is on improving this integration process for web applications drawing on multiple sources of data. Integration of data from multiple sources is problematic for many reasons. Current integration methods tend to be domain- and application-specific: they are often complex, have compatibility issues with different technologies, lack maturity, are difficult to re-use, and do not accommodate new and emerging models and integration technologies. Technologies to achieve integration, such as brokers and translators, do exist, but because of their domain specificity they cannot serve as a generic solution that achieves the integration outcomes required for successful web application development. Because of these difficulties, and the wide variety of integration approaches, developers need assistance in selecting the integration approach most appropriate to their needs. This thesis proposes GIWeb, a unified top-down data integration methodology instantiated with a framework that will aid developers in their integration process. It will act as a conceptual structure to support the chosen technical approach. The framework will assist in the integration of data sources to support web application builders. The thesis presents the rationale for the framework based on an examination of the range of applications, associated data sources and the range of potential solutions. The framework is evaluated using four case studies.
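
    A minimal sketch of the integration problem the thesis addresses, in Python: two hypothetical feeds with different formats and schemas are mapped into one unified model (the feeds and field names are invented; GIWeb itself is a methodology and framework, not this code):

    import csv, io, json

    json_feed = '[{"id": 1, "name": "Cafe Uno", "lat": 51.5, "lon": -0.1}]'
    csv_feed = "venue_id,venue_name,latitude,longitude\n2,Cafe Duo,51.6,-0.2\n"

    def from_json(raw):
        return [{"id": r["id"], "name": r["name"],
                 "coords": (r["lat"], r["lon"])} for r in json.loads(raw)]

    def from_csv(raw):
        return [{"id": int(r["venue_id"]), "name": r["venue_name"],
                 "coords": (float(r["latitude"]), float(r["longitude"]))}
                for r in csv.DictReader(io.StringIO(raw))]

    # One unified model for the web application, whatever the source format.
    venues = from_json(json_feed) + from_csv(csv_feed)
    for v in venues:
        print(v["name"], v["coords"])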

    Strategic business management: from planning to performance
