
    Analysis of Learning Management Systems According to a Holistic View on Corporate Education Services

    With the growing importance of services, and of knowledge-based services in particular, lifelong learning is becoming increasingly important as well. Against this background, the European Union has targeted a lifelong-learning participation rate of at least 15% of the workforce; the current value is 9.3%. The main motivation for current participants in ongoing learning is to improve career opportunities and to perform better in their jobs. Corporate education services are a good example of knowledge-based services. First, they integrate the customer in depth, both to identify specific needs and to deliver the service, and can therefore be seen as services following a service-dominant logic. Second, the sector is gaining importance due to economic as well as demographic changes. Third, corporate education services bear potential for economic growth: in 2008 the German market had a volume of 26.5 billion euros, and the aspired increase in lifelong learning leaves room for further growth. This paper therefore examines the potential of current learning management systems to support corporate education services from a holistic perspective based on Kirkpatrick's Four-Level Model. Based on this analysis, potentials for further improvement of learning-process support are derived.
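
    As a minimal sketch of the analysis perspective mentioned above: Kirkpatrick's Four-Level Model distinguishes Reaction, Learning, Behavior, and Results, and an LMS can be checked for features supporting each level. The feature names below are purely illustrative assumptions, not taken from the paper.

        # Hypothetical mapping of LMS features to Kirkpatrick's four levels.
        KIRKPATRICK_LEVELS = {
            "reaction": {"course_feedback_survey", "rating_widget"},
            "learning": {"quiz_engine", "certification_test"},
            "behavior": {"on_the_job_task_tracking", "manager_assessment"},
            "results": {"kpi_dashboard", "roi_reporting"},
        }

        def level_coverage(lms_features: set) -> dict:
            """Per level, report whether the LMS offers at least one supporting feature."""
            return {level: bool(needed & lms_features)
                    for level, needed in KIRKPATRICK_LEVELS.items()}

        print(level_coverage({"quiz_engine", "rating_widget"}))
        # -> reaction and learning covered; behavior and results not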

    Food Microstructure and Fat Content Affect Growth Morphology, Growth Kinetics, and Preferred Phase for Cell Growth of Listeria monocytogenes in Fish-Based Model Systems

    Food microstructure significantly affects microbial growth dynamics, but knowledge concerning the exact influencing mechanisms at a microscopic scale is limited. The food microstructural influence on Listeria monocytogenes (green fluorescent protein strain) growth at 10°C in fish-based food model systems was investigated by confocal laser scanning microscopy. The model systems had different microstructures, i.e., liquid, xanthan (high-viscosity liquid), aqueous gel, and emulsion and gelled emulsion systems varying in fat content. Bacteria grew as single cells, small aggregates, and microcolonies of different sizes (based on colony radii [size I, 1.5 to 5.0 μm; size II, 5.0 to 10.0 μm; size III, 10.0 to 15.0 μm; and size IV, ≥15 μm]). In the liquid, small aggregates and size I microcolonies were predominantly present, while size II and III microcolonies were predominant in the xanthan and aqueous gel. Cells in the emulsions and gelled emulsions grew in the aqueous phase and on the fat-water interface. A microbial adhesion to solvent assay demonstrated limited bacterial nonpolar solvent affinities, implying that this behavior was probably not caused by cell surface hydrophobicity. In systems containing 1 and 5% fat, the largest cell volume was mainly represented by size I and II microcolonies, while at 10 and 20% fat a few size IV microcolonies comprised nearly the total cell volume. Microscopic results (concerning, e.g., growth morphology, microcolony size, intercolony distances, and the preferred phase for growth) were related to previously obtained macroscopic growth dynamics in the model systems for an L. monocytogenes strain cocktail, leading to more substantiated explanations for the influence of food microstructural aspects on lag phase duration and growth rate. IMPORTANCE Listeria monocytogenes is one of the most hazardous foodborne pathogens due to the high fatality rate of the disease (i.e., listeriosis). In this study, the growth behavior of L. monocytogenes was investigated at a microscopic scale in food model systems that mimic processed fish products (e.g., fish paté and fish soup), and the results were related to macroscopic growth parameters. Many studies have previously focused on the food microstructural influence on microbial growth. The novelty of this work lies in (i) the microscopic investigation of products with a complex composition and/or structure using confocal laser scanning microscopy and (ii) the direct link to the macroscopic level. Growth behavior (i.e., concerning bacterial growth morphology and preferred phase for growth) was more complex than assumed in common macroscopic studies. Consequently, the effectiveness of industrial antimicrobial food preservation technologies (e.g., thermal processing) might be overestimated for certain products, which may have critical food safety implications.
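
    The size classes above are defined by colony radius, which lends itself to a simple classifier. A minimal sketch using the radii reported in the abstract; how boundary values are assigned (half-open intervals) is an assumption.

        def size_class(radius_um: float) -> str:
            """Classify a microcolony by radius in micrometers (size I: 1.5-5.0,
            II: 5.0-10.0, III: 10.0-15.0, IV: >=15); boundary handling is assumed."""
            if radius_um < 1.5:
                return "single cell / small aggregate"
            if radius_um < 5.0:
                return "size I"
            if radius_um < 10.0:
                return "size II"
            if radius_um < 15.0:
                return "size III"
            return "size IV"

        print([size_class(r) for r in (2.0, 7.5, 12.0, 18.0)])
        # ['size I', 'size II', 'size III', 'size IV']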

    Cancer classification in the genomic era: five contemporary problems

    Classification is an everyday instinct as well as a full-fledged scientific discipline. Throughout the history of medicine, disease classification is central to how we develop knowledge, make diagnoses, and assign treatment. Here, we discuss the classification of cancer and the process of categorizing cancer subtypes based on their observed clinical and biological features. Traditionally, cancer nomenclature is primarily based on organ location, e.g., “lung cancer” designates a tumor originating in lung structures. Within each organ-specific major type, finer subgroups can be defined based on patient age, cell type, histological grades, and sometimes molecular markers, e.g., hormonal receptor status in breast cancer or microsatellite instability in colorectal cancer. In the past 15+ years, high-throughput technologies have generated rich new data regarding somatic variations in DNA, RNA, protein, or epigenomic features for many cancers. These data, collected for increasingly large tumor cohorts, have provided not only new insights into the biological diversity of human cancers but also exciting opportunities to discover previously unrecognized cancer subtypes. Meanwhile, the unprecedented volume and complexity of these data pose significant challenges for biostatisticians, cancer biologists, and clinicians alike. Here, we review five related issues that represent contemporary problems in cancer taxonomy and interpretation. (1) How many cancer subtypes are there? (2) How can we evaluate the robustness of a new classification system? (3) How are classification systems affected by intratumor heterogeneity and tumor evolution? (4) How should we interpret cancer subtypes? (5) Can multiple classification systems co-exist? While related issues have existed for a long time, we will focus on those aspects that have been magnified by the recent influx of complex multi-omics data. Exploration of these problems is essential for data-driven refinement of cancer classification and the successful application of these concepts in precision medicine.
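
    Question (2) above, evaluating the robustness of a new classification system, is often approached by resampling: recluster bootstrap samples and measure how well the labels agree with a reference clustering. The sketch below shows one generic way to do this (not the authors' method), using k-means and the adjusted Rand index.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import adjusted_rand_score

        def cluster_stability(X: np.ndarray, k: int, n_boot: int = 20, seed: int = 0) -> float:
            """Mean agreement between a reference k-means clustering and clusterings of
            bootstrap resamples; higher values suggest a more robust subtype count k."""
            rng = np.random.default_rng(seed)
            reference = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
            scores = []
            for _ in range(n_boot):
                idx = rng.choice(len(X), size=len(X), replace=True)
                labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X[idx])
                scores.append(adjusted_rand_score(reference[idx], labels))
            return float(np.mean(scores))

        # Toy example on synthetic "omics" features.
        X = np.random.default_rng(1).normal(size=(200, 50))
        print(cluster_stability(X, k=3))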

    Practical Systems for Personal Thermal Comfort

    Conventional centralized HVAC systems cannot provide office workers with personalized thermal comfort because workers in a single zone share a common air handling unit and thus a single air temperature. Moreover, they heat or cool an entire zone even if only a single worker is present, which can waste energy. Both drawbacks are addressed by Personal Environmental Control (PEC) systems that modify the thermal envelope around a worker’s body to provide personalized comfort. However, most PEC systems are both expensive and difficult to deploy, making them unsuitable for large-scale deployment. In contrast, we present two novel PEC systems: SPOTlight and its successor OpenTherm. These systems are carefully designed for practical, rapid, and scalable deployment. Intuitive web-based interfaces for user controls allow OpenTherm to be installed in only about 15 minutes, including user training. It is also low-cost (as low as US$80, in volume) because it uses the fewest possible sensors and a lightweight compute engine that can optionally be located in the cloud. In this thesis, we present the detailed design of the SPOTlight and OpenTherm systems, and results from a cumulative 81 months of OpenTherm’s operation in 15 offices. In our objective evaluation, we find that OpenTherm improved user comfort by ∼67% compared with using only the central HVAC system, which has no knowledge of occupancy and cannot control offices individually.
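
    OpenTherm's internal control logic is not detailed in this abstract; the sketch below only illustrates the general idea of occupancy-aware personal conditioning with a per-user setpoint and a deadband. The sensor fields and the simple on/off actuation are assumptions for illustration.

        from dataclasses import dataclass

        @dataclass
        class OfficeState:
            occupied: bool          # from an occupancy sensor
            air_temp_c: float       # from a temperature sensor
            user_setpoint_c: float  # the worker's preferred temperature

        def heater_command(state: OfficeState, deadband_c: float = 0.5) -> bool:
            """Return True to switch a personal heater on, False otherwise."""
            if not state.occupied:
                return False  # never condition an empty office
            if state.air_temp_c < state.user_setpoint_c - deadband_c:
                return True   # below the comfort band: heat
            return False      # within or above the band: hold off

        print(heater_command(OfficeState(occupied=True, air_temp_c=20.1, user_setpoint_c=22.0)))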

    Clustering of reads with alignment-free measures and quality values

    BACKGROUND: The data volume generated by Next-Generation Sequencing (NGS) technologies is growing at a pace that is now challenging the storage and data processing capacities of modern computer systems. In this context, an important aspect is the reduction of data complexity by collapsing redundant reads into a single cluster to improve the run time, memory requirements, and quality of post-processing steps like assembly and error correction. Several alignment-free measures, based on k-mer counts, have been used to cluster reads. Quality scores produced by NGS platforms are fundamental for various analyses of NGS data, such as read mapping and error detection. Moreover, future-generation sequencing platforms will produce long reads but with a large number of erroneous bases (up to 15%). RESULTS: In this scenario, it will be fundamental to exploit quality value information within the alignment-free framework. To the best of our knowledge, this is the first study that incorporates quality value information and k-mer counts, in the context of alignment-free measures, for the comparison of read data. Based on these principles, in this paper we present a family of alignment-free measures called D(q)-type. A set of experiments on simulated and real read data confirms that the new measures are superior to other classical alignment-free statistics, especially when erroneous reads are considered. Results on de novo assembly and metagenomic read classification also show that the introduction of quality values improves over standard alignment-free measures. These statistics are implemented in a software tool called QCluster (http://www.dei.unipd.it/~ciompin/main/qcluster.html).
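
    The exact D(q)-type definitions are given in the paper; the sketch below only illustrates the underlying idea, and the specific formula is an assumption rather than the paper's: weight each k-mer occurrence by its probability of being error-free (derived from Phred quality scores) and compare reads through an inner product of the weighted counts, in the spirit of D2-style statistics.

        import math
        from collections import Counter

        def weighted_kmer_counts(seq: str, quals: list, k: int) -> Counter:
            """k-mer counts where each occurrence is weighted by the probability that
            all k bases are correct, from Phred scores Q = -10*log10(p_err)."""
            counts = Counter()
            for i in range(len(seq) - k + 1):
                p_correct = math.prod(1.0 - 10 ** (-q / 10.0) for q in quals[i:i + k])
                counts[seq[i:i + k]] += p_correct
            return counts

        def d2_like(seq1, quals1, seq2, quals2, k=4):
            c1 = weighted_kmer_counts(seq1, quals1, k)
            c2 = weighted_kmer_counts(seq2, quals2, k)
            return sum(c1[w] * c2[w] for w in c1.keys() & c2.keys())

        print(d2_like("ACGTACGTAC", [30] * 10, "ACGTACGAAC", [20] * 10))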

    A Boxology of Design Patterns for Hybrid Learning and Reasoning Systems

    We propose a set of compositional design patterns to describe a large variety of systems that combine statistical techniques from machine learning with symbolic techniques from knowledge representation. As in other areas of computer science (knowledge engineering, software engineering, ontology engineering, process mining and others), such design patterns help to systematize the literature, clarify which combinations of techniques serve which purposes, and encourage re-use of software components. We have validated our set of compositional design patterns against a large body of recent literature.
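
    The paper presents its patterns diagrammatically (a "boxology"); as a minimal code-level sketch of one common composition, the snippet below wires a statistical classifier's symbolic output into a rule-based reasoner. The components and rules are toy assumptions, not the paper's notation.

        from typing import Callable

        def compose(perceive: Callable, reason: Callable) -> Callable:
            """Compose an ML perception component with a symbolic reasoning component."""
            return lambda raw: reason(perceive(raw))

        # Toy stand-ins: a "classifier" and a tiny rule base.
        classify = lambda raw: "cat" if b"whiskers" in raw else "unknown"
        RULES = {"cat": "mammal"}
        infer = lambda symbol: RULES.get(symbol, "no-conclusion")

        pipeline = compose(classify, infer)
        print(pipeline(b"...whiskers..."))  # -> "mammal"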

    Analysis and design of multiagent systems using MAS-CommonKADS

    This article proposes an agent-oriented methodology called MAS-CommonKADS and develops a case study. The methodology extends the knowledge engineering methodology CommonKADS with techniques from object-oriented and protocol engineering methodologies. It consists of the development of seven models: the Agent Model, which describes the characteristics of each agent; the Task Model, which describes the tasks that the agents carry out; the Expertise Model, which describes the knowledge the agents need to achieve their goals; the Organisation Model, which describes the structural relationships between agents (software agents and/or human agents); the Coordination Model, which describes the dynamic relationships between software agents; the Communication Model, which describes the dynamic relationships between human agents and their respective personal assistant software agents; and the Design Model, which refines the previous models and determines the most suitable agent architecture for each agent, as well as the requirements of the agent network.
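
    A lightweight sketch of how the seven MAS-CommonKADS models listed above could be captured as plain records; the field names and example values are illustrative assumptions, not the methodology's formal notation.

        from dataclasses import dataclass, field

        @dataclass
        class Agent:                                       # Agent Model
            name: str
            tasks: list = field(default_factory=list)       # Task Model entries
            expertise: list = field(default_factory=list)   # Expertise Model entries

        @dataclass
        class MASSpecification:
            agents: list         # Agent Model instances
            organisation: dict   # structural relationships between agents
            coordination: list   # (sender, receiver, protocol) between software agents
            communication: list  # (human agent, personal assistant agent) pairs
            design: dict         # agent name -> chosen architecture

        spec = MASSpecification(
            agents=[Agent("assistant", tasks=["schedule meeting"], expertise=["calendars"])],
            organisation={"assistant": ["user"]},
            coordination=[("assistant", "broker", "contract-net")],
            communication=[("user", "assistant")],
            design={"assistant": "BDI"},
        )
        print(spec.design["assistant"])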