16 research outputs found

    Intelligent distributed process monitoring and management system

    Monitoring systems are an important tool in efforts to improve productivity and quality, reduce waste and enhance safety in manufacturing. Modern technologies, including electronic devices, communication technology, the Internet, database systems and modern computer technology, can provide flexible, affordable, attractive and efficient solutions for implementing distributed and intelligent monitoring systems. A new generation of microcontrollers offers a high level of device integration and operates at low power, making them an ideal choice for many embedded industrial applications. However, the development of application software for microcontroller-based implementations has normally been a restrictive factor; before this work, this had resulted in most process and condition monitoring systems being PC based. This research presents an intelligent and distributed monitoring system based on microcontroller technology, specifically the PIC18C452. The system uses a flexible architecture that can be adapted to the needs of different monitoring applications, built from "Monitoring Modules" that can be deployed according to the application requirements. Industrial networks and Internet technologies are employed to enhance communication, allowing monitoring records to be made available in a remote database. The Petri-net concept is used to represent the monitoring task in a way that provides independence from the system's hardware and software. Extensions to the original Petri-net theory and new modelling elements, including the acquisition of analogue signals, required to support the use of this method in a microcontroller-based environment, are presented; these enhancements represent a major contribution of this research. Finally, the benefits of the system are considered by means of three application examples: a simple Press Rig to illustrate the general features and use of the system, a more complicated Assembly Process Rig to show the flexibility of the modelling approach, and a CNC Milling Machine tool changer to demonstrate the system in a real manufacturing application.
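    To make the modelling idea concrete, the sketch below shows a minimal place/transition executor in C in which one transition carries an analogue-signal guard, loosely mirroring the kind of extension described above. The structures, the read_adc() helper and the threshold guard are illustrative assumptions for this sketch, not the thesis's actual data model or firmware.

```c
/* Minimal Petri-net executor sketch for a monitoring task.
 * Illustrative only: the structures and the analogue guard are assumed. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_IN  2
#define MAX_OUT 2

typedef struct {
    uint8_t tokens;             /* marking of this place */
} Place;

typedef struct {
    Place   *in[MAX_IN];        /* input places */
    Place   *out[MAX_OUT];      /* output places */
    int      n_in, n_out;
    int      adc_channel;       /* -1 if no analogue guard */
    uint16_t threshold;         /* fire only if reading exceeds threshold */
} Transition;

/* Stand-in for a microcontroller ADC read (hypothetical helper). */
static uint16_t read_adc(int channel) { (void)channel; return 512; }

static bool enabled(const Transition *t) {
    for (int i = 0; i < t->n_in; i++)
        if (t->in[i]->tokens == 0) return false;
    /* analogue-signal extension: guard the transition on a sensor reading */
    if (t->adc_channel >= 0 && read_adc(t->adc_channel) <= t->threshold)
        return false;
    return true;
}

static void fire(Transition *t) {
    for (int i = 0; i < t->n_in; i++)  t->in[i]->tokens--;
    for (int i = 0; i < t->n_out; i++) t->out[i]->tokens++;
}

int main(void) {
    Place idle = {1}, alarm = {0};
    /* One transition: leave "idle" and raise "alarm" when the sensor exceeds 500. */
    Transition over_pressure = {
        .in = {&idle}, .out = {&alarm}, .n_in = 1, .n_out = 1,
        .adc_channel = 0, .threshold = 500
    };
    if (enabled(&over_pressure)) fire(&over_pressure);
    printf("alarm tokens: %d\n", (int)alarm.tokens);
    return 0;
}
```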

    Knowledge sharing framework for sustainability of knowledge capital

    Knowledge sharing is one of the most critical elements in a knowledge-based society. With the heavy concentration on communication facilities, there is a major shift in worldwide access to codified knowledge. Although communication technologies have made great strides in developing instruments for accessing required knowledge and improving the level of knowledge sharing, there are still many obstacles that diminish the effectiveness of knowledge sharing in an organization or a community. The current challenges include: identifying the most important variables in knowledge sharing, developing an effective knowledge sharing measurement model, developing an effective mechanism for knowledge sharing reporting, and calculating the knowledge capital that can be created by knowledge sharing. The ability and willingness of individuals to share both their codified and uncodified knowledge have emerged as significant variables in knowledge sharing in an environment where all people have access to communication instruments and can choose either to share their own knowledge or to keep it to themselves. This thesis addresses knowledge sharing variables and identifies the key variables as: willingness to share or gain knowledge, ability to share or gain knowledge, and the complexity or transferability of the shared knowledge. Different mechanisms are used to measure these key variables. Trust mechanisms are used to measure the willingness and ability of individuals to share or acquire knowledge: by using trust mechanisms, one can rate the behavior of the parties engaged in knowledge sharing and subsequently assign a value to the willingness and ability of individuals to share or obtain knowledge. Ontology mechanisms are used to measure the complexity and transferability of a particular piece of knowledge in the knowledge sharing process: the level of similarity between sender and receiver ontologies is used to measure the transferability of knowledge between sender and receiver, while the ontology structure is used to measure the complexity of the knowledge transmitted between the knowledge sharing parties. A knowledge sharing framework provides a measurement model for calculating knowledge sharing levels based on these trust and ontology mechanisms. It calculates knowledge sharing levels numerically and also uses a Business Intelligence Simulation Model (BISIM) to simulate a community and report the knowledge sharing level between members of the simulated community. The simulated model is able to calculate and report the knowledge sharing and knowledge acquisition levels of each member, in addition to the total knowledge sharing level in the community. Finally, in order to determine the advantages of knowledge sharing for a community, the capital that can be created by knowledge sharing is calculated using intellectual capital measurement mechanisms. The created capital is based on knowledge and is related to the role of knowledge sharing in increasing the embedded knowledge of individuals (human capital), improving connections, and embedding knowledge within connections (social capital). Market components (such as customers) also play a major role in business, and knowledge sharing improves the embedded knowledge within market components, which is defined as market capital in this thesis. All these categories of intellectual capital are measured and reported in the knowledge sharing framework.
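    As a rough illustration of how a measurement model could combine the three key variables, the sketch below multiplies trust-derived willingness and ability scores with an ontology-similarity term. The product formula and the Jaccard overlap used as a stand-in for ontology similarity are assumptions made for illustration only, not the framework's actual equations.

```c
/* Sketch of a knowledge-sharing score from willingness, ability and
 * transferability. Weighting scheme and similarity measure are assumed. */
#include <stdio.h>

/* Jaccard overlap of two small concept-ID sets, used here as a crude
 * proxy for sender/receiver ontology similarity (transferability). */
static double jaccard(const int *a, int na, const int *b, int nb) {
    int inter = 0;
    for (int i = 0; i < na; i++)
        for (int j = 0; j < nb; j++)
            if (a[i] == b[j]) { inter++; break; }
    int uni = na + nb - inter;
    return uni ? (double)inter / uni : 0.0;
}

/* Combine willingness and ability (trust-derived, 0..1) with
 * transferability (ontology-derived, 0..1); a simple product model. */
static double sharing_level(double willingness, double ability,
                            double transferability) {
    return willingness * ability * transferability;
}

int main(void) {
    int sender_concepts[]   = {1, 2, 3, 4};
    int receiver_concepts[] = {2, 3, 5};
    double transfer = jaccard(sender_concepts, 4, receiver_concepts, 3);
    printf("sharing level: %.2f\n", sharing_level(0.8, 0.9, transfer));
    return 0;
}
```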

    Automatically Parallelizing Embedded Legacy Software on Soft-Core SoCs

    Nowadays, embedded systems are utilized in many areas and have become omnipresent, making people's lives more comfortable. Embedded systems have to handle more and more functionality in many products. To maintain the often required low energy consumption, multi-core systems provide high performance at moderate energy consumption. The development started with dual-core processors and has today reached many-core designs with dozens or hundreds of processor cores. However, existing applications can barely leverage the potential of that many cores. Legacy applications are usually written sequentially and thus typically use only one processor core, so they do not benefit from the advantages provided by modern many-core systems. Rewriting those applications to use multiple cores requires new skills from developers and is also time-consuming and error-prone. Dozens of languages, APIs and compilers have been presented in the past decades to aid the user in parallelizing applications. Fully automatic parallelizing compilers are seen as the holy grail, since the user effort is kept minimal; however, automatic parallelizers often cannot extract parallelism as well as user-aided approaches. Most of these parallelization tools are designed for desktop and high-performance systems and are thus not tuned for, or applicable to, low-performance embedded systems. To improve this situation, this work presents an automatic parallelizer for embedded systems which in most cases delivers better quality than user-aided approaches and, where it does not, allows easy manual fine-tuning. Parallelization tools extract concurrently executable tasks from an application; these tasks can then be executed on different processor cores. Parallelization tools, and automatic parallelizers in particular, often struggle to efficiently map the extracted parallelism to an existing multi-core processor. This work uses soft-core processors on FPGAs, which makes it possible to realize custom multi-core designs in hardware within a few minutes and to adapt the multi-core processor to the characteristics of the extracted parallelism. In particular, core interconnects for communication can be optimized to fit the communication pattern of the parallel application. Embedded applications are often structured as follows: receive input data, (multiple) data processing steps, data output. The processing steps are often realized as consecutive, loosely coupled transformations, which already model the structure of a processing pipeline. The goal of this work is to extract this kind of pipeline parallelism from an application and map it to multiple cores to increase the overall throughput of the system; multiple cores forming a chain with direct communication channels ideally fit this pattern. This so-called pipeline parallelism is a barely addressed concept in most parallelization tools, and current multi-core designs often do not offer the hardware flexibility provided by the soft-cores targeted in this approach. The main contribution of this work is an automatic parallelizer which is able to map different processing steps from the source code of a sequential application to different cores in a multi-core pipeline. Users only specify the required processing speed after parallelization, and the developed tool tries to extract a matching parallelized software design, along with a custom multi-core design, from the sequential embedded legacy application. The automatically created multi-core system already contains the used peripherals extracted from the source code and is ready to be used. The presented parallelizer implements multi-objective optimization to generate a minimal hardware design that just fulfills the user-defined requirement. To the best of my knowledge, the possibility to generate such a multi-core pipeline defined by the demands of the parallelized software has never been presented before. The approach is implemented for two soft-core processors, and evaluation shows high speedups of 12x and above for both targets at a reasonable hardware overhead. Compared to other automatic parallelizers, which mainly focus on speedups through latency reduction, significantly higher speedups can be achieved depending on the given application structure.
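    The sketch below illustrates the kind of pipeline decomposition such a parallelizer targets: a sequential acquire-process-output loop split into two stages connected by a blocking single-slot channel, one stage per core. The channel and stage functions are illustrative assumptions; on a generated soft-core system the stages would communicate over dedicated hardware core-interconnects rather than a mutex-guarded buffer, and the split would be derived automatically rather than written by hand.

```c
/* Hand-written sketch of a two-stage software pipeline (producer -> consumer). */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int value;
    bool full, done;
    pthread_mutex_t m;
    pthread_cond_t  cv;
} Channel;

static void put(Channel *c, int v, bool done) {
    pthread_mutex_lock(&c->m);
    while (c->full) pthread_cond_wait(&c->cv, &c->m);
    c->value = v; c->done = done; c->full = true;
    pthread_cond_broadcast(&c->cv);
    pthread_mutex_unlock(&c->m);
}

static bool get(Channel *c, int *v) {
    pthread_mutex_lock(&c->m);
    while (!c->full) pthread_cond_wait(&c->cv, &c->m);
    *v = c->value; c->full = false;
    bool done = c->done;
    pthread_cond_broadcast(&c->cv);
    pthread_mutex_unlock(&c->m);
    return !done;
}

static Channel ch = { .m = PTHREAD_MUTEX_INITIALIZER,
                      .cv = PTHREAD_COND_INITIALIZER };

/* Stage 1: acquire input data (here just a counter). */
static void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < 10; i++) put(&ch, i, false);
    put(&ch, 0, true);                 /* signal end of stream */
    return NULL;
}

/* Stage 2: process and output; on the target this stage would run on a
 * second core fed by a hardware FIFO instead of this shared buffer. */
static void *consumer(void *arg) {
    (void)arg;
    int v;
    while (get(&ch, &v)) printf("%d\n", v * v);
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```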

    The drivers of Corporate Social Responsibility in the supply chain. A case study.

    Purpose: The paper studies the way in which an SME integrates CSR into its corporate strategy, the practices it puts in place, and how its CSR strategies are reflected in its relations with suppliers and customers. Methodology/Research limitations: A qualitative case study methodology is used. The use of a single case study limits the generalizability of the findings. Findings: The entrepreneur's ethical beliefs and value system play a fundamental role in shaping sustainable corporate strategy. Furthermore, the type of competitive strategy selected, based on innovation, quality and responsibility, clearly emerges both in well-defined management procedures and in supply chain relations as a whole, aimed at involving partners in the process of sustainable innovation. Originality/value: The paper presents an SME that has devised an original, innovative business model. The study pivots on the issues of innovation and eco-sustainability in the context of the drivers of CSR and business ethics. These values are considered fundamental at the international level; the United Nations declared 2011 the "International Year of Forests".

    University catalog, 2019-2020
