
    Introducing semi-open learning/teaching into fundamental programming subjects

    Due to the development of Internet applications, semi-open learning is increasingly being introduced into traditional face-to-face learning and teaching. In several engineering degrees many subjects are developed using a project-based learning paradigm, so the introduction of semi-open elements seems quite natural for these subjects; this is not the case, however, for fundamental programming subjects, which are still taught using a traditional blackboard approach and for which there is a clear lack of experience in introducing both project-based learning and semi-open learning and teaching approaches. In this work we report on the two-year experience gained during the innovative teaching project “Semi-open learning through sharing of information and knowledge in a virtual environment” at the Industrial School of Terrassa, Polytechnic University of Catalonia. The aim of the project was to develop and evaluate a methodology that would allow the introduction of semi-open learning into fundamental programming subjects using web applications. We present here the main aspects of this methodology as well as the issues we faced during its implementation and evaluation in a real learning/teaching environment. Our approach was implemented using the Basic Support for Collaborative Work (BSCW) system, although it is independent of the web application used. An additional feature of BSCW we explored is the log information on students’ actions kept by the BSCW server. We use ad hoc software that processes the log files and stores the information in a database, which can then be used for statistical analysis. The information resulting from log-file analysis is a very helpful tool for teachers to monitor students’ activity during the course and to intervene whenever necessary, for instance to detect low-activity students and prevent abandonment. We discuss the benefits of our approach in improving the overall learning outcomes of the students, as well as its drawbacks, especially the additional workload it may imply for teachers.
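    A minimal sketch of the kind of log-processing step described above, assuming a simple semicolon-separated log line per action (user;timestamp;action) and an SQLite database; the actual BSCW log format, schema and activity threshold used in the project are not given here and are assumptions:

    # Hypothetical sketch: load BSCW-style action logs into SQLite and flag
    # low-activity students. The log format (user;timestamp;action) and the
    # minimum-action threshold are illustrative assumptions.
    import sqlite3

    def load_log(db_path: str, log_path: str) -> None:
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS actions (user TEXT, ts TEXT, action TEXT)")
        with open(log_path, encoding="utf-8") as f:
            rows = [tuple(line.strip().split(";", 2)) for line in f if line.strip()]
        con.executemany("INSERT INTO actions VALUES (?, ?, ?)", rows)
        con.commit()
        con.close()

    def low_activity_students(db_path: str, min_actions: int = 10) -> list[str]:
        con = sqlite3.connect(db_path)
        cur = con.execute(
            "SELECT user, COUNT(*) AS n FROM actions GROUP BY user HAVING n < ?",
            (min_actions,),
        )
        users = [user for user, _ in cur.fetchall()]
        con.close()
        return users

    if __name__ == "__main__":
        load_log("activity.db", "bscw_actions.log")   # assumed log file name
        print("Students to follow up with:", low_activity_students("activity.db"))

    A query like the one above is the kind of statistical summary a teacher could review periodically to spot low-activity students early in the course.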

    An Evaluation of the New York State Workers’ Compensation Pilot Program for Alternative Dispute Resolution

    In 1995, the State of New York enacted legislation authorizing the establishment of a workers' compensation alternative dispute resolution pilot program for the unionized sector of the construction industry. Collective bargaining agreements could establish an alternative dispute resolution process for resolving claims (including but not limited to mediation and arbitration), use of an agreed managed care organization or list of authorized providers for medical treatment that constitutes the exclusive source of all medical and related treatment, supplemental benefits, return-to-work programs, and vocational rehabilitation programs. The legislation also directed the School of Industrial and Labor Relations at Cornell University (ILR) to evaluate compliance with state and federal due process requirements provided in the collective bargaining agreements authorized by this act, and the use, costs and merits of the alternative dispute resolution system established pursuant to this act. In response to this legislative mandate, ILR reviewed the research previously conducted on alternative dispute resolution (ADR), generally and in workers' compensation. This included examining the purported advantages and disadvantages of ADR, the prevalence of ADR, and published statistical or anecdotal evidence regarding the impact of ADR. ILR created a research design for claimant-level and project-level analyses, and developed data collection instruments for these analyses that included an injured worker survey for ADR claimants and claimants in the traditional (statutory) workers' compensation system, an Ombudsman's log, a manual of data elements pertaining to ADR and comparison group claimants, and interview questions for ADR signatories and other officials. The findings in this report draw upon a comparison of claimant-level, descriptive statistics (averages) for injured workers in the ADR and traditional (statutory) workers' compensation systems; the results of more sophisticated statistical analyses of claimant-level data; and project-level information (including, but not limited to, interviews with ADR signatories and dispute resolution officials).

    Automated knowledge capture in 2D and 3D design environments

    In Life Cycle Engineering, it is vital that the engineering knowledge for a product is captured throughout its life cycle in a formal and structured manner. This allows the information to be referred to in the future by engineers who did not work on the original design but want to understand the reasons that certain design decisions were made. In the past, attempts were made to capture this knowledge by having the engineer record it manually during a design session. However, this is not only time-consuming but also disruptive to the creative process. Therefore, the research presented in this paper is concerned with capturing design knowledge automatically in both a traditional 2D design environment and an immersive 3D design environment. The design knowledge is captured by continuously and non-intrusively logging the user during a design session and storing the output in a structured eXtensible Markup Language (XML) format. The XML data is then analysed, and the design processes involved can be visualised through the automatic generation of IDEF0 diagrams. This captured knowledge forms the basis of an interactive online assistance system to aid future users carrying out a similar design task.
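    As an illustration of the kind of structured XML logging the abstract describes, the sketch below appends design-session events to an XML document; the element and attribute names (session, event, tool, target) are assumptions for illustration, since the paper's actual schema is not given here:

    # Hypothetical sketch: non-intrusively append design-session events to a
    # structured XML log. Element and attribute names are illustrative only.
    import xml.etree.ElementTree as ET
    from datetime import datetime, timezone

    class SessionLogger:
        def __init__(self, user: str):
            self.root = ET.Element("session", user=user)

        def log_event(self, tool: str, target: str, detail: str = "") -> None:
            ET.SubElement(
                self.root,
                "event",
                tool=tool,
                target=target,
                time=datetime.now(timezone.utc).isoformat(),
            ).text = detail

        def save(self, path: str) -> None:
            ET.ElementTree(self.root).write(path, encoding="utf-8", xml_declaration=True)

    logger = SessionLogger(user="engineer_01")
    logger.log_event(tool="extrude", target="bracket_body", detail="depth=25mm")
    logger.log_event(tool="fillet", target="edge_12", detail="radius=3mm")
    logger.save("design_session.xml")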

    The use of non-intrusive user logging to capture engineering rationale, knowledge and intent during the product life cycle

    Within the context of Life Cycle Engineering it is important that structured engineering information and knowledge are captured at all phases of the product life cycle for future reference. This is especially the case for long life cycle projects, in which a large number of engineering decisions made in the early to mid-stages of a product's life cycle are needed to inform engineering decisions later in the process. A key aspect of technology management is therefore the capture of knowledge throughout the product life cycle. Numerous attempts have been made to apply knowledge capture techniques to formalise engineering decision rationale and processes; however, these tend to impose substantial overheads on the engineer and the company through cognitive process interruptions and additional costs and time. Indeed, as life cycle deadlines approach, these capturing techniques are often abandoned due to the need to produce a final solution. This paper describes work carried out to non-intrusively capture and formalise product life cycle knowledge by demonstrating the automated capture of engineering processes and rationale using user logging via an immersive virtual reality system for cable harness design and assembly planning. Associated post-experimental analyses are described which demonstrate the formalisation of structured design processes and decision representations in the form of IDEF diagrams and structured engineering change information. Potential future research directions involving more thorough logging of users are also outlined.
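    One very simplified way to picture the formalisation step, assuming logged events have already been captured as records with a tool and a target: collapse consecutive events into IDEF0-style activity records. The grouping rule and the record fields below are assumptions for illustration, not the paper's actual method.

    # Hypothetical sketch: collapse a logged event stream into IDEF0-style
    # activity records (activity name, inputs, outputs). Grouping rule and
    # fields are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class Activity:
        name: str
        inputs: list[str] = field(default_factory=list)
        outputs: list[str] = field(default_factory=list)

    def events_to_activities(events: list[dict]) -> list[Activity]:
        """Group consecutive events that use the same tool into one activity."""
        activities: list[Activity] = []
        for ev in events:
            if not activities or activities[-1].name != ev["tool"]:
                activities.append(Activity(name=ev["tool"]))
            activities[-1].inputs.append(ev["target"])
            activities[-1].outputs.append(ev.get("result", ev["target"]))
        return activities

    events = [
        {"tool": "route_cable", "target": "harness_A", "result": "segment_1"},
        {"tool": "route_cable", "target": "harness_A", "result": "segment_2"},
        {"tool": "add_clip", "target": "segment_2", "result": "clip_5"},
    ]
    for act in events_to_activities(events):
        print(act)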

    Overview of the personalized and collaborative information retrieval (PIR) track at FIRE-2011

    The Personalized and collaborative Information Retrieval (PIR) track at FIRE 2011 was organized with the aim of extending standard information retrieval (IR) ad hoc test collection design to facilitate research on personalized and collaborative IR by collecting additional meta-information during the topic (query) development process. A controlled query generation process through task-based activities with activity logging was used for each topic developer to construct the final list of topics. The standard ad hoc collection is thus accompanied by a new set of thematically related topics and the associated log information. We believe this better simulates a real-world search scenario and encourages mining user information from the logs to improve IR effectiveness. A set of 25 TREC-formatted topics and the associated metadata of activity logs were released for the participants to use. In this paper we describe the data construction phase in detail and also outline two simple ways of using the additional information from the logs to improve retrieval effectiveness.
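    As a rough idea of how the released activity logs might be exploited, the sketch below expands a topic's query with frequent terms mined from the developer's log text; the log format, stopword list and top-k expansion rule are assumptions, not the track's prescribed method.

    # Hypothetical sketch: query expansion using terms mined from an activity log.
    from collections import Counter
    import re

    STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "for", "on", "is"}

    def expand_query(query: str, log_text: str, k: int = 5) -> str:
        terms = re.findall(r"[a-z]+", log_text.lower())
        counts = Counter(t for t in terms if t not in STOPWORDS and len(t) > 2)
        expansion = [t for t, _ in counts.most_common(k) if t not in query.lower()]
        return query + " " + " ".join(expansion)

    log = "searched flight schedules kolkata delhi; clicked airline timetable page; noted fares"
    print(expand_query("flights from Kolkata to Delhi", log))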

    Improvised design of grease trap for the usage at the food premises

    In Malaysia, many pollutants are emitted by industry. Water pollution can be caused by various sectors, one of which is the food service industry. The food service industry in Malaysia is growing every day and is a major contributor to the pollution caused by the fats, oils, and grease discharged in large quantities from food premises. Grease traps are widely used by most restaurants and food processing industries to reduce oil and grease to an acceptable level before discharge to public sewers [1]. A grease trap is a plumbing fitting that traps food waste before it enters the sanitary sewer system. This waste consists of the fats, oils, and greases usually found in kitchen wastewater. Grease traps are normally located under the sink, where most fat, oil and grease is introduced. Among the alternatives for reducing the discharge levels of fats, oil, and grease, the use of a grease trap is required to filter the wastewater released from the premises [2].

    Improving SIEM for critical SCADA water infrastructures using machine learning

    Network Control Systems (NCS) have been used in many industrial processes. They aim to reduce the burden of the human factor and to efficiently handle the complex processes and communication of those systems. Supervisory control and data acquisition (SCADA) systems are used in industrial, infrastructure and facility processes (e.g. manufacturing, fabrication, oil and water pipelines, building ventilation, etc.). Like other Internet of Things (IoT) implementations, SCADA systems are vulnerable to cyber-attacks; therefore, robust anomaly detection is a major requirement. However, building an accurate anomaly detection system is not an easy task, due to the difficulty of differentiating between cyber-attacks and internal system failures (e.g. hardware failures). In this paper, we present a model that detects anomaly events in a water system controlled by SCADA. Six machine learning techniques have been used in building and evaluating the model. The model classifies different anomaly events including hardware failures (e.g. sensor failures), sabotage and cyber-attacks (e.g. DoS and spoofing). Unlike other detection systems, our proposed work helps accelerate the mitigation process by notifying the operator with additional information when an anomaly occurs. This additional information includes the probability and confidence level of the event(s) occurring. The model is trained and tested using a real-world dataset.
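    A minimal sketch of the general approach described, assuming scikit-learn, a labelled table of sensor readings and a random forest as one of several candidate classifiers; the feature set, labels and synthetic placeholder data below are assumptions, not the paper's actual dataset or configuration:

    # Hypothetical sketch: classify anomaly events and report the predicted
    # class with its probability as a confidence indicator for the operator.
    # Features and labels are synthetic placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 4))   # e.g. flow, pressure, tank level, pump state (assumed features)
    y = rng.choice(["normal", "sensor_failure", "dos_attack", "spoofing"], size=600)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

    probs = clf.predict_proba(X_test[:1])[0]
    best = int(np.argmax(probs))
    print(f"Predicted event: {clf.classes_[best]} (confidence {probs[best]:.2f})")

    Reporting the class probabilities alongside the predicted label is one simple way to give the operator the extra confidence information the abstract mentions.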

    Preliminary design of the redundant software experiment

    The goal of the present experiment is to characterize the fault distributions of highly reliable software replicates constructed using techniques and environments similar to those used in contemporary industrial software facilities. The fault distributions and their effect on the reliability of fault-tolerant configurations of the software will be determined through extensive life testing of the replicates against carefully constructed, randomly generated test data. Each detected error will be carefully analyzed to provide insight into its nature and cause. A direct objective is to develop techniques for reducing the intensity of coincident errors, thus increasing the reliability gain that can be achieved with fault tolerance. Data on the reliability gains realized, and on the cost of the fault-tolerant configurations, can be used to design a companion experiment to determine the cost-effectiveness of the fault-tolerant strategy. Finally, the data and analysis produced by this experiment will be valuable to the software engineering community as a whole, because they will provide useful insight into the nature and cause of hard-to-find, subtle faults which escape standard software engineering validation techniques and thus persist far into the software life cycle.
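    To make the notion of coincident errors concrete, the sketch below runs several replicate functions against randomly generated inputs and counts how often two or more versions fail on the same input; the replicates, oracle and injected faults are toy placeholders, not the experiment's actual subjects.

    # Hypothetical sketch: life-test N software replicates on random inputs and
    # count individual versus coincident errors (>= 2 versions failing together).
    import random

    def reference(x: float) -> float:      # oracle: the correct result
        return x * x

    def version_a(x: float) -> float:
        return x * x

    def version_b(x: float) -> float:
        return x * x if x < 0.95 else x * x + 1e-3   # injected fault region

    def version_c(x: float) -> float:
        return x * x if x < 0.9 else x * x + 1e-3    # overlapping fault region

    replicates = [version_a, version_b, version_c]
    individual, coincident = [0] * len(replicates), 0

    random.seed(1)
    for _ in range(100_000):
        x = random.random()
        failures = [abs(v(x) - reference(x)) > 1e-6 for v in replicates]
        for i, failed in enumerate(failures):
            individual[i] += failed
        coincident += sum(failures) >= 2

    print("per-version failures:", individual, "coincident failures:", coincident)

    Because versions b and c fail on overlapping input regions, a majority vote over the three replicates would still be wrong there, which is exactly why reducing the intensity of coincident errors matters for the reliability gain of fault tolerance.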