
    Utilization of sensors and SMS technology to remotely maintain the level of dissolved oxygen, salinity and temperature of fishponds

    Due to the occurrence of fish kills in various fish-producing areas of the country, millions of pesos and opportunities for the Filipino people have gone to waste. Bataan Peninsula State University (BPSU) collaborated with the Central Luzon Association of Small-scale Aquaculture to devise strategies to address this problem and prevent further losses. More often than not, a fish kill can be attributed to a low level of dissolved oxygen (DO) in the water; a decrease or increase in salinity; a sudden increase in temperature, which usually occurs after heavy rainfall, flooding or high tide; or high levels of ammonia due to decomposing organic matter and high temperature during summer. For these reasons, BPSU researchers tested the use of radio frequencies and installed sensors in different areas of the fishpond at various depths to remotely monitor the levels of DO, salinity and temperature of the water. Once these reach critical levels, the installed system, which comes with a specific program, sends an alarm through radio frequencies via Short Messaging Service (SMS) technology to the cellular/mobile phone of the caretaker or fishpond operator. Upon receiving the alarm, caretakers were able to adjust the levels of dissolved oxygen, salinity and temperature of the water by remotely switching on the air compressor or the electric water pump using their cellular/mobile phone, thus preventing losses due to fish kills.
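    The threshold-and-alarm logic described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the critical values and all function names are assumptions, and a print statement stands in for the SMS gateway.

```python
# Illustrative thresholds -- the paper does not publish its critical values.
DO_MIN = 5.0                   # mg/L, dissolved-oxygen floor
SALINITY_RANGE = (10.0, 25.0)  # ppt, acceptable salinity band
TEMP_MAX = 32.0                # degrees C, temperature ceiling

def check_readings(do_mg_l, salinity_ppt, temp_c):
    """Return alarm messages for any reading past its critical level."""
    alarms = []
    if do_mg_l < DO_MIN:
        alarms.append(f"LOW DO: {do_mg_l} mg/L")
    lo, hi = SALINITY_RANGE
    if not lo <= salinity_ppt <= hi:
        alarms.append(f"SALINITY OUT OF RANGE: {salinity_ppt} ppt")
    if temp_c > TEMP_MAX:
        alarms.append(f"HIGH TEMP: {temp_c} C")
    return alarms

# In the deployed system each alarm would go to the caretaker's phone
# via an SMS gateway; printing stands in for that step here.
for msg in check_readings(3.8, 28.0, 33.5):
    print(f"SMS to caretaker: {msg}")
```

    On receiving such a message, the caretaker replies (or sends a command) to switch on the aerator or water pump remotely, closing the loop the abstract describes.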

    Process orientation and information management : the case of achieving success in the pharmaceutical industry

    In today’s business environment, organisations have recognised the importance of information. Consequently, the use of IT is regarded as a primary means of driving business improvement. This has led a number of organisations to focus on different means of structuring, and process orientation is considered the most suitable option. This paper explores issues related to managing information and their implications for key operational concerns. These issues are exemplified by drawing examples from the pharmaceutical industry.

    Practical Implications of Real Time Business Intelligence

    The primary purpose of business intelligence is to improve the quality of decisions while decreasing the time it takes to make them. Because focus is required on internal as well as external factors, it is critical to decrease data latency, improve report performance and decrease system resource consumption. This article discusses the successful implementation of a BI reporting project directly against an OLTP planning solver. The planning solver imports data concerning supply, demand, capacity, bills of materials, inventory and the like. It then uses linear programming to determine the correct product mix to produce at various factories worldwide. The article discusses the challenges faced and a working model in which real-time BI was achieved by providing data to a separate BI server in an innovative way, resulting in decreased latency, reduced resource consumption and improved performance. We demonstrated an alternative approach to hosting data for the BI application separately by loading the BI and solver databases at the same time, resulting in faster access to information.
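    The dual-load idea, writing each incoming record to the solver store and the BI store in the same step so the BI server is current without a separate ETL pass, can be sketched as below. Plain dictionaries stand in for the two databases; every name here is an assumption for illustration, not the article's code.

```python
# Two stores standing in for the OLTP planning-solver database and the
# separate BI reporting database.
solver_db = {}
bi_db = {}

def load_record(key, record):
    """Load one planning record into both stores in the same step."""
    solver_db[key] = record
    # The BI copy is written immediately rather than by a later ETL job,
    # which is what removes the data latency the article targets.
    bi_db[key] = dict(record)

load_record("factory_A/widget", {"supply": 120, "demand": 95})
# Reports querying bi_db see the record as soon as the solver does.
```

    In a real deployment the two writes would need to be transactional (or at least reconciled) so the reporting copy cannot drift from the solver's view.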

    Supporting Database Designers in Entity-Relationship Modeling: An Ontology-Based Approach

    Database design has long been recognized as a difficult problem, requiring a great deal of skill on the part of the designer. Research has been carried out that provides methodologies and rules for creating good designs. There have even been attempts to automate the design process. However, before these can be truly successful, methodologies and tools are needed that can incorporate and use domain knowledge. In this research, a methodology for supporting database design is proposed that makes use of domain-specific knowledge about an application, stored in the form of ontologies. The ontologies provide information that is useful in both the creation of new designs and the verification of existing ones. They also capture the constraints of an application domain. A methodology for assisting database design that takes advantage of the ontologies has been implemented in a prototype system. Initial testing of the prototype illustrates that the incorporation and use of ontologies are effective in creating database designs.
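    One way an ontology can verify an existing design, as the abstract describes, is to compare the draft ER model against the entities, attributes and relationships the domain requires. The sketch below assumes a simple dictionary encoding of both; all names and the encoding itself are illustrative, not the prototype's actual representation.

```python
# Domain ontology: required attributes and relationships per entity
# (a toy medical domain, chosen only for illustration).
ontology = {
    "Patient": {"attributes": {"patient_id", "name"},
                "relationships": {"treated_by"}},
    "Doctor":  {"attributes": {"doctor_id", "name"},
                "relationships": {"treats"}},
}

def verify_design(design):
    """Return a list of constructs the draft ER design omits."""
    issues = []
    for entity, spec in ontology.items():
        if entity not in design:
            issues.append(f"missing entity: {entity}")
            continue
        for attr in sorted(spec["attributes"] - design[entity].get("attributes", set())):
            issues.append(f"{entity}: missing attribute {attr}")
        for rel in sorted(spec["relationships"] - design[entity].get("relationships", set())):
            issues.append(f"{entity}: missing relationship {rel}")
    return issues

draft = {"Patient": {"attributes": {"patient_id"}, "relationships": set()}}
print(verify_design(draft))
```

    The same comparison run in reverse (constructs in the design but absent from the ontology) would flag candidates for designer review rather than outright errors, since a design may legitimately extend the domain model.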

    Where Information Systems Research Meets Artificial Intelligence Practice: Towards the Development of an AI Capability Framework

    Information systems (IS) research has always been one of the leading applied research areas in the investigation of technology-related phenomena. Meanwhile, over the past 10 years, artificial intelligence (AI) has transformed every aspect of society more than any other technological innovation. Thus, this is the right time for IS research to foster more high-quality, high-impact research on AI, starting by organizing the accumulated body of knowledge on AI in IS research. We propose an AI capability framework that provides pertinent and relevant guidance for conducting IS research on AI. Since AI is a fast-evolving phenomenon, this framework is founded on the main AI capabilities that shape today’s fast-moving AI ecosystem. Thus, it is crucial that such a framework engage both AI research and practice in a continuous and evolving dialogue.

    Learner satisfaction and learning performance in online courses on bioterrorism and weapons of mass destruction

    This study examined the relationships between measures of (a) learner satisfaction with online courses on weapons of mass destruction (WMD) and bioterrorism intended to address the educational needs of responder Communities of Practice (CoP) and (b) degrees of accomplishment by the learner in those online courses. Given that course design characteristics were similar between courses while content differed, it was important to examine learner satisfaction with the courses' common aspects in relation to learning outcomes, identify predictors of effectiveness, and relate learner satisfaction with course characteristics to learner achievement for potential design improvements in the future. Specifically, the investigator set out to explore multiple measures of learner satisfaction (Content, Accuracy, Navigation, Look, Flow, Assessment, and Value) in relation to multiple measures of learner achievement (Pre-Post Gain, Follow-up Personal Benefit, Follow-up Organizational Benefit, Follow-up Subject-Matter Retention, and Follow-up Simulation Scenarios). The results from the 67 participants' data indicated that (1) navigation appeared to be a statistically significant predictor of learning achievement scores and (2) estimates of personal benefit were associated with value judgments placed on the course. Those participants who initially estimated that the courses were valuable later indicated that those courses had personal benefit to them. The learner's initial satisfaction with navigation was related to the determination of personal benefit from the course. The study contributes to further understanding web-based, process-product, and satisfaction-learning interactions by emphasizing the importance of navigation quality in web-based courseware as it relates to learning achievement and personal benefit for adult learners.
    The findings heighten designers' awareness of the courseware aspects associated with the learning effectiveness of rapidly growing web-based education on WMD and bioterrorism for responder communities.

    Mining the Qualitative from the Quantitative: Re-Evaluating Cemetery Survey for the Field of Historic Preservation

    This thesis critiques the process of surveying historic cemeteries, and the data and information such surveys can generate, through the eyes of a historic preservationist, focusing on data collected over the past 30 years at St. Louis Number Two Cemetery in New Orleans, Louisiana. Through this research, the process of cemetery survey is described from beginning to end, including what brings it about, what types of data are collected by what types of groups, and finally, a brief glimpse into the types of data analysis and measurement that can be done with this data. The conclusions of this research generate further questions on how to better collect, manage, and mine this data to make important contributions to the field of historic preservation.

    An intelligent teaching system for database modeling.

    Database (DB) modelling, like other analysis and design tasks, can only be learnt through extensive practice. Conventionally, DB modelling is taught in a classroom environment where the instructor demonstrates the task using typical examples and students practise modelling in labs or tutorials. Although one-to-one human tutoring is the most effective mode of teaching, there will never be sufficient resources to provide individualised attention to each and every student. However, Intelligent Teaching Systems (ITS) offer bright prospects for fulfilling the goal of providing individualised pedagogical sessions to all students. Studies have shown that ITSs with problem-solving environments are ideal tools for enhancing learning in domains where extensive practice is essential. This thesis describes the design, implementation and evaluation of an ITS named KERMIT, developed for the popular database modelling technique, Entity Relationship (ER) modelling. KERMIT, the Knowledge-based Entity Relationship Modelling Intelligent Tutor, is a problem-solving environment in which students can practise their ER modelling skills with the individualised assistance of the system. KERMIT presents a description of a scenario for which the student models a database using ER modelling constructs. The student can ask for guidance from the system at any stage of the problem-solving process, and KERMIT evaluates the solution and presents feedback on its errors. The system adapts to each individual student by providing individualised hint messages and selecting new problems that best suit the student. The effectiveness of KERMIT was tested in three evaluations. The first was a think-aloud study to gain first-hand experience of the student's perception of the system. The second study, conducted as a classroom experiment, yielded some positive results, considering the time limitations and the instabilities of the system.
    The third evaluation, a similar classroom experiment, clearly demonstrated the effectiveness of KERMIT as a teaching system. Students were divided into an experimental group that interacted with KERMIT and a control group that used a conventional drawing tool to practise ER modelling. Both groups' learning was monitored by pre- and post-tests, and a questionnaire recorded their perception of the system. The results of the study showed that students using KERMIT achieved a significantly higher gain on their post-test. Their responses to the questionnaire reaffirmed their positive perception of KERMIT: ratings of the usefulness of the system's feedback and of the amount learnt from the system were also significantly higher, and their free-form comments were very positive.

    Representing knowledge patterns in a conceptual database design aid : a dual-base knowledge model

    The current status of Knowledge-Based Database Design Systems (KBDDSs) is reviewed. It is shown that they do not resolve the problems of identifying the relevant objects (relations) and interpreting the identified objects from the semantically rich reality. Consequently, a theoretical architecture is developed to alleviate these problems by reusing finished conceptual data schemata. By taking account of the essence of the reality and the problem-solving behaviour of experts, a new knowledge model called the Dual-Base Knowledge Model (DBKM), which involves two synergistic knowledge structures, the concept and case bases, is constructed from theories of conceptual knowledge in the psychological realm and the notions of relation and function from set theory. The aim is to provide rational and valid grounds for the support and interplay of these two bases in order to reuse relevant old cases and facilitate the acquisition of new cases. Thus, the process model, which involves two process mechanisms, the case retrieval and knowledge accumulation mechanisms, is analysed according to the theory of the proposed DBKM. In this way, the feasibility of reusing relevant schemata, or parts of them, can be established in the DBKM architecture. The functionality of the DBKM architecture is tested with a simulated example showing how relevant cases are recalled from the knowledge pool and new knowledge is stored in the knowledge repository. The distinctions between the DBKM architecture and the frameworks of current KBDDSs and Case-Based Reasoning (CBR) systems (from the knowledge-based system view), and between the DBKM and the knowledge models in current KBDDSs and rule-based data modelling approaches (from the knowledge-modelling view), are investigated to contrast the current levels of progress in conceptual data modelling.
    This research establishes the feasibility of the DBKM architecture, although it demonstrates the need to accommodate the dynamic and functional aspects of the Universe of Discourse (UoD). The main contributions of the DBKM are (1) to provide a valid basis for complementing the environments supported by current KBDDSs and a rational basis for creating a symbiosis of humans and computers; and (2) to moderate the beliefs underlying the fact-based school and provide a hermeneutic environment, so that the confusion in current conceptualising work can be alleviated and the difficulty of the conceptualising task eased to some degree.