The Impact of a ZigBee enabled Energy Management System on Electricity Consumption
The world currently faces an energy problem rooted in the inefficient use of energy resources. As a result, economies around the world are grappling to devise ways and means of solving this problem. This study postulates that a successful solution must reduce and manage consumption and use. Accordingly, this study explores the impact that the implementation of an Energy Management System has on energy consumption and how it contributes towards sustainable environmental practices. The study is built upon Design Science theory and is conducted within the boundaries of Energy Informatics. It will employ an experimental design, using Multiple Linear Regression to derive a model that predicts energy consumption. The study also seeks to derive optimised settings for the system through the use of Response Surface Methodology (RSM). The viability of the study will then be assessed by conducting a Cost-Benefit Analysis.
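The abstract names Multiple Linear Regression as the prediction technique but gives no model details. As a minimal sketch only, with made-up predictors (occupancy and outdoor temperature) and synthetic data, fitting such a consumption model could look like this:

```python
# Illustrative sketch, not the study's actual model: multiple linear
# regression predicting energy consumption from hypothetical predictors.
import numpy as np

rng = np.random.default_rng(0)
n = 200
occupancy = rng.uniform(0, 50, n)       # hypothetical predictor
temperature = rng.uniform(10, 35, n)    # hypothetical predictor
# Synthetic "true" relationship plus noise, for demonstration only.
consumption = 5.0 + 0.8 * occupancy + 1.2 * temperature + rng.normal(0, 1, n)

# Ordinary least squares via the normal equations (design matrix with
# an intercept column), the standard fitting method for MLR.
X = np.column_stack([np.ones(n), occupancy, temperature])
coef, *_ = np.linalg.lstsq(X, consumption, rcond=None)

print(coef)  # estimated intercept and slopes, close to [5.0, 0.8, 1.2]
```

The fitted coefficients recover the synthetic relationship; a real model would use measured building data and report goodness-of-fit statistics.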
ARTIFACT EVALUATION IN INFORMATION SYSTEMS DESIGN-SCIENCE RESEARCH - A HOLISTIC VIEW
Design science in Information Systems (IS) research pertains to the creation of artifacts to solve real-life problems. Research on IS artifact evaluation remains at an early stage. In the design-science research literature, evaluation criteria are presented in a fragmented or incomplete manner. This paper addresses the following research questions: which criteria are proposed in the literature to evaluate IS artifacts? Which ones are actually used in published research? How can we structure these criteria? Finally, which evaluation methods emerge as generic means to assess IS artifacts? The artifact resulting from our research comprises three main components: a hierarchy of evaluation criteria for IS artifacts organized according to the dimensions of a system (goal, environment, structure, activity, and evolution), a model providing a high-level abstraction of evaluation methods, and finally, a set of generic evaluation methods which are instantiations of this model. These methods result from an inductive study of twenty-six recently published papers.
BENCHMARKING CLASSIFIERS - HOW WELL DOES A GOWA-VARIANT OF THE SIMILARITY CLASSIFIER DO IN COMPARISON WITH SELECTED CLASSIFIERS?
Digital data is ubiquitous in nearly all modern businesses. Organizations have more data available, in various formats, than ever before. Machine learning algorithms and predictive analytics utilize the knowledge contained in that data in order to support business-related decision-making. This study explores predictive analytics by comparing different classification methods, the main interest being the Generalized Ordered Weighted Average (GOWA) variant of the similarity classifier.
The target of this research is to find out what the GOWA-variant of the similarity classifier is and how well it performs compared to other selected classifiers. This study also investigates whether the GOWA-variant of the similarity classifier is a sufficient method for business-related decision-making. Four different classical classifiers were selected as reference classifiers on the basis of their common usage in machine learning research and their availability in the Statistics and Machine Learning Toolbox in MATLAB.
Three different data sets from the UCI Machine Learning Repository were used for benchmarking the classifiers. The benchmarking process uses a fitness function instead of pure classification accuracy to determine the performance of the classifiers. The fitness function combines several measurement criteria into one common value. With one data set, the GOWA-variant of the similarity classifier performed the best. One of the data sets contains credit card client data. It was more complex than the other two data sets and contains clearly business-related data. The GOWA-variant also performed well with this data set. Therefore it can be claimed that the GOWA-variant of the similarity classifier is a viable option for solving business-related problems.
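The abstract does not specify how the fitness function combines the measurement criteria, so the following is only one plausible sketch: a weighted sum of normalised per-criterion scores, with hypothetical weights and metric names.

```python
# Hedged sketch of a fitness function that combines several evaluation
# criteria into a single value; the weights and criteria are assumptions,
# not taken from the thesis.
def fitness(metrics, weights):
    """Combine per-criterion scores in [0, 1] into one fitness value
    using a normalised weighted sum."""
    assert len(metrics) == len(weights)
    total = sum(weights)
    return sum(m * w for m, w in zip(metrics, weights)) / total

# Example: accuracy, sensitivity and specificity for one classifier run.
score = fitness([0.92, 0.88, 0.95], weights=[0.5, 0.25, 0.25])
print(round(score, 4))  # -> 0.9175
```

A single combined value like this lets classifiers be ranked on multiple criteria at once, which is the advantage the abstract claims over raw accuracy.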
Applying blended conceptual spaces to variable choice and aesthetics in data visualisation
Computational creativity is an active area of research within the artificial intelligence domain that investigates what aspects of computing can be considered as an analogue to the human creative process. Computers can be programmed to emulate the kinds of things that the human mind can do. Artificial creativity is worthy of study for two reasons. Firstly, it can help in understanding human creativity, and secondly it can help with the design of computer programs that appear to be creative. Although the implementation of creativity in computer algorithms is an active field, much of the research fails to specify which of the known theories of creativity it is aligning with.
The combination of computational creativity with computer generated visualisations has the potential to produce visualisations that are context sensitive with respect to the data and could solve some of the current automation problems that computers experience. In addition, theories of creativity could theoretically be used to compute unusual data combinations or to introduce graphical elements that draw attention to patterns in the data. More could also be learned about the creativity involved as humans go about the task of generating a visualisation.
The purpose of this dissertation was to develop a computer program that can automate the generation of a visualisation, for a suitably chosen visualisation type over a small domain of knowledge, using a subset of the computational creativity criteria, in order to explore the effects of introducing conceptual blending techniques. The problem is that existing computer programs that generate visualisations lack the creativity, intuition, background information, and visual perception that enable a human to decide
what aspects of the visualisation will expose patterns that are useful to the consumer of the visualisation. The main research question that guided this dissertation was: "How can criteria derived from theories of creativity be used in the generation of visualisations?". In order to answer this question, an analysis was done to determine which creativity theories and artificial intelligence techniques could potentially be used to implement the theories in the context of computer generated visualisations. Measurable attributes and criteria that are sufficient for an algorithm that claims to model creativity were explored. The parts of the visualisation pipeline were identified, and the aspects of visualisation generation that humans are better at than computers were explored. Themes that emerged in both the computational creativity and the visualisation literature were highlighted.
Finally, a prototype was built that started to investigate the use of computational creativity methods in the "variable choice" and "aesthetics" stages of the data visualisation pipeline.
School of Computing, M. Sc. (Computing)
The Evolution of Big Data and Its Business Applications
The arrival of the Big Data era has become a major topic of discussion in many sectors because of the promises of big data utilization and its impact on decision-making. It is an interdisciplinary issue that has captured the attention of scholars and created new research opportunities in information science, business, health care, and many other fields. The problem is that Big Data is not well defined, so there is confusion in IT about which jobs and skill sets are required in the big data area. The problem stems from the newness of the Big Data profession. Because many aspects of the area are unknown, organizations do not yet possess the IT, human, and business resources necessary to cope with and benefit from big data. These organizations include health care, enterprise, logistics, universities, weather forecasting, oil companies, e-business, recruiting agencies, etc., and are challenged to deal with high-volume, high-variety, and high-velocity big data to facilitate better decision-making. This research proposes a new way to look at Big Data and Big Data analysis. It contributes to the theoretical and methodological foundations of Big Data and addresses an increasing demand for more powerful Big Data analysis from the academic research perspective. Essay 1 provides a strategic overview of the untapped potential of social media Big Data in the business world and describes its challenges and opportunities for aspiring business organizations. It also aims to offer fresh recommendations on how companies can exploit social media data analysis to make better business decisions: decisions that embrace the relevant social qualities of their customers and the related ecosystem. The goal of this research is to provide insights for businesses to make better, more informed decisions based on effective social media data analysis.
Essay 2 provides a better understanding of the influence of social media during the 2016 American presidential election and develops a model to examine individuals' attitudes toward participating in social media (SM) discussions that might influence their decision in choosing between the two presidential candidates, Donald Trump and Hillary Clinton. The goal of this research is to provide a theoretical foundation that supports the influence of social media on individuals' decisions. Essay 3 defines the major job descriptions for careers in the new Big Data profession. It describes the Big Data professional profile as reflected by the demand side, and explains the differences and commonalities between company-posted job requirements for data analytics, business analytics, and data scientist jobs. The main aim of this work is to clarify the skill requirements for Big Data professionals for the joint benefit of the job market, where they will be employed, and of academia, where such professionals will be prepared in data science programs, to aid in the entire process of preparing and recruiting for Big Data positions.
Functional prototype of a performance management system for hospitality
Master's dissertation, Direcção e Gestão Hoteleira, Escola Superior de Gestão, Hotelaria e Turismo, Universidade do Algarve, 2014.
This project involved the creation and real-life evaluation, in four hotels, of a functional
prototype of a performance management system specific to the hospitality industry,
with the objective of testing the viability of developing a commercial service.
This system can be defined as a set of dashboards that enable the systematic
monitoring of business information (goals, metrics and indicators - constructed from
multiple data sources) that facilitate management decision-making.
To assert its viability, three evaluation criteria were established: (1) that there
were no technical obstacles that could limit the system's scope or performance; (2)
that users would identify the benefits of using the system; and (3) that quantifiable
improvements could be achieved.
The system was designed based on distributed computing and agent architecture,
and its development followed the Design Science Research methodology, which also
demonstrated its suitability for management research projects.
With the prototype's development and its use by four hotels, it was possible to
confirm that no technical aspects would condition the commercial viability of the
system. The same can be said about the users' perception of the system's benefits,
as they identified a long list of benefits and situations where the system could be
used with better results than traditional decision-making routines.
Although it was possible to verify that three of the participating hotels improved their
operational indicators compared to the previous year, due to calendar
constraints and the absence of benchmarking data sets it was not possible to
produce evidence that those performance improvements could be attributed to the
use of the system.
Globally, these results, complemented by the request of all the participating hotels to
continue using the prototype and their willingness to pay in the future for a
commercial service providing the same information as the prototype, confirmed
its viability and commercial relevance.
Digital forensic model for computer networks
The Internet has become important since information is now stored in digital form and is transported both within and between organisations in large amounts through computer networks. Nevertheless, there are individuals or groups who utilise the Internet to harm businesses because they can remain relatively anonymous. To prosecute such criminals, forensic practitioners have to follow a well-defined procedure to convict responsible cyber-criminals in a court of law. Log files provide significant digital evidence in computer networks when tracing cyber-criminals. Network log mining is an evolution of typical digital forensics, utilising evidence from network devices such as firewalls, switches and routers. Network log mining is a process supported by prevailing South African laws such as the Computer Evidence Act, 57 of 1983; the Electronic Communications and Transactions (ECT) Act, 25 of 2002; and the Electronic Communications Act, 36 of 2005. In addition, international laws and regulations supporting network log mining include the Sarbanes-Oxley Act; the Foreign Corrupt Practices Act (FCPA) and the Bribery Act of the USA. A digital forensic model for computer networks focusing on network log mining has been developed based on the literature reviewed and critical thought. The development of the model followed the Design Science methodology. However, this research project argues that there are some important aspects which are not fully addressed by prevailing South African legislation supporting digital forensic investigations. With that in mind, this research project proposes some Forensic Investigation Precautions. These precautions were developed as part of the proposed model. The Diffusion of Innovations (DOI) Theory is the framework underpinning the development of the model and how it can be assimilated into the community.
The model was sent to IT experts for validation, which provided the qualitative element and the primary data of this research project. From these experts, this study found that the proposed model is unique and comprehensive and adds new knowledge to the field of Information Technology. A paper was also written from this research project.
Managing Information Confidentiality Using the Chinese Wall Model to Reduce Fraud in Government Tenders
Instances of fraudulent acts are often headline news in the popular press in South Africa. Increasingly, these press reports point to the government tender process as being the main enabler used by the perpetrators committing the fraud. The cause of the tender fraud problem is a confidentiality breach of information. This is accomplished, in part, by compromising the tender information contained in the government information system, resulting in the biased award of a tender. Typically, the information in the tender process should be used to make decisions about a tender's specifications, solicitation, evaluation and adjudication. The sharing of said information with unauthorised persons can be used to manipulate and corrupt the process, in turn corrupting the tender process by awarding a tender to an unworthy recipient. This research studies the generic steps in the tender process to understand how information is used to corrupt it. It proposes that conflict of interest, together with a lack of information confidentiality in the information system, paves the way for possible tender fraud. Thereafter, a system of internal controls is examined within the South African government as well as in foreign countries to investigate measures taken to reduce the breach of confidential information in the tender process. By referring to the Common Criteria Security Model, various critical security areas within the tender process are identified. This measure is assisted by the ISO/IEC 27002 (2005) standard, which has guiding principles for the management of confidential information. Thereafter, an information security policy, the Chinese Wall Model, is discussed as a means of reducing instances where conflict of interest may occur. Finally, an adapted Chinese Wall Model, which includes elements of the tender process, is presented as a way of reducing fraud in the government tender process.
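The core rule of the Chinese Wall Model (Brewer-Nash) is that a subject may not access information belonging to a competitor of an organisation whose information they have already seen. As a minimal sketch of that rule applied to tender documents, with entirely hypothetical conflict-of-interest classes and company names:

```python
# Illustrative sketch of the Chinese Wall access rule; the conflict
# classes and company names below are hypothetical, not from the thesis.
CONFLICT_CLASSES = {
    "construction": {"BuilderA", "BuilderB"},   # competing bidders
    "it-services": {"VendorX", "VendorY"},
}

def may_access(history, company):
    """Return True if reading `company`'s tender documents keeps the
    user's wall intact, given the set of companies already read."""
    for members in CONFLICT_CLASSES.values():
        if company in members:
            # Deny access if any *other* company in the same
            # conflict-of-interest class has already been read.
            if any(c in members and c != company for c in history):
                return False
    return True

history = {"BuilderA"}
print(may_access(history, "BuilderA"))  # True: same company is allowed
print(may_access(history, "BuilderB"))  # False: direct competitor
print(may_access(history, "VendorX"))   # True: different conflict class
```

The wall thus builds up dynamically from each user's access history, which is why the model suits tender evaluation, where an official who has seen one bidder's submission must be barred from a competitor's.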
Finally, the research objective of this study is presented in the form of Critical Success Factors that aid in reducing the breach of confidential information in the tender process. As a consequence, tender fraud is reduced. These success factors have a direct and serious impact on the effectiveness of the Chinese Wall Model in securing the confidentiality of tender information. The proposed Critical Success Factors include: the Sanitisation Policy Document, an Electronic Document Management System, the Tender Evaluation Ethics Document, the Audit Trail Log and the Chinese Wall Model Prosecution Register.
Blended learning in large class introductory programming courses: an empirical study in the context of an Ethiopian university
This study was motivated by a desire to address the challenges of introductory programming courses. Ethiopian universities teach such courses in large classes (80+ students), and students complain about the difficulty of the courses and the variation in instructors' teaching. The study set out to explore optimum course and learning environment design approaches. The research question raised was: how can a blended learning approach be used to improve large class teaching of programming? In an action design research approach, the study was initiated by redesigning two consecutive courses and a supportive blended learning environment on the basis of existing learning theories and educational design frameworks. Two cycles of action research were conducted with the dual goal of refinement and evaluation of the intervention. The action research was conducted during the 2012/13 academic year, with 240 students at the beginning.
A predominantly quantitative first cycle of action research produced a mixed outcome. The students' marks from assessment activities were fairly close to results from two other international universities. A pre- and post-implementation survey of students' approach to learning showed a slight class-level change towards the deep learning approach. Conversely, some students were found to be at risk (not progressing well), and certain technologies, particularly program visualisation tools, were found to be underutilised.
The second action research cycle aimed to explain the result from the first round. A grounded action research evaluation of data from focus group discussions, interviews and participantsâ memos identified plausible factors for meaningful programming learning in a large class. These factors were use of collaborative and pair programming; alignment of learning and assignment activities; integrated use of e-learning; and use of large class strategies like student mentors and team teaching.
A critical realist interpretation of the result of the action research suggested that students can learn programming in large classes, 200+ in this study, with a course and learning environment design that keeps them engaged in learning and assessment activities. The study concludes that improved learning of programming is possible with the use of students as mentors and changed role-dynamics of instructors, which presupposes the adaptation of suitable pedagogical approaches and use of technologies.
School of Computing, D. Litt. et Phil. (Information Systems)
A framework for decision-making in ICT4D interventions to enable sustained benefit in resource-constrained environments
In the search to reduce the various divides between the developed and the
developing world, Information and Communication Technology (ICT) is seen as an
enabler in resource-constrained environments. However, the impact of ICT for
Development (ICT4D) implementations is contested, and the ability to facilitate
sustained change remains elusive.
Sustainability emerged as a key lesson from the failure of early ICT4D projects, and
has served as a focal point in facilitating ICT4D success. However, interpretations of
the concepts of sustainability and sustainable development are multiple and
disconnected from practice, and are rarely translated into a useful construct for
guiding project-level actions.
The focus of international development is gradually shifting from donated aid towards
capability and choice, empowerment, and per-poor initiatives. However, the reality
remains that multiple organisations with varying levels of power, resources, and
influence determine the outcomes and the sustainability of benefits from a
development intervention.
This research investigates mechanisms to sustain benefit by exploring the interface
between various role players through the lens of decision-making. It builds on the
view that the value created by the virtual "organisation" of stakeholders in an ICT4D
implementation results from the sum of its decisions, and develops a framework for
decision-making with a view on sustaining benefits.
The work follows a Design Science Research methodology, comprising an iterative
process for the development, testing, and improvement of the framework based on
three literature reviews, two case studies, and an expert review.
The research answers the primary research question, namely:
What are the elements of a framework that support strategic decision-making for the design
and implementation of ICT4D interventions in resource-constrained environments, in support
of sustained benefit?
The knowledge contribution is primarily at the concept and methodological level. In
addition to framework development, the decision problem in ICT4D is defined, and the concept of sustained benefit is proposed as a means of operationalising
sustainability.
This research illustrates the role of decision concepts in structuring the complexity of
ICT4D problems. It introduces an alternative perspective into the debate on
sustainability in ICT4D, and provides a basis for the future development of theory.
Information Systems, D. Litt. et Phil. (Information Systems)