Artificially Human: Examining the Potential of Text-Generating Technologies in Online Customer Feedback Management
Online customer feedback management plays an increasingly important role for businesses. Yet providing customers with good responses to their reviews can be challenging, especially as the number of reviews grows. This paper explores the potential of using generative AI to formulate responses to customer reviews. Using advanced NLP techniques, we generated responses to reviews in different authoring configurations. To compare the communicative effectiveness of AI-generated and human-written responses, we conducted an online experiment with 502 participants. The results show that a Large Language Model performed remarkably well in this context. By providing concrete evidence of the quality of AI-generated responses, we contribute to the growing body of knowledge in this area. Our findings may have implications for businesses seeking to improve their customer feedback management strategies, and for researchers interested in the intersection of AI and customer feedback. This opens opportunities for practical applications of NLP and for further IS research.
Construction safety and digital design: a review
As digital technologies become widely used in designing buildings and infrastructure, questions arise about
their impacts on construction safety. This review explores relationships between construction safety and
digital design practices with the aim of fostering and directing further research. It surveys state-of-the-art
research on databases, virtual reality, geographic information systems, 4D CAD, building information
modeling and sensing technologies, finding various digital tools for addressing safety issues in the
construction phase, but few tools to support design for construction safety. It also considers a literature on
safety critical, digital and design practices that raises a general concern about ‘mindlessness’ in the use of
technologies, and has implications for the emerging research agenda around construction safety and digital
design. Bringing these strands of literature together suggests new kinds of interventions, such as the
development of tools and processes for using digital models to promote mindfulness through multi-party
collaboration on safety.
Developing Ontology for Malaysian Tourism
This project describes the development of a Malaysian Tourism Ontology and the
importance of ontology in making knowledge assets intelligently accessible to people in
organizations. An ontology is a long-lived conceptual model that can be used in multiple
applications, providing good opportunities for reuse and interoperability. An ontology
defines a common vocabulary for researchers who need to share information in a domain.
It includes machine-interpretable definitions of basic concepts in the domain and the
relations among them. The ontology is developed to share a common understanding of
the structure of information among people and among software agents. It enables reuse
of domain knowledge and avoids "re-inventing the wheel": if a suitable ontology already
exists, development can be shortened by simply reusing it. Ontologies also aim to
introduce standards that allow interoperability and machine readability. Moreover, an
ontology is used to capture knowledge, create a shared understanding between humans
and computers, make knowledge machine-processable, and make meaning explicit
through definition and context. Domain knowledge needs to be analyzed in order to make
it more meaningful and to move towards a machine-readable program. This ontology was
created using Protege 3.2 Beta, an integrated software tool for developing knowledge-based
systems, comprising a knowledge base about a domain and programs with rules for
processing that knowledge and for solving problems relating to the domain.
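The core ideas above, a shared vocabulary of domain concepts with machine-interpretable relations among them, can be illustrated outside Protege as well. The sketch below is a minimal, hypothetical fragment of a tourism ontology in plain Python; the concept names (Attraction, Beach, State) and instances are illustrative assumptions, not taken from the actual Malaysian Tourism Ontology.

```python
# Minimal, hypothetical sketch of an ontology: concepts (classes), a
# subclass hierarchy, and a "locatedIn" relation between instances.
# All names here are illustrative, not from the actual ontology.

class Concept:
    """A node in the ontology's concept hierarchy."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

    def is_a(self, other):
        """True if this concept equals `other` or is a descendant of it."""
        node = self
        while node is not None:
            if node is other:
                return True
            node = node.parent
        return False

# Concept hierarchy: a Beach is a kind of Attraction.
attraction = Concept("Attraction")
beach = Concept("Beach", parent=attraction)
state = Concept("State")

# Instances with a locatedIn relation (instance -> name of a State).
instances = {
    "Batu Ferringhi": {"type": beach, "locatedIn": "Penang"},
    "Langkawi Sky Bridge": {"type": attraction, "locatedIn": "Kedah"},
}

def attractions_in(state_name):
    """Query: all instances of Attraction (or its subclasses) in a state."""
    return sorted(name for name, props in instances.items()
                  if props["type"].is_a(attraction)
                  and props["locatedIn"] == state_name)

print(attractions_in("Penang"))  # -> ['Batu Ferringhi']
```

A tool such as Protege expresses the same structure declaratively (classes, slots, and instances) and adds reasoning support, but the reuse argument in the abstract rests on exactly this kind of shared, queryable vocabulary.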
Investigation into the wafer-scale integration of fine-grain parallel processing computer systems
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. This thesis investigates the potential of wafer-scale integration (WSI) for the implementation of low-cost fine-grain parallel processing computer systems. As WSI is a relatively new subject, there was little work on which to base investigations. Indeed, most WSI architectures existed only as untried and sometimes vague proposals. Accordingly, the research strategy approached this problem by identifying a representative WSI structure and architecture on which to base investigations. An analysis of architectural proposals identified associative memory to be a general-purpose parallel processing component used in a wide range of WSI architectures. Furthermore, this analysis provided a set of WSI-level design requirements to evaluate the suitability of different architectures as research vehicles. The WSI-ASP (WASP) device, which has a large associative memory as its main component, is shown to meet these requirements and hence was chosen as the research vehicle. Consequently, this thesis addresses WSI potential through an in-depth investigation into the feasibility of implementing a large associative memory for the WASP device that meets the demanding technological constraints of WSI. Overall, the thesis concludes that WSI offers significant potential for the implementation of low-cost fine-grain parallel processing computer systems. However, due to the dual constraints of thermal management and the area required for the power distribution network, power density is a major design constraint in WSI. Indeed, it is shown that WSI power densities need to be an order of magnitude lower than VLSI power densities. The thesis demonstrates that, for associative memories at least, VLSI designs are unsuited to implementation in WSI. Rather, it is shown that WSI circuits must be closely matched to the operational environment to assure suitable power densities.
These circuits are significantly larger than their VLSI equivalents. Nonetheless, the thesis demonstrates that by concentrating on the most power-intensive circuits, it is possible to achieve acceptable power densities with only a modest increase in area overheads.
On observational variance learning for multivariate Bayesian time series and related models
This thesis is concerned with variance learning in multivariate dynamic linear
models (DLMs).
Three new models are developed in this thesis. The first one is a dynamic
regression model with no distributional assumption of the unknown
variance matrix. The second is an extension of a known model that enables
comprehensive treatment of any missing observations. For this purpose new
distributions that replace the inverse Wishart and matrix T and that allow
conjugacy are introduced. The third model is the general multivariate DLM
without any precise assumptions of the error sequences and of the unknown
variance matrix. We find analytic updates of the first two moments based
on weak assumptions that are satisfied by the usual models.
Missing observations and time varying variances are considered in detail
for every model. For the first time, deterministic and stochastic variance laws
for the general multivariate DLM are presented. Also, by introducing a new distribution that replaces the matrix-beta of a previous work, we prove results
on stochastic changes in variance that are in line with missing-observation
analysis and variance intervention.
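For reference, a common formulation of the multivariate DLM discussed above (in standard notation, which may differ in detail from the thesis's own) writes the observation and evolution equations as:

```latex
% Multivariate DLM with common unknown variance matrix \Sigma
% (standard notation; observational variance learning concerns \Sigma):
Y_t' = F_t' \Theta_t + \nu_t', \qquad \nu_t \sim N\!\left(0,\, V_t \Sigma\right),
\Theta_t = G_t \Theta_{t-1} + \Omega_t, \qquad \Omega_t \sim N\!\left(0,\, W_t,\, \Sigma\right),
```

where $Y_t$ is the observation vector, $\Theta_t$ the state matrix, $F_t$ and $G_t$ known regression and evolution matrices, and $\Sigma$ the unknown observational variance matrix whose sequential learning, including under missing observations and time variation, is the subject of the thesis.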
INQUIRIES IN INTELLIGENT INFORMATION SYSTEMS: NEW TRAJECTORIES AND PARADIGMS
Rapid digital transformation drives organizations to continually revitalize their business models so that they can excel in aggressive global competition. Intelligent Information Systems (IIS) have enabled organizations to achieve many strategic and market leverages. Despite the increasing intelligence competencies offered by IIS, they are still limited in many cognitive functions. Elevating the cognitive competencies offered by IIS would strengthen organizations' strategic positions.
With the advent of Deep Learning (DL), IoT, and Edge Computing, IISs have witnessed a leap in their intelligence competencies. DL has been applied to many business areas and many industries, such as real estate and manufacturing. Moreover, despite the complexity of DL models, much research has been dedicated to applying DL on computationally limited devices, such as IoT devices. Applying deep learning to IoT devices can turn everyday devices into intelligent interactive assistants.
IISs suffer from many challenges that affect their service quality, process quality, and information quality. These challenges affect, in turn, user acceptance in terms of satisfaction, use, and trust. Moreover, the Information Systems (IS) field has conducted very little research on IIS development and on the foreseeable contributions of new paradigms to addressing IIS challenges. Therefore, this research aims to investigate how the employment of new AI paradigms would enhance the overall quality, and consequently the user acceptance, of IIS.
This research employs different AI paradigms to develop two different IISs. The first system uses deep learning, edge computing, and IoT to develop scene-aware ridesharing monitoring; it enhances the efficiency, privacy, and responsiveness of current ridesharing monitoring solutions. The second system aims to enhance the real-estate search process by formulating search as a multi-criteria decision problem. The system also allows users to filter properties based on their degree of damage, where a deep learning network locates damage in each real-estate image. The system enhances real-estate website service quality by improving flexibility, relevancy, and efficiency.
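As a rough illustration of the multi-criteria formulation described above, property ranking might be sketched as a weighted sum of normalized criteria after a damage filter. The criteria names, weights, damage threshold, and listings below are assumptions for illustration, and the deep-learning damage detector is stubbed out as a precomputed score per listing; this is not the dissertation's actual design.

```python
# Hypothetical sketch: rank real-estate listings by a weighted sum of
# normalized criteria, after filtering out properties whose (precomputed)
# damage score exceeds a user-chosen threshold.

listings = [
    {"id": "A", "price": 250_000, "area_m2": 120, "damage": 0.10},
    {"id": "B", "price": 180_000, "area_m2": 90,  "damage": 0.45},
    {"id": "C", "price": 300_000, "area_m2": 150, "damage": 0.05},
]

# Assumed user preferences: lower price is better, larger area is better.
weights = {"price": 0.6, "area_m2": 0.4}

def rank(listings, weights, max_damage=0.2):
    ok = [l for l in listings if l["damage"] <= max_damage]
    lo_p = min(l["price"] for l in ok); hi_p = max(l["price"] for l in ok)
    lo_a = min(l["area_m2"] for l in ok); hi_a = max(l["area_m2"] for l in ok)
    def score(l):
        # Normalize each criterion to [0, 1]; invert price (cheaper = better).
        p = 1 - (l["price"] - lo_p) / (hi_p - lo_p) if hi_p > lo_p else 1.0
        a = (l["area_m2"] - lo_a) / (hi_a - lo_a) if hi_a > lo_a else 1.0
        return weights["price"] * p + weights["area_m2"] * a
    return sorted(ok, key=score, reverse=True)

# Listing B is filtered out by its damage score before ranking.
print([l["id"] for l in rank(listings, weights)])  # -> ['A', 'C']
```

The weighted-sum model is only one of many multi-criteria decision methods; the point of the sketch is how a learned damage score can act as a hard filter ahead of preference-based ranking.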
The research contributes to Information Systems research by developing two Design Science artifacts. Both artifacts add to the IS knowledge base by integrating different components, measurements, and techniques coherently and logically to effectively address important issues in IIS. The research also adds to the IS environment by addressing important business requirements that current methodologies and paradigms do not fulfill. Finally, the research highlights that most IIS overlook important design guidelines due to the lack of relevant evaluation metrics for different business problems.
Deep Learning Models For Biomedical Data Analysis
The field of biomedical data analysis is a vibrant area of research dedicated to extracting valuable insights from a wide range of biomedical data sources, including biomedical images and genomics data. The emergence of deep learning, an artificial intelligence approach, presents significant prospects for enhancing biomedical data analysis and knowledge discovery. This dissertation focused on exploring innovative deep-learning methods for biomedical image processing and gene data analysis.
During the COVID-19 pandemic, biomedical imaging data, including CT scans and chest x-rays, played a pivotal role in identifying COVID-19 cases by categorizing patient chest x-ray outcomes as COVID-19-positive or negative. While supervised deep learning methods have effectively recognized COVID-19 patterns in chest x-ray datasets, the availability of annotated training data remains limited. To address this challenge, the thesis introduced a semi-supervised deep learning model named ssResNet, built upon the Residual Neural Network (ResNet) architecture. The model combines supervised and unsupervised paths, incorporating a weighted supervised loss function to manage data imbalance. Strategies to diminish prediction uncertainty in deep learning models for critical applications like medical image processing are also explored. This is achieved through an ensemble deep learning model integrating bagging and model calibration techniques. This ensemble model not only boosts biomedical image segmentation accuracy but also reduces prediction uncertainty, as validated on a comprehensive chest x-ray image segmentation dataset.
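The loss structure described above, a class-weighted supervised term to counter imbalance plus an unsupervised term on unlabeled examples, can be sketched in pure Python. The consistency term, the class weights, and the mixing factor `lam` below are illustrative assumptions; ssResNet's exact formulation is in the dissertation.

```python
import math

# Hypothetical sketch of a semi-supervised loss: class-weighted supervised
# cross-entropy plus an unsupervised consistency term on unlabeled data.

def weighted_cross_entropy(probs, labels, class_weights):
    """Mean of -w_y * log p(y) over labeled examples."""
    total = 0.0
    for p, y in zip(probs, labels):
        total += -class_weights[y] * math.log(p[y])
    return total / len(labels)

def consistency_loss(probs_a, probs_b):
    """Mean squared difference between predictions on two augmented views."""
    total = 0.0
    for pa, pb in zip(probs_a, probs_b):
        total += sum((x - y) ** 2 for x, y in zip(pa, pb)) / len(pa)
    return total / len(probs_a)

def semi_supervised_loss(sup_probs, labels, class_weights,
                         unsup_a, unsup_b, lam=0.5):
    """Supervised path (labeled) plus lam-weighted unsupervised path."""
    return (weighted_cross_entropy(sup_probs, labels, class_weights)
            + lam * consistency_loss(unsup_a, unsup_b))

# Toy example: the rare positive class (index 1) gets a higher weight.
sup = [[0.8, 0.2], [0.3, 0.7]]
labels = [0, 1]
w = [1.0, 3.0]  # assumed up-weighting of the minority class
loss = semi_supervised_loss(sup, labels, w,
                            unsup_a=[[0.6, 0.4]], unsup_b=[[0.5, 0.5]])
print(round(loss, 4))  # -> 0.6516
```

Up-weighting the minority class makes mistakes on rare positives cost more, which is the standard lever against the data imbalance mentioned in the abstract.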
Furthermore, the thesis introduced an ensemble model integrating the Proformer and ensemble learning methodologies. This model constructs multiple independent Proformers for predicting gene expression; their predictions are combined through weighted averaging to generate final predictions. Experimental outcomes underscore the efficacy of this ensemble model in enhancing prediction performance across various metrics.
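The weighted-averaging step of such an ensemble is simple to state concretely. In the sketch below the per-model weights are an illustrative assumption (e.g., proportional to validation performance); the Proformer models themselves are represented only by their output vectors.

```python
# Hypothetical sketch of the ensemble combination step: several
# independently trained predictors each emit a vector of gene-expression
# estimates, and the final prediction is their weighted average.

def weighted_average(predictions, weights):
    """Combine per-model prediction vectors by weighted averaging."""
    assert len(predictions) == len(weights)
    n = len(predictions[0])
    total_w = sum(weights)
    return [sum(w * preds[i] for w, preds in zip(weights, predictions)) / total_w
            for i in range(n)]

# Three models predicting expression levels for two genes.
model_preds = [[1.0, 4.0], [2.0, 6.0], [3.0, 8.0]]
weights = [0.5, 0.3, 0.2]  # assumed validation-derived weights

print([round(x, 3) for x in weighted_average(model_preds, weights)])  # -> [1.7, 5.4]
```

Uniform weights reduce this to plain model averaging; unequal weights let better-validated members dominate the final estimate.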
In conclusion, this dissertation advances biomedical data analysis by harnessing the potential of deep learning techniques. It devises innovative approaches for processing biomedical images and gene data. By leveraging deep learning's capabilities, this work paves the way for further progress in biomedical data analytics and its applications within clinical contexts.
Index Terms- biomedical data analysis, COVID-19, deep learning, ensemble learning, gene data analytics, medical image segmentation, prediction uncertainty, Proformer, Residual Neural Network (ResNet), semi-supervised learning
Naval Ship Maintenance: An Analysis of the Dutch Shipbuilding Industry Using the Knowledge Value Added, Systems Dynamics, and Integrated Risk Management Methodologies
Sponsored Report (for Acquisition Research Program). Initiatives to reduce the cost of ship maintenance have not yet realized the normal cost-reduction learning curve improvements. One explanation is the lack of recommended technologies. Damen, a Dutch shipbuilding and service firm, has incorporated similar technologies and is developing others to improve its operations. The research team collected data on Dutch ship maintenance operations and used them to build three types of computer simulation models of ship maintenance and technology adoption. The results were analyzed and compared with previously developed modeling results of U.S. Navy ship maintenance and technology adoption. Adopting 3D PDF alone improves ROI significantly more than adopting a logistics package alone, and adding both technologies improves ROI more than adding either technology alone. Adoption of the technologies would provide cost benefits far in excess of not using the technologies, and there were marginal benefits in sequentially implementing the technologies over immediately implementing them. There are a number of issues in comparing the results with previous research, but the potential benefits of using the technologies are very high in both cases. Implications for acquisition practice include the need for careful analysis and selection from among a variety of available information technologies, and the recommendation for a phased development and implementation approach to manage uncertainty.
Bayes Linear Variance Learning for Mixed Linear Temporal Models
Modelling of complex corroding industrial systems is critical to effective inspection and maintenance for assurance of system integrity. Wall thickness and corrosion rate are modelled for multiple dependent corroding components, given observations of minimum wall thickness per component. At each inspection, partial observations of the system are considered. A Bayes Linear approach is adopted, simplifying parameter estimation and avoiding often unrealistic distributional assumptions. Key system variances are modelled, making exchangeability assumptions to facilitate analysis for sparse inspection time-series. A utility-based criterion is used to assess the quality of inspection designs and aid decision making. The model is applied to inspection data from pipework networks on a full-scale offshore platform.
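The Bayes linear adjustment underlying the approach above works with first- and second-order moments only, rather than full distributions. In the standard notation (not specific to this paper), the adjusted expectation and variance of a quantity $X$ given observed data $D$ are:

```latex
% Bayes linear adjusted expectation and variance of X given data D:
E_D(X) = E(X) + \mathrm{Cov}(X, D)\,\mathrm{Var}(D)^{-1}\,\bigl(D - E(D)\bigr),
\mathrm{Var}_D(X) = \mathrm{Var}(X) - \mathrm{Cov}(X, D)\,\mathrm{Var}(D)^{-1}\,\mathrm{Cov}(D, X).
```

Only means, variances, and covariances need to be specified, which is what makes the approach attractive when full distributional assumptions for corrosion variables would be unrealistic.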