
    Linking design intention and users' interpretation through image schemas

    Usability is often defined as the ease of use of a product, but this definition does not capture other important characteristics of product design, such as being effective, efficient, engaging, error-free and easy to learn. Usability is not only about measuring how people use a product; more importantly, it is about exploring the relationship between how designers intended their products to be used and how users interpret these designs. Previous research has shown the feasibility of using image schemas to evaluate intuitive interactions. This paper extends that research by proposing a method which uses image schemas to evaluate usability by measuring the gap between design intention and users’ interpretations of the design. The design intention is extracted from the user manual, while the way users interpret the design features is captured using direct observation, a think-aloud protocol and a structured questionnaire. The proposed method is illustrated with a case study involving 42 participants. The results show a close correlation between usability and the distance between design intent and users’ interpretation.
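    The gap described above can be sketched as a set distance over image schemas. The following is a minimal illustration only: the schema names, feature labels and the Jaccard-style distance are invented for demonstration, not taken from the paper's method.

```python
# Hypothetical sketch: the gap between design intent and user interpretation
# measured as a set distance over image schemas, per design feature.
# All schema names and feature labels below are illustrative.

def schema_gap(intended: set, interpreted: set) -> float:
    """Distance in [0, 1]: 0 = identical schema sets, 1 = disjoint."""
    if not intended and not interpreted:
        return 0.0
    overlap = len(intended & interpreted)
    union = len(intended | interpreted)
    return 1.0 - overlap / union

# One entry per design feature:
# (schemas implied by the manual, schemas users actually reported)
features = {
    "volume_dial": ({"ROTATION", "UP-DOWN"}, {"ROTATION", "UP-DOWN"}),
    "lid_release": ({"CONTAINER", "LINK"}, {"CONTAINER"}),
}

gaps = {name: schema_gap(i, u) for name, (i, u) in features.items()}
overall_gap = sum(gaps.values()) / len(gaps)  # lower -> intent and use align
```

    A low overall gap would then be expected to coincide with high measured usability, which is the correlation the study reports.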

    Multi-faceted Assessment of Trademark Similarity

    Trademarks are intellectual property assets with potentially high reputational value. Their infringement may lead to lost revenue, lower profits and damage to brand reputation. A test normally conducted to check whether a trademark is highly likely to infringe other existing, already registered, trademarks is called a likelihood of confusion test. One of the most influential factors in this test is establishing similarity in appearance, meaning or sound. However, even though the trademark registration process suggests a multi-faceted similarity assessment, relevant research in expert systems mainly focuses on computing individual aspects of similarity between trademarks. Therefore, this paper contributes to the knowledge in this field by proposing a method which, similar to the way people perceive trademarks, blends together the three fundamental aspects of trademark similarity and produces an aggregated score based on the individual visual, semantic and phonetic assessments. In particular, semantic similarity is a new aspect, which has not been considered by other researchers in approaches aimed at providing decision support in trademark similarity assessment. Another specific scientific contribution of this paper is the innovative integration, using a fuzzy engine, of three independent assessments, which collectively provide a more balanced and human-centered view on potential infringement problems. In addition, the paper introduces the concept of degree of similarity, since the line between similar and dissimilar trademarks is not always easy to define, especially when blending three very different assessments. The work described in the paper is evaluated using a database comprising 1,400 trademarks compiled from a collection of real legal cases of trademark disputes. The evaluation involved two experiments.
    The first experiment employed information retrieval measures to test the classification accuracy of the proposed method, while the second used human collective opinion to examine correlations between the trademark scoring/rating and ranking of the proposed method and human judgment. In the first experiment, the proposed method improved the F-score, precision and accuracy of classification by 12.5%, 35% and 8.3%, respectively, against the best score computed using individual similarity. In the second experiment, the proposed method produced a perfect positive Spearman rank correlation score of 1.00 in the ranking task and a pairwise Pearson correlation score of 0.92 in the rating task. The test of significance conducted on both scores rejected the null hypotheses of the experiment and showed that both scores correlated well with collective human judgment. The combined overall assessment could add value to existing support systems and be beneficial for both trademark examiners and trademark applicants. The method could be further used in addressing recent cyberspace phenomena related to trademark infringement such as customer hijacking and cybersquatting.
    Keywords: trademark assessment, trademark infringement, trademark retrieval, degree of similarity, fuzzy aggregation, semantic similarity, phonetic similarity, visual similarity
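    The aggregation step can be illustrated with a toy stand-in. The paper uses a fuzzy engine; the ordered-weighted average below is only one possible combination rule, and the weights and scores are invented for demonstration.

```python
# Illustrative sketch only: aggregating per-aspect trademark similarity
# scores into a single degree of similarity. The real method uses a fuzzy
# engine; this OWA-style average merely shows the shape of the computation.

def degree_of_similarity(visual, semantic, phonetic, weights=(0.5, 0.3, 0.2)):
    """Aggregate three scores in [0, 1] into one degree of similarity,
    emphasising the strongest aspect (ordered-weighted average)."""
    ranked = sorted((visual, semantic, phonetic), reverse=True)
    return sum(w * s for w, s in zip(weights, ranked))

# Two marks that sound alike but look different still score moderately high,
# mirroring how a single strong aspect can drive likelihood of confusion.
score = degree_of_similarity(visual=0.2, semantic=0.4, phonetic=0.9)
```

    Because the result is a degree rather than a binary similar/dissimilar label, borderline cases can be ranked and thresholded downstream.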

    Exploring user experience with image schemas, sentiments, and semantics

    Although the concept of user experience includes two key aspects, experience of meaning (usability) and experience of emotion (affect), the empirical work that measures both the usability and affective aspects of user experience is currently limited. This is particularly important considering that affect could significantly influence a user’s perception of usability. This paper uses image schemas to quantitatively and systematically evaluate both these aspects. It proposes a method for evaluating user experience that is based on using image schemas, sentiment analysis, and computational semantics. The aim is to link the sentiments expressed by users during their interactions with a product to the specific image schemas used in the designs. The method involves semantic and sentiment analysis of the verbal responses of the users to identify (i) task-related words linked to the task for which a certain image schema has been used and (ii) affect-related words associated with the image schema employed in the interaction. The main contribution is in linking image schemas with interaction and affect. The originality of the method is twofold. First, it uses a domain-specific ontology of image schemas specifically developed for the needs of this study. Second, it employs a novel ontology-based algorithm that extracts the image schemas employed by the user to complete a specific task and identifies and links the sentiments expressed by the user with the specific image schemas used in the task. The proposed method is evaluated using a case study involving 40 participants who completed a set task with two different products. The results show that the method successfully links the users’ experiences to the specific image schemas employed to complete the task. 
    This method facilitates significant improvements in product design practices and usability studies in particular, as it allows qualitative and quantitative evaluation of designs by identifying specific image schemas and product design features that have been positively or negatively received by the users. This allows user experience to be assessed in a systematic way, which leads to a better understanding of the value associated with particular design features.
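    The linking step can be sketched with a toy example. The dictionary ontology and sentiment lexicon below are invented stand-ins for the paper's domain-specific ontology and NLP pipeline, which are far richer.

```python
# Toy sketch of linking user sentiment to image schemas: task-related words
# map to schemas via a (here, hand-made) ontology; affect-related words carry
# polarity; the utterance's net polarity is attached to the schemas it used.

SCHEMA_ONTOLOGY = {"twist": "ROTATION", "pull": "ATTRACTION", "open": "CONTAINER"}
SENTIMENT = {"easy": +1, "smooth": +1, "stiff": -1, "confusing": -1}

def link_sentiment_to_schemas(utterance: str) -> dict:
    """Return net sentiment per image schema mentioned in the utterance."""
    words = utterance.lower().split()
    schemas = {SCHEMA_ONTOLOGY[w] for w in words if w in SCHEMA_ONTOLOGY}
    polarity = sum(SENTIMENT.get(w, 0) for w in words)
    return {s: polarity for s in schemas}

result = link_sentiment_to_schemas("the cap was stiff to twist")
```

    Aggregated over all participants and tasks, such per-schema polarities would indicate which design features were positively or negatively received.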

    Eco-design case-based reasoning tool: the integration of ecological quality function deployment and case-based reasoning methods for supporting sustainable product design

    Several methods and tools have been developed to facilitate sustainable product design, but they lack critical application of the ecological design (eco-design) process and economic costing, particularly during the conceptual design phase. This research study overcomes these deficiencies by integrating eco-design approaches across all phases of the product life cycle. It proposes an eco-design case-based reasoning tool that is integrated with the recently developed ecological quality function deployment method, which supports sustainable product design. The eco-design case-based reasoning tool is an intuitive decision-support tool that complements the ecological quality function deployment method and proposes solutions related to customers’ requirements and the environmental and economic impacts of the product. The ecological quality function deployment method ensures that customers’ needs are considered within the context of product sustainability. The novelty of this article is in the development of the eco-design case-based reasoning tool, which is based on the premise that if experiences from the ecological quality function deployment process can be captured in some useful form, designers can refer to and learn from them. This approach helps industrial decision-makers propose solutions by reusing solutions from similar cases and from their past experiences. The novelty is in the way the cases are structured and new cases are generated, using life-cycle assessments, cost estimations, and information about related manufacturing processes and means of transportation. This article demonstrates the applicability of the proposed approach through an industrial case study.

    Discrete element simulation of powder layer thickness in laser additive manufacturing

    The optimisation of the laser additive manufacturing (AM) process is a challenging task when a new material is considered. Compared to the selection of other process parameters such as laser power, scanning speed and hatch spacing, the optimisation of powder layer thickness is much more time-consuming and costly because a new run is normally needed when the layer thickness value is changed. In practice, the layer thickness is fixed to a value that is slightly higher than the average particle size. This paper introduces a systematic approach to layer thickness optimisation based on a theoretical model of the interactions between the particles, the wiper and the build plate during the powder deposition. The focus is on a systematic theoretical and experimental investigation of the effect of powder layer thickness on various powder bed characteristics during single-layer and multi-layer powder deposition. The theoretical model was tested experimentally using Hastelloy X (HX) with an average particle size of 34.4 μm. The experimental results validated the simulation model, which predicted a uniform powder bed deposition when employing a 40 μm layer thickness value. Lower (30 μm) and higher (50 μm) layer thickness values resulted in large voids and short-feed defects, respectively. The subsequent optimisation of the scanning speed and hatch spacing parameters was executed using a 40 μm layer thickness. The optimum process parameters were then used to examine the microstructure and tensile performance of the as-fabricated HX. This study provides an improved understanding of the powder deposition process and offers insights into the selection of suitable powder layer thicknesses in laser AM.

    Semantic reasoning in cognitive networks for heterogeneous wireless mesh systems

    The next generation of wireless networks is expected to provide not only higher bandwidths anywhere and at any time but also ubiquitous communication using different network types. However, several important issues including routing, self-configuration, device management, and context awareness have to be considered before this vision becomes reality. This paper proposes a novel cognitive network framework for heterogeneous wireless mesh systems to abstract the network control system from the infrastructure by introducing a layer that separates the management of different radio access networks from the data transmission. This approach simplifies the process of managing and optimizing the networks by using extendable smart middleware that automatically manages, configures, and optimizes the network performance. The proposed cognitive network framework, called FuzzOnto, is based on a novel approach that employs ontologies and fuzzy reasoning to facilitate the dynamic addition of new network types to the heterogeneous network. The novelty is in using semantic reasoning with cross-layer parameters from heterogeneous network architectures to manage and optimize the performance of the networks. The concept is demonstrated through the use of three network architectures: 1) wireless mesh network; 2) long-term evolution (LTE) cellular network; and 3) vehicular ad hoc network (VANET). These networks utilize non-overlapping frequency bands and can operate simultaneously with no interference. The proposed heterogeneous network was evaluated using ns-3 network simulation software. The simulation results were compared with those produced by other networks that utilize multiple transmission devices. The results showed that the heterogeneous network outperformed the benchmark networks in both urban and VANET scenarios by up to 70% in network throughput, even when the LTE network utilized a high bandwidth.
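    The fuzzy-reasoning idea can be illustrated with a toy network-selection step. This is a minimal sketch under invented assumptions: the membership functions, thresholds and candidate figures below are made up and stand in for FuzzOnto's ontology-driven rules over cross-layer parameters.

```python
# Hedged sketch: score each candidate radio access network from two
# cross-layer parameters using simple fuzzy memberships, then pick the best.
# All membership shapes and numbers are invented for illustration.

def network_score(throughput_mbps: float, delay_ms: float) -> float:
    """Fuzzy AND (min) of 'throughput is high' and 'delay is low'."""
    high_throughput = min(throughput_mbps / 100.0, 1.0)
    low_delay = max(0.0, 1.0 - delay_ms / 100.0)
    return min(high_throughput, low_delay)

# (throughput in Mbps, delay in ms) per candidate network -- invented values
candidates = {"mesh": (60, 20), "lte": (40, 10), "vanet": (25, 5)}
best = max(candidates, key=lambda name: network_score(*candidates[name]))
```

    In a cognitive middleware layer, a score of this kind could be recomputed as conditions change, so traffic migrates between network types without the applications being aware of it.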

    Synthesis and characterisation of advanced ball-milled Al-Al2O3 nanocomposites for selective laser melting

    Selective laser melting (SLM) offers significant potential for the manufacture of the advanced complex-shaped aluminium matrix composites (AMCs) used in the aerospace and automotive domains. Previous studies have indicated that advanced composite powders suitable for SLM include spherical powders with homogeneous reinforcement distribution, a particle size of < 100 μm and good flowability (Carr index < 15%); however, the production of such composite powders continues to be a challenge. Due to the intensive impacts of grinding balls, the high-energy ball-milling (HEBM) process has been employed to refine Al particles and disperse the nano Al2O3 reinforcements in the Al matrix to improve their mechanical properties. Notwithstanding, the specific characteristics of ball-milled powders for SLM and the effect of milling and pause duration on the fabrication of composite powders have not previously been investigated. The aim of this study was to synthesise Al-4 vol.% Al2O3 nano-composite powders using HEBM with two different types of milling and pause combinations. The characteristics of the powders subjected to up to 20 h of milling were investigated. The short milling (10 min) and long pause (15 min) combination provided a higher yield (66%) and narrower particle size distribution range than long milling (15 min) and a short pause (5 min). The nano Al2O3 reinforcements were observed to be dispersed uniformly after 20 h of milling, and the measured Carr index of 13.2% indicated that the ball-milled powder offered good flowability. Vickers micro-hardness tests indicated that HEBM significantly improved the mechanical properties of the ball-milled powders.

    Integrated Analysis of EEG and eye tracking to measure emotional responses in a simulated healthcare setting

    Electroencephalography (EEG) and eye tracking devices are used in this study to assess the capability of such systems to measure emotional responses in a healthcare-related environment. Experiments are conducted in which positive, negative and neutral stimuli are presented to participants and data is captured from both systems simultaneously. Images from the International Affective Picture System (IAPS) are employed to trigger standardised emotion states and calibrate the experiment, whilst images from a medical drama are used to provide hospital-based stimuli. It is found that EEG and eye tracking can successfully indicate emotion features, with the EEG data providing better visualisation, whilst eye metrics are more amenable to statistical analysis. Both devices show that the emotional responses to hospital-based images differ from the responses to standardised images. Greater variation between participants in the hospital-based stimuli indicates that personal experiences of healthcare-related events can influence emotional responses to related stimuli.

    Clock drawing test digit recognition using static and dynamic features

    The clock drawing test (CDT) is a standard neurological test for detection of cognitive impairment. A computerised version of the test promises to improve the accessibility of the test in addition to obtaining more detailed data about the subject's performance. Automatic handwriting recognition is one of the first stages in the analysis of the computerised test, which produces a set of recognised digits and symbols together with their positions on the clock face. Subsequently, these are used in the test scoring. This is a challenging problem because the average CDT taker has a high likelihood of cognitive impairment, and writing is one of the first functional activities to be affected. Current handwritten digit recognition systems perform less well on this kind of data due to its unintelligibility. In this paper, a new system for numeral handwriting recognition in the CDT is proposed. The system is based on two complementary sources of data, namely static and dynamic features extracted from handwritten data. The main novelty of this paper is the new handwriting digit recognition system, which combines two classifiers—a fuzzy k-nearest neighbour for dynamic stroke-based features and a convolutional neural network for static image-based features—and can thus take advantage of both static and dynamic data. The proposed digit recognition system is tested on two sets of data: first, Pendigits online handwriting digits; and second, digits from actual CDTs. The latter data set came from 65 drawings made by healthy people and 100 drawings reproduced from the drawings by dementia patients. The test on both data sets shows that the proposed combination system can outperform each classifier individually in terms of recognition accuracy, especially when assessing the handwriting of people with dementia.
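    The combination step can be sketched in a few lines. The probability vectors below are invented stand-ins for the fuzzy k-NN and CNN outputs, and plain averaging is only one possible fusion rule, not necessarily the one used in the paper.

```python
# Minimal score-fusion sketch: combine per-digit scores from a dynamic
# (stroke-based) classifier and a static (image-based) classifier.
# Both probability vectors below are invented for illustration.

def fuse(p_dynamic: dict, p_static: dict) -> str:
    """Average the two per-digit scores and return the winning digit."""
    digits = set(p_dynamic) | set(p_static)
    combined = {d: (p_dynamic.get(d, 0.0) + p_static.get(d, 0.0)) / 2
                for d in digits}
    return max(combined, key=combined.get)

# A shaky '7' that the image classifier confuses with '1'; the stroke
# dynamics tip the balance back towards the correct digit.
p_knn = {"7": 0.70, "1": 0.30}
p_cnn = {"7": 0.45, "1": 0.55}
prediction = fuse(p_knn, p_cnn)
```

    This shows why the combination can beat either classifier alone: distorted images and distorted stroke dynamics rarely fail on the same digits.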

    Gaze trajectory prediction in the context of social robotics

    Social robotics is an emerging field of robotics that focuses on the interactions between robots and humans. It has attracted much interest due to concerns about an aging society and the need for assistive environments. Within this context, this paper focuses on gaze control and eye tracking as a means for robot control. It aims to improve the usability of human–machine interfaces based on gaze control by developing advanced algorithms for predicting the trajectory of the human gaze. The paper proposes two approaches to gaze-trajectory prediction: probabilistic and symbolic. Both approaches use machine learning. The probabilistic method mixes two state models representing gaze locations and directions. The symbolic method treats gaze-trajectory prediction in the same way that word-prediction problems are handled in web browsers. Comparative experiments prove the feasibility of both approaches and show that the probabilistic approach achieves better prediction results.
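    The symbolic, word-prediction-style idea can be sketched with a bigram model over gaze regions. The region labels and the training trajectory below are invented for illustration; the paper's actual symbolic method is not necessarily bigram-based.

```python
# Sketch of the symbolic approach: treat gaze regions like words and predict
# the next region from bigram counts, much as a browser suggests the next
# word. Trajectory data below is invented.
from collections import Counter, defaultdict

def train_bigrams(trajectory):
    """Count, for each region, how often each successor region follows it."""
    model = defaultdict(Counter)
    for cur, nxt in zip(trajectory, trajectory[1:]):
        model[cur][nxt] += 1
    return model

def predict_next(model, current):
    """Most frequently observed successor of the current gaze region."""
    followers = model.get(current)
    return followers.most_common(1)[0][0] if followers else None

trajectory = ["menu", "button", "menu", "button", "slider", "menu", "button"]
model = train_bigrams(trajectory)
next_region = predict_next(model, "menu")
```

    Predicting the next fixation in this way lets a gaze-controlled interface pre-select or highlight likely targets, reducing the effort of dwell-based selection.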