Constructing multiple unique input/output sequences using metaheuristic optimisation techniques
Multiple unique input/output sequences (UIOs) are often used to generate robust and compact test sequences in finite state machine (FSM) based testing. However, computing UIOs is NP-hard. Metaheuristic optimisation techniques (MOTs) such as genetic algorithms (GAs) and simulated annealing (SA) are effective in providing good solutions for some NP-hard problems. In this paper, the authors investigate the construction of UIOs using MOTs. They define a fitness function to guide the search for potential UIOs and use sharing techniques to encourage the MOTs to locate multiple UIOs, which correspond to local optima in the search domain. They also compare the performance of GA and SA for UIO construction. Experimental results suggest that, after applying a sharing technique, both GA and SA can find a majority of the UIOs in the models under test.
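The sharing technique mentioned above can be illustrated with a toy genetic algorithm over bitstrings. This is a generic sketch of fitness sharing (the triangular sharing kernel, the `shared_fitness` and `evolve` names, and the toy fitness are illustrative assumptions), not the authors' actual FSM encoding or fitness function:

```python
import random

def hamming(a, b):
    """Distance between two equal-length candidate sequences."""
    return sum(x != y for x, y in zip(a, b))

def shared_fitness(pop, raw, sigma=3.0):
    """Divide each raw fitness by its niche count, so crowded optima are
    penalized and several distinct local optima can survive in the population."""
    shared = []
    for i, ind in enumerate(pop):
        niche = sum(max(0.0, 1.0 - hamming(ind, other) / sigma) for other in pop)
        shared.append(raw[i] / niche)
    return shared

def evolve(fitness, length=8, pop_size=20, gens=30, seed=0):
    """Plain generational GA with tournament selection on shared fitness."""
    rng = random.Random(seed)
    pop = [tuple(rng.randint(0, 1) for _ in range(length)) for _ in range(pop_size)]
    for _ in range(gens):
        raw = [fitness(ind) for ind in pop]
        shared = shared_fitness(pop, raw)

        def pick():
            i, j = rng.randrange(pop_size), rng.randrange(pop_size)
            return pop[i] if shared[i] >= shared[j] else pop[j]

        nxt = []
        while len(nxt) < pop_size:
            a, b = pick(), pick()
            cut = rng.randrange(1, length)      # one-point crossover
            child = list(a[:cut] + b[cut:])
            if rng.random() < 0.1:              # point mutation
                k = rng.randrange(length)
                child[k] ^= 1
            nxt.append(tuple(child))
        pop = nxt
    return pop
```

With a two-peaked toy fitness such as `lambda ind: max(sum(ind), len(ind) - sum(ind))` (both all-zeros and all-ones are optimal), sharing discourages the whole population from collapsing onto a single peak, which is the same effect the paper relies on to collect multiple UIOs rather than one.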
Interoperability and Standards: The Way for Innovative Design in Networked Working Environments
Organised by: Cranfield University
In today's networked economy, strategic business partnerships and outsourcing have become the dominant paradigm: companies focus on core competencies and skills such as creative design, manufacturing, or selling. However, achieving seamless interoperability remains an ongoing challenge for these networks,
due to their distributed and heterogeneous nature. Part of the solution relies on adoption of standards for
design and product data representation, but for sectors predominantly characterized by SMEs, such as the
furniture sector, implementations need to be tailored to reduce costs. This paper recommends a set of best
practices for the fast adoption of the ISO funStep standard modules and presents a framework that enables
the usage of visualization data as a way to reduce costs in manufacturing and electronic catalogue design.
Mori Seiki – The Machine Tool Company
The history of WiMAX: a complete survey of the evolution in certification and standardization for IEEE 802.16 and WiMAX
Most researchers are familiar with the technical features of WiMAX technology, but the evolution that WiMAX went through in terms of standardization and certification is unknown to most people. Knowledge of this historical process would, however, help in understanding how WiMAX became the widespread technology it is today. Furthermore, it would give insight into the steps anyone aiming to introduce a new wireless technology on a worldwide scale must undertake. Therefore, this article presents a survey of all relevant activities that took place within three important organizations: the 802.16 Working Group of the IEEE (Institute of Electrical and Electronics Engineers) for technology development and standardization, the WiMAX Forum for product certification, and the ITU (International Telecommunication Union) for international recognition. An elaborate and comprehensive overview of all those activities is given, which reveals the importance of the willingness to innovate and to continuously incorporate new ideas in the IEEE standardization process, and the importance of the WiMAX Forum's certification-label granting process in ensuring interoperability. We also emphasize the steps that were taken in cooperation with the ITU to improve the international standing of the technology. Finally, a WiMAX trend analysis is made: we show how industry interest has fluctuated over time and quantify the evolution in WiMAX product certification and deployments. It is shown that most interest went to the 2.5 GHz and 3.5 GHz frequencies, that most deployments are in geographic regions with many developing countries, and that the highest population coverage is achieved in Asia Pacific. This elaborate description of all standardization and certification activities, from the very start up to now, will help the reader comprehend how past and future steps are taken in the development process of new WiMAX features.
Evaluating web accessibility and usability for totally blind users at Thailand Cyber University
Thesis (Ed.D.)--Boston University
Research suggests that web-based education increases opportunities for underserved populations to be integrated into educational activities (Schmetzke, 2001; Burgstahler, 2002; Opitz, Savenye, & Rowland, 2003). This may be true for students with disabilities because they have more flexibility to participate in formal education. However, Moisey (2004) found that people with disabilities had lower rates of enrollment and educational achievement than people without disabilities. These findings raise the question of whether or not web-based education helps increase students with disabilities' access to learning opportunities and improves their learning outcomes.
This study investigated the degree of difficulty blind persons had in accessing and using web-based educational resources provided by Thailand Cyber University (TCU). Based on a mixed-methods design, the data were collected in two phases. Quantitative data were collected first, in order to identify accessibility problems and conformance levels reported by automated web accessibility evaluation tools. Qualitative data were collected from interviews with blind participants in the second phase, to deepen the understanding of accessibility problems and usability issues that the automated web accessibility evaluation tools did not discover in the quantitative phase.
The findings indicate that all 13 selected web pages failed to meet the minimum requirements of WCAG 2.0, which means those pages would be inaccessible to the blind. However, the findings also indicate that blind participants rated only one of the 13 pages as inaccessible. Moreover, their difficulty ratings for "usability" were higher than their difficulty ratings for "accessibility" on the same web page. On six out of 22 tasks, the blind and sighted user groups agreed on the ratings. Nevertheless, the time it took to complete each task varied greatly between the two user groups.
The Standard Problem
Crafting, adhering to, and maintaining standards is an ongoing challenge.
This paper uses a framework based on common models to explore the standard
problem: the impossibility of creating, implementing, or maintaining definitive
common models in an open system. The problem arises from uncertainty driven by
variations in operating context, standard quality, differences in
implementation, and drift over time. Fitting work by conformance services
repairs these gaps between a standard and what is required for interoperation,
using several strategies: (a) Universal conformance (all agents access the same
standard); (b) Mediated conformance (an interoperability layer supports
heterogeneous agents); and (c) Localized conformance (autonomous adaptive
agents manage their own needs). Conformance methods include incremental design,
modular design, adaptors, and creating interactive and adaptive agents. Machine
learning should have a major role in adaptive fitting. Choosing a conformance
service depends on the stability and homogeneity of shared tasks, and whether
common models are shared ahead of time or are adjusted at task time. This
analysis thus decouples interoperability and standardization. While standards
facilitate interoperability, interoperability is achievable without
standardization.
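Of the three conformance strategies above, mediated conformance (b) is the easiest to make concrete: an interoperability layer translates each agent's native message format into a shared common model, so heterogeneous agents interoperate without any of them changing. The sketch below is illustrative only; the agent names, field names, and units are invented, not taken from the paper:

```python
# Two heterogeneous agents report temperature in different native schemas.
# A mediator (the interoperability layer) performs the "fitting work":
# it maps each native schema onto a shared common model.

COMMON_FIELD = "temp_c"  # the common model: degrees Celsius under a fixed key

def from_agent_a(msg):
    """Agent A natively reports Fahrenheit under 'temp_f'."""
    return {COMMON_FIELD: round((msg["temp_f"] - 32) * 5 / 9, 2)}

def from_agent_b(msg):
    """Agent B natively reports tenths of a degree Celsius under 'tC10'."""
    return {COMMON_FIELD: msg["tC10"] / 10}

ADAPTORS = {"A": from_agent_a, "B": from_agent_b}

def mediate(sender, msg):
    """Route a native message through the sender's adaptor into the common model."""
    return ADAPTORS[sender](msg)
```

Adding a new heterogeneous agent means registering one more adaptor in the mediation layer, which is why the paper's choice between strategies hinges on how stable and homogeneous the shared tasks are: stable tasks favour the fixed common model, volatile ones favour adaptive agents.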
Enhancing Usability and Explainability of Data Systems
The recent growth of data science has expanded its reach to an ever-growing user base of nonexperts, increasing the need for usability, understandability, and explainability in these systems. Enhancing usability makes data systems accessible to people of all skills and backgrounds, leading to the democratization of data systems. Furthermore, a proper understanding of data and data-driven systems is necessary for users to trust the function of systems that learn from data. Finally, data systems should be transparent: when a data system behaves unexpectedly or malfunctions, users deserve a proper explanation of what caused the observed incident. Unfortunately, most existing data systems offer limited usability and support for explanations: they are usable only by experts with sound technical skills, and even expert users are hindered by the lack of transparency into the systems' inner workings and functions. The aim of my thesis is to bridge the usability gap between nonexpert users and complex data systems, to aid all sorts of users, including expert ones, in data and system understanding, and to provide explanations that help reason about unexpected outcomes involving data systems. Specifically, my thesis has the following three goals: (1) enhancing the usability of data systems for nonexperts; (2) enabling data understanding that can assist users in a variety of tasks, such as achieving trust in data-driven machine learning and data cleaning; and (3) explaining the causes of unexpected outcomes involving data and data systems.
For enhancing usability, we focus on example-driven user intent discovery. We develop systems based on example-driven interactions in two different settings: querying relational databases and personalized document summarization. Towards data understanding, we develop a new data-profiling primitive that can characterize tuples for which a machine-learned model is likely to produce untrustworthy predictions. We also develop an explanation framework for the causes of such untrustworthy predictions. Additionally, this new data-profiling primitive enables interactive data cleaning. Finally, we develop two explanation frameworks tailored to debugging data system components, including the data itself. These frameworks focus on explaining the root cause of a concurrent application's intermittent failure and on exposing issues in the data that cause a data-driven system to malfunction.
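The idea of a data-profiling primitive that flags tuples likely to receive untrustworthy predictions can be approximated with a deliberately crude sketch: record the per-attribute value ranges seen during training, then flag any tuple that leaves those ranges as "unlike the training data". This is a stand-in illustration under that assumption, not the thesis's actual primitive:

```python
def fit_profile(training_tuples):
    """Record the observed [min, max] range of each attribute in the training set."""
    columns = list(zip(*training_tuples))
    return [(min(col), max(col)) for col in columns]

def likely_untrustworthy(profile, tup):
    """Flag a tuple if any attribute falls outside its training range --
    a crude proxy for 'the model never saw data like this'."""
    return any(not (lo <= v <= hi) for (lo, hi), v in zip(profile, tup))
```

The same profile supports the interactive-cleaning use mentioned above: flagged tuples are exactly the ones a user might inspect first, either to repair the data or to withhold trust in the model's prediction for them.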
SMA -- The Smyle Modeling Approach
This paper introduces the model-based software development lifecycle model SMA -- the Smyle Modeling Approach -- which is centered around Smyle, a dedicated learning procedure that supports engineers in interactively obtaining design models from requirements characterized as either desired (positive) or unwanted (negative) system behavior. Within SMA, the learning approach is complemented by so-called scenario patterns with which the engineer can specify clearly desired or unwanted behavior. This way, user interaction is reduced to the interesting scenarios, limiting the design effort considerably. In SMA, the learning phase is further complemented by an effective analysis phase that allows design flaws to be detected at an early design stage. Using learning techniques allows us to gradually develop and refine requirements, naturally supporting evolving requirements, and allows for a rather inexpensive redesign in case anomalous system behavior is detected during analysis, testing, or maintenance. This paper describes the approach and reports on first practical experiences.