
    On colour-blind distinguishing colour pallets in regular graphs


    Color-blind index in graphs of very low degree

    Let $c:E(G)\to [k]$ be an edge-coloring of a graph $G$, not necessarily proper. For each vertex $v$, let $\bar{c}(v)=(a_1,\ldots,a_k)$, where $a_i$ is the number of edges incident to $v$ with color $i$. Reorder the entries of $\bar{c}(v)$ for every $v$ in $G$ in nonincreasing order to obtain $c^*(v)$, the color-blind partition of $v$. When $c^*$ induces a proper vertex coloring, that is, $c^*(u)\neq c^*(v)$ for every edge $uv$ in $G$, we say that $c$ is color-blind distinguishing. The minimum $k$ for which there exists a color-blind distinguishing edge coloring $c:E(G)\to [k]$ is the color-blind index of $G$, denoted $\operatorname{dal}(G)$. We demonstrate that determining the color-blind index is more subtle than previously thought. In particular, determining if $\operatorname{dal}(G) \leq 2$ is NP-complete. We also connect the color-blind index of a regular bipartite graph to 2-colorable regular hypergraphs and characterize when $\operatorname{dal}(G)$ is finite for a class of 3-regular graphs.
    Comment: 10 pages, 3 figures, and a 4 page appendix
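    As a hedged illustration of the definition only (not code from the paper; the function names and the three-vertex path below are made up), checking that a given edge coloring is color-blind distinguishing can be written directly from the abstract. Note that this verifies a single coloring, whereas the paper shows that deciding whether any such coloring with $k \leq 2$ exists is NP-complete.

```python
from collections import Counter

def color_blind_partition(incident_edges, coloring, k):
    """c*(v): counts of each color on the edges at v, sorted in nonincreasing order."""
    counts = Counter(coloring[e] for e in incident_edges)
    return tuple(sorted((counts[i] for i in range(1, k + 1)), reverse=True))

def is_color_blind_distinguishing(edges, coloring, k):
    """Return True if c*(u) != c*(v) for every edge uv of the graph."""
    incident = {}
    for e in edges:
        for v in e:
            incident.setdefault(v, []).append(e)
    star = {v: color_blind_partition(incident[v], coloring, k) for v in incident}
    return all(star[u] != star[v] for (u, v) in edges)

# Path a-b-c with edge ab colored 1 and edge bc colored 2:
# c*(a) = (1, 0), c*(b) = (1, 1), c*(c) = (1, 0), and adjacent vertices differ.
edges = [("a", "b"), ("b", "c")]
coloring = {("a", "b"): 1, ("b", "c"): 2}
print(is_color_blind_distinguishing(edges, coloring, k=2))  # True
```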

    Instructional eLearning technologies for the vision impaired

    The principal sensory modality employed in learning is vision, which increases the difficulty for vision-impaired students in accessing not only existing educational media but also the new and mostly visiocentric learning materials being offered through on-line delivery mechanisms. Using the Cisco Certified Network Associate (CCNA) and IT Essentials courses as a reference, a study has been made of tools that can access such on-line systems and transcribe the materials into a form suitable for vision-impaired learning. Modalities employed included haptic, tactile, audio and descriptive text. How such a multi-modal approach can achieve equivalent success for the vision impaired is demonstrated. However, the study also shows the limits of the current understanding of human perception, especially with respect to comprehending two- and three-dimensional objects and spaces when there is no recourse to vision.

    An obstacle detection system for automated guided vehicles

    The objective of this master's thesis is to investigate the use of computer vision and object detection as an integral part of an automated guided vehicle's navigation system, operating within the facilities of the target company. The rationale for conducting this research and developing an application for this purpose arises from the inability of the automated guided vehicles to detect smaller or partially obstructed objects, and from their lack of differentiation between stationary and moving objects. These limitations pose a safety hazard and negatively impact the overall performance of the system. The anticipated outcome of this thesis is a proof-of-concept computer vision application that would enhance the automated guided vehicle's obstacle detection capacity. The primary aim is to offer practical insights to the target company regarding the implementation of computer vision by developing and training a YOLOv7 object detection model as a proposed resolution to the research problem.
    A theoretical review of the technologies and tools required for training an object detection model is followed by a plan for the application that defines the requirements for the model. Training and development are conducted using open-source and standard software tools and libraries. Python is the primary programming language employed throughout the development process, and the object detector itself is a YOLOv7 (You Only Look Once) model. The model is trained to identify and classify a predetermined set of objects or obstacles that impede the present automated guided vehicle system. Model optimization follows a trial-and-error methodology, with simulated testing of the best-performing model. The data required for training the object detection model is obtained by attaching a camera to an automated guided vehicle and recording its movements within the target company's facilities. The gathered data is annotated using Label Studio, and all necessary data preparation and processing are carried out in plain Python, as sketched below.
    The result of this master's thesis was a proof of concept for a computer vision application that would improve and benefit the target company's day-to-day operations in its production and storage facilities in Vaasa. The trained model was shown to perform up to expectations in terms of both speed and accuracy. The project not only demonstrated the application's benefits but also laid the groundwork for the business to further utilize machine learning and computer vision in other areas, including improving the operational competency of the target company's employees. The results also showed that finding an optimal object detection model for a specific dataset within a reasonable timeframe requires both appropriate tools and sufficient preliminary research into model configuration. The trained model could be used as a foundation for similar projects, thereby reducing the time and costs involved in preliminary research efforts.
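    Since the abstract mentions annotation in Label Studio and data preparation in plain Python, the following is a hedged sketch of the kind of preparation step involved: converting percentage-based bounding boxes into the normalized "class x_center y_center width height" text format expected by YOLO-family trainers. The export structure, field names, file names and class names here are assumptions for illustration, not the thesis's actual pipeline.

```python
import json
from pathlib import Path

def boxes_to_yolo(export_json, label_dir, class_ids):
    """Write one YOLO-style label file per image: each line is
    'class x_center y_center width height', all values in [0, 1].

    The export layout assumed here (an image name plus a list of boxes whose
    x/y/width/height are given as percentages) is illustrative only.
    """
    Path(label_dir).mkdir(parents=True, exist_ok=True)
    for task in json.load(open(export_json)):
        lines = []
        for box in task["boxes"]:                      # assumed field names
            x, y = box["x"] / 100.0, box["y"] / 100.0  # top-left corner, as fractions
            w, h = box["width"] / 100.0, box["height"] / 100.0
            x_c, y_c = x + w / 2, y + h / 2            # YOLO expects the box center
            cls = class_ids[box["label"]]
            lines.append(f"{cls} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}")
        out = Path(label_dir) / (Path(task["image"]).stem + ".txt")
        out.write_text("\n".join(lines))

# Hypothetical usage with two made-up obstacle classes:
# boxes_to_yolo("annotations.json", "labels/", {"pallet": 0, "forklift": 1})
```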

    Efficient Decision Support Systems

    This series is directed to the diverse managerial professionals who are leading the transformation of individual domains by using expert information and domain knowledge to drive decision support systems (DSSs). The series offers a broad range of subjects addressed in specific areas such as health care, business management, banking, agriculture, environmental improvement, natural resource and spatial management, aviation administration, and hybrid applications of information technology aimed at interdisciplinary issues. The book series is composed of three volumes: Volume 1 covers the general concepts and methodology of DSSs; Volume 2 covers applications of DSSs in the biomedical domain; Volume 3 covers hybrid applications of DSSs in multidisciplinary domains. The series is shaped around decision support strategies in the new infrastructure, assisting readers in making full use of creative technology to manipulate input data and to transform information into useful decisions for decision makers.

    Visualizing genetic transmission patterns in plant pedigrees.

    Ensuring food security in a world with an increasing population and growing demands on natural resources is becoming ever more pertinent. Plant breeders are using an increasingly diverse range of data types, such as phenotypic and genotypic data, to identify plant lines with desirable characteristics suitable to be taken forward in plant breeding programmes. These characteristics include a number of key morphological and physiological traits, such as disease resistance and yield, that need to be maintained and improved upon if a commercial plant variety is to be successful.
    The ability to predict and understand the inheritance of alleles that facilitate resistance to pathogens, or any other commercially important characteristic, is crucially important to experimental plant genetics and commercial plant breeding programmes. However, derivation of the inheritance of such traits by traditional molecular techniques is expensive and time consuming, even with recent developments in high-throughput technologies. This is especially true in industrial settings where, due to time constraints relating to growing seasons, many thousands of plant lines may need to be screened quickly, efficiently and economically every year. Thus, computational tools that provide the ability to integrate and visualize diverse data types with an associated plant pedigree structure will enable breeders to make more informed, and subsequently better, decisions on the plant lines that are used in crossings. This will help meet both the demand for increased yield and production and the need to adapt to climate change.
    Traditional family-tree style layouts are commonly used and simple to understand, but they are unsuitable for the data densities that are now commonplace in large breeding programmes. The size and complexity of plant pedigrees mean that there is a cognitive limitation in conceptualising large plant pedigree structures, and therefore novel techniques and tools are required by geneticists and plant breeders to improve pedigree comprehension.
    Taking a user-centred, iterative approach to design, a pedigree visualization system was developed for exploring a large and unique set of experimental barley (H. vulgare) data. This work progressed from the development of a static pedigree visualization to interactive prototypes and finally the Helium pedigree visualization software. At each stage of the development process, user feedback, in the form of informal and more structured user evaluation from domain experts, guided the development lifecycle, with users' concerns addressed and additional functionality added.
    Plant pedigrees are very different from those of humans and farmed animals, and consequently the development of the pedigree visualizations described in this work focussed on implementing currently accepted techniques used in pedigree visualization and adapting them to meet the specific demands of plant pedigrees. Helium includes techniques to address problems with user understanding identified through user testing; examples include difficulties where crosses between varieties are situated in different regions of the pedigree layout. There are good biological reasons why this happens, but testing has shown that it leads to problems with users' comprehension of the relatedness of individuals in the pedigree. The inclusion of visual cues and the use of localised layouts have allowed complications like these to be reduced. Other examples include the use of node sizing to show the frequency of usage of specific plant lines; such nodes have been shown to act as positional reference points for users, bringing a secondary level of structure to the pedigree layout. The use of these novel techniques has allowed the classification of three main types of plant line, which have been coined principal, flanking and terminal plant lines. This technique has also shown visually the most frequently used plant lines, which, while previously known from text records, had never been quantified.
    Helium's main contributions are two-fold. Firstly, it has taken visualization techniques used in traditional pedigrees and applied them to the domain of plant pedigrees, addressing problems with handling large experimental plant pedigrees. The scale, complexity and diversity of data, and the number of plant lines, that Helium can handle exceed those of other currently available plant pedigree visualization tools. These techniques (including layout and phenotypic and genotypic encoding) have been extended to deal with the differences between human/mammalian pedigrees and plant pedigrees, taking account of problems such as the complexity of crosses and routine inbreeding. Secondly, the effectiveness of the visualizations has been verified by performing user testing on a group of 28 domain experts. The improvements have advanced user understanding of pedigrees and allowed a much greater density and scale of data to be visualized. User testing has shown that the implementation and extension of visualization techniques have improved user comprehension of plant pedigrees when users were asked to perform real-life tasks with barley datasets. Results have shown an increase in correct responses between the prototype interface and Helium, and a SUS analysis has shown a high acceptance rate for Helium.
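    The node-sizing cue described above, where a plant line's node is scaled by how often it appears as a parent, is simple to reproduce outside Helium. The following is a minimal sketch of that one idea using networkx and matplotlib with a made-up toy pedigree; it illustrates the visual technique only and is neither Helium's implementation nor the barley dataset.

```python
import networkx as nx
import matplotlib.pyplot as plt

# Toy pedigree: edges point from a parent line to the cross derived from it
# (all names are invented for illustration).
crosses = [
    ("LineA", "Cross1"), ("LineB", "Cross1"),
    ("LineA", "Cross2"), ("LineC", "Cross2"),
    ("Cross1", "Cross3"), ("LineA", "Cross3"),
]
G = nx.DiGraph(crosses)

# Frequency of use as a parent drives the node size, so heavily used lines
# (here LineA) stand out as positional reference points in the layout.
usage = dict(G.out_degree())
sizes = [300 + 600 * usage.get(n, 0) for n in G.nodes()]

pos = nx.spring_layout(G, seed=1)
nx.draw(G, pos, with_labels=True, node_size=sizes, node_color="lightsteelblue")
plt.show()
```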