19 research outputs found
A Deterministic Self-Organizing Map Approach and its Application on Satellite Data based Cloud Type Classification
A self-organizing map (SOM) is a type of competitive artificial neural network that projects the high-dimensional input space of the training samples onto a low-dimensional space while preserving topological relations. This makes SOMs well suited to organizing and visualizing complex data sets, and they have been used pervasively across numerous disciplines and applications. Notwithstanding this wide use, the self-organizing map suffers from inherent randomness: it produces dissimilar SOM patterns even when trained on identical samples with the same parameters every time, which raises usability concerns for domain practitioners and discourages potential users from exploring SOM-based applications more broadly. Motivated by this practical concern, we propose a deterministic approach as a supplement to the standard self-organizing map. In accordance with the theoretical design, experimental results on satellite cloud data demonstrate the effective and efficient organization as well as the simplification capabilities of the proposed approach.
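One common way to remove the randomness the abstract describes (the abstract does not spell out the paper's actual method, so this is only an illustrative sketch) is to replace random codebook initialization with a fixed, data-derived one, for example spreading the SOM grid over the plane spanned by the two leading principal components of the data:

```python
import numpy as np

def pca_grid_init(X, rows, cols):
    """Deterministically initialize a SOM codebook by spreading the
    grid over the plane of the two leading principal components.
    Illustrative sketch only; the paper's approach may differ."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Leading right singular vectors = principal directions.
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    pc = Vt[:2]                        # (2, n_features)
    span = s[:2] / np.sqrt(len(X))     # std. dev. along each component
    # Regular grid of coefficients in [-1, 1] x [-1, 1].
    u = np.linspace(-1, 1, rows)
    v = np.linspace(-1, 1, cols)
    U, V = np.meshgrid(u, v, indexing="ij")
    coeffs = np.stack([U.ravel() * span[0], V.ravel() * span[1]], axis=1)
    return mean + coeffs @ pc          # (rows * cols, n_features)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
codebook = pca_grid_init(X, 4, 4)
print(codebook.shape)  # (16, 5)
```

Because the initialization depends only on the data, repeated runs on the same samples start from, and with a fixed presentation order converge to, the same map.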
Detecting Limb Movements by Reading Minds
Using EEG, electrical activity on the human scalp can be used to control a computer. In recent years, machine learning techniques have made such systems more accurate and able to adapt to the individual. In this thesis, the "Sub-Band Common Spatial Patterns" method for EEG classification is implemented. It was then extended in several ways with the aim of increasing accuracy. It was found that accuracy can be increased by:
1) Regularizing the estimate of the covariance matrix for the Common Spatial Patterns algorithm.
2) Adding the evolution of the signal power over time as input to the classifier.
3) Using L1-regularized logistic regression as the classifier, and for eliminating classifier features.
4) Applying boosting to the final classifier.
Together, these changes increased accuracy from 86.4% to 91.7% on the publicly available BCI Competition III (IVa) dataset.
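The first improvement, regularizing the covariance estimate used by Common Spatial Patterns, is often done by shrinking the averaged trial covariance toward a scaled identity. The sketch below assumes that form; the shrinkage strength `lam` and the per-trial trace normalization are illustrative choices, not values from the thesis:

```python
import numpy as np

def shrinkage_cov(trials, lam=0.1):
    """Average per-trial covariance, shrunk toward a scaled identity.
    `trials` has shape (n_trials, n_channels, n_samples); `lam` is an
    assumed shrinkage strength, not a value from the thesis."""
    covs = []
    for x in trials:
        c = x @ x.T / x.shape[1]
        covs.append(c / np.trace(c))   # normalize per-trial power
    sigma = np.mean(covs, axis=0)
    d = sigma.shape[0]
    # Convex combination with a scaled identity keeps the matrix
    # well-conditioned when trials are short or channels are many.
    return (1 - lam) * sigma + lam * (np.trace(sigma) / d) * np.eye(d)

rng = np.random.default_rng(1)
trials = rng.normal(size=(20, 8, 250))   # 20 trials, 8 channels
sigma = shrinkage_cov(trials)
print(sigma.shape)  # (8, 8)
```

The resulting matrix stays symmetric positive definite, which the CSP generalized-eigenvalue step requires.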
SE4SEE: A grid-enabled search engine for South-East Europe
Search Engine for South-East Europe (SE4SEE) is an application project aiming to develop a grid-enabled search engine that specifically targets the countries of South-East Europe. It is one of the two regional applications currently being implemented in the SEE-GRID FP6 project. This paper describes the design details of SE4SEE and provides an architectural overview of the application.
Transformer-Based Models Are Not Yet Perfect At Learning to Emulate Structural Recursion
This paper investigates the ability of transformer-based models to learn
structural recursion from examples. Recursion is a universal concept in both
natural and formal languages. Structural recursion is central to the
programming language and formal mathematics tasks where symbolic tools
currently excel beyond neural models, such as inferring semantic relations
between datatypes and emulating program behavior. We introduce a general
framework that nicely connects the abstract concepts of structural recursion in
the programming language domain to concrete sequence modeling problems and
learned models' behavior. The framework includes a representation that captures
the general syntax of structural recursion, coupled with two different
frameworks for understanding its semantics: one that is more natural from a
programming-languages perspective and one that helps bridge that perspective
with a mechanistic understanding of the underlying transformer architecture.
With our framework as a powerful conceptual tool, we identify different issues
under various set-ups. Models trained to emulate recursive computations do not
fully capture the recursion but instead fit shortcut algorithms, and thus
cannot solve certain edge cases that are under-represented in the training
distribution. In addition, state-of-the-art large language models (LLMs)
struggle to mine recursive rules from in-context demonstrations. Meanwhile,
these LLMs fail in interesting ways when emulating the reduction (step-wise
computation) of a recursive function.
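For concreteness, structural recursion means a function defined by cases over a recursive datatype, recursing only on strictly smaller subterms. A minimal example of the kind of computation pattern under study (the datatype and function here are illustrative, not the paper's benchmarks):

```python
from dataclasses import dataclass
from typing import Optional

# A recursive datatype: a binary tree is either empty (None) or a
# node with two subtrees.
@dataclass
class Tree:
    left: Optional["Tree"] = None
    right: Optional["Tree"] = None

def size(t: Optional[Tree]) -> int:
    # Base case: the empty tree.
    if t is None:
        return 0
    # Recursive case: structural recursion on both strict subterms.
    return 1 + size(t.left) + size(t.right)

t = Tree(Tree(), Tree(None, Tree()))
print(size(t))  # 4
```

Emulating the step-wise reduction of `size(t)` means unfolding one recursive call at a time, e.g. `size(t)` to `1 + size(t.left) + size(t.right)` and so on down to the base cases; this is the trace-level behavior the paper probes.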
A Survey on Fairness-aware Recommender Systems
As information filtering services, recommender systems have greatly enriched
our daily life by providing personalized suggestions and supporting people in
decision-making, which makes them vital and indispensable to human society in
the information era. However, as people become more dependent on them, recent
studies show that recommender systems can have unintended impacts on society
and individuals because of their unfairness (e.g., gender discrimination in
job recommendations). To develop trustworthy services, it is crucial to devise
fairness-aware recommender systems that can mitigate these bias issues. In
this survey, we summarise existing methodologies
and practices of fairness in recommender systems. Firstly, we present concepts
of fairness in different recommendation scenarios, comprehensively categorize
current advances, and introduce typical methods to promote fairness in
different stages of recommender systems. Next, after introducing datasets and
evaluation metrics applied to assess the fairness of recommender systems, we
will delve into the significant influence that fairness-aware recommender
systems exert on real-world industrial applications. Subsequently, we highlight
the connection between fairness and other principles of trustworthy recommender
systems, aiming to consider trustworthiness principles holistically while
advocating for fairness. Finally, we summarize this review, spotlighting
promising opportunities in comprehending fairness concepts and frameworks, the
balance between accuracy and fairness, and the ties with trustworthiness, with
the ultimate goal of fostering the development of fairness-aware recommender
systems.
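One widely used family of fairness metrics in this literature compares outcome rates across user groups. A minimal sketch of such a metric, a demographic-parity gap over binary recommendation decisions (the function name and data layout here are illustrative, not taken from the survey):

```python
import numpy as np

def demographic_parity_gap(recommended, group):
    """Absolute difference in recommendation rate between two groups.
    `recommended` marks whether each user received the item (0/1);
    `group` holds binary group labels. Illustrative sketch only."""
    recommended = np.asarray(recommended, dtype=float)
    group = np.asarray(group)
    rate0 = recommended[group == 0].mean()
    rate1 = recommended[group == 1].mean()
    return abs(rate0 - rate1)

rec = [1, 1, 0, 1, 0, 0, 1, 0]
grp = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(rec, grp))  # 0.5
```

A gap of 0 means both groups are recommended the item at the same rate; fairness-aware methods typically trade some accuracy to push such gaps toward 0.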
Proceedings of the Scientific-Practical Conference "Research and Development - 2016"
talent management; sensor arrays; automatic speech recognition; dry separation technology; oil production; oil waste; laser technology
Analysis of textural image features for content based retrieval
Digital archaeology and virtual reality with archaeological artefacts have been hot research topics in recent years [55,56]. This thesis is a preparation study to build the background knowledge required for research projects that aim to computerize the reconstruction of archaeological data such as pots, marbles, or mosaic pieces from shape and textural features. Digitization of the cultural heritage may shorten the reconstruction time, which currently takes tens of years [61]; it can improve reconstruction robustness by incorporating machine vision algorithms from the literature and the experience of remote experts working together on a no-cost virtual object. Digitization can also ease the exhibition of results to the general public through multi-user media applications such as internet-based virtual museums or virtual tours. Finally, it makes it possible to archive artefacts with their original texture and shape for many years, far from the physical risks they currently face. In the literature [1,2,3,5,8,11,14,15,16], texture analysis techniques have been thoroughly studied and implemented for defect analysis by image processing and machine vision scientists. In recent years, these algorithms have begun to be used for similarity analysis in content-based image retrieval [1,4,10]. For retrieval systems, the pressing problems appear to be building efficient and fast systems; as a result, robust image features have not yet received enough attention. This document is the first performance review of texture algorithms developed for retrieval and defect analysis together. The results and experiences gained during the thesis study will be used to support studies aiming to solve the 2D puzzle problem using textural-continuity methods on archaeological artefacts (see Appendix A for more detail).
The first chapter is devoted to how medicine and psychology explain the similarity and continuity analysis that our biological model, human vision, accomplishes daily. The second chapter summarizes content-based image retrieval systems, their performance criteria, similarity distance metrics, and the systems available. For the thesis work, a rich texture database of over 1000 images in total has been built. For ease of use, a GUI and a platform for content-based retrieval have been designed, and the first version of a content-based search engine has been coded; it takes the source of internet pages, parses the meta tags of images, and downloads the files in a loop controlled by our texture algorithms. The preprocessing and pattern analysis algorithms required for robust textural feature processing have been implemented. In the last section, the most important textural feature extraction methods are studied in detail, with performance results of the code written in Matlab and run on the different databases developed.
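A classic instance of the textural features such a performance review covers is the gray-level co-occurrence matrix (GLCM) with Haralick-style statistics. The sketch below is a minimal NumPy version for one pixel offset (the quantization level and offset are illustrative defaults, not the thesis's settings, and the original code was in Matlab):

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Contrast and energy from a gray-level co-occurrence matrix for
    one offset (dx, dy). Minimal sketch of the texture features
    discussed in the thesis, not its exact implementation."""
    # Quantize the image to `levels` gray levels.
    q = np.floor(img / (img.max() + 1e-9) * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    h, w = q.shape
    glcm = np.zeros((levels, levels))
    # Count co-occurring gray-level pairs at the given offset.
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    glcm /= glcm.sum()                         # joint probabilities
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()     # local gray-level variation
    energy = (glcm ** 2).sum()                 # texture uniformity
    return contrast, energy

rng = np.random.default_rng(2)
img = rng.random((32, 32))
c, e = glcm_features(img)
print(round(c, 3), round(e, 3))
```

Smooth, repetitive textures yield low contrast and high energy; noisy textures the reverse, which is what makes such statistics usable as similarity features in content-based retrieval.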