
    Internal Funding Newsletter, Academic Year 2019-2020

    The University of Nebraska at Omaha is committed to fostering the academic and scholarly pursuits of faculty, staff, and students. While the 2019-2020 Academic Year brought many changes and challenges, UNO has continued to invest in a multitude of funding programs to promote research and creative activity. This year's programs provided over $530,000 for student and faculty projects that reflect the broad range of scholarly interests of the UNO community: the use of 3D-printed models to improve the understanding of complex orthopedic trauma, the role of religion and spirituality in deterring sex offending, the romanticization of the highland warrior in British literature and culture, inducing social grooming to boost marmoset well-being, and pandemic-related fake news identification based on unstructured data mining.

    Finding Data Compatibility Bugs with JSON Subschema Checking

    JSON is a data format used pervasively in web APIs, cloud computing, NoSQL databases, and increasingly also machine learning. To ensure that JSON data is compatible with an application, one can define a JSON schema and use a validator to check data against the schema. However, because validation can happen only once concrete data occurs during an execution, it may detect data compatibility bugs too late or not at all. Examples include evolving the schema for a web API, which may unexpectedly break client applications, or accidentally running a machine learning pipeline on incorrect data. This paper presents a novel way of detecting a class of data compatibility bugs via JSON subschema checking. Subschema checks find bugs before concrete JSON data is available and across all possible data specified by a schema. For example, one can check whether evolving a schema would break API clients or whether two components of a machine learning pipeline have incompatible expectations about data. Deciding whether one JSON schema is a subschema of another is non-trivial because the JSON Schema specification language is rich. Our key insight to address this challenge is to first reduce the richness of schemas by canonicalizing and simplifying them, and then to reason about the subschema question on simpler schema fragments using type-specific checkers. We apply our subschema checker to thousands of real-world schemas from different domains. In all experiments, the approach is correct whenever it gives an answer (100% precision and correctness), which is the case for most schema pairs (93.5% recall), clearly outperforming the state-of-the-art tool. Moreover, the approach reveals 43 previously unknown bugs in popular software, most of which have already been fixed, showing that JSON subschema checking helps find data compatibility bugs early.
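
    To make the idea concrete, below is a minimal Python sketch of the canonicalize-then-check strategy the abstract describes, restricted to a tiny fragment of JSON Schema (a "type" keyword plus numeric bounds). The names canonicalize and is_subschema are illustrative, not the paper's actual API, and the paper's tool covers far more of the specification.

    def canonicalize(schema):
        """Normalize 'type' to a set and pull out numeric bounds."""
        # Absent 'type' means any type; this fragment's universe is small.
        types = schema.get("type", ["number", "string", "boolean", "null"])
        if isinstance(types, str):
            types = [types]
        return {
            "types": set(types),
            "minimum": schema.get("minimum", float("-inf")),
            "maximum": schema.get("maximum", float("inf")),
        }

    def is_subschema(s1, s2):
        """Is every instance accepted by s1 also accepted by s2?"""
        c1, c2 = canonicalize(s1), canonicalize(s2)
        if not c1["types"] <= c2["types"]:   # type-level containment
            return False
        if "number" in c1["types"]:          # type-specific check for numbers
            if c1["minimum"] < c2["minimum"] or c1["maximum"] > c2["maximum"]:
                return False
        return True

    # Evolving an API schema from minimum 0 to minimum 1 narrows the set of
    # accepted instances, so existing data may break against the new schema:
    old = {"type": "number", "minimum": 0}
    new = {"type": "number", "minimum": 1}
    print(is_subschema(old, new))  # False: 0 validates against old but not new
    print(is_subschema(new, old))  # True

    A check of this kind runs on the schemas alone, before any concrete JSON data exists, which is what makes it useful for catching schema-evolution bugs early.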

    Automatic Sign Language Recognition from Image Data

    This thesis addresses several issues in automatic sign language recognition from image data: the creation of a vision-based recognition framework, the creation of sign language corpora, feature extraction from the hands and face using novel hand tracking with face-occlusion handling, data-driven creation of sub-units, and a "search by example" tool for searching sign language corpora using hand images as a query. The proposed recognition framework, based on a statistical approach using hidden Markov models (HMMs), consists of video analysis, sign modeling, and decoding modules, and can recognize both isolated signs and continuous utterances from video data. All experiments and evaluations were performed on two in-house corpora, UWB-06-SLR-A and UWB-07-SLR-P, the first containing 25 signs and the second 378. Low-level image features serve as the baseline feature descriptors. Better performance is achieved with higher-level features that employ hand tracking and resolve occlusions of the hands and face. As a side effect, the occlusion-handling method interpolates the face area in frames during occlusion, making it possible to use face feature descriptors that would otherwise fail in such cases, for instance features extracted by an active appearance models (AAM) tracker. Several state-of-the-art appearance-based feature descriptors were compared for the tracked hands, including local binary patterns (LBP), histogram of oriented gradients (HOG), high-level linguistic features, and the newly proposed hand shape radial distance function (hRDF), which improves the description of concave regions of the hand shape. The concept of sub-units, which models signs with HMMs built from linguistic units smaller than whole signs and captures the inner structure of signs, was investigated through a proposed iterative algorithm that constructs such units automatically from existing data, the first step required for data-driven sub-unit construction; the results show the concept is suitable for sign modeling and recognition. Beyond the recognition experiments, a "search by example" tool was created and evaluated. This tool is a search engine for sign language videos and can be incorporated into an online sign language dictionary, where searching sign language data is currently difficult or impossible. It employs several methods examined in the recognition task and searches video corpora based on a user-given query consisting of one or more hand images, captured for example via webcam. The result is a ranked list of videos containing the same or similar hand configurations.
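
    As a rough illustration of the recognition pipeline, the sketch below trains one Gaussian HMM per sign on pre-extracted feature sequences (e.g., tracked hand positions with HOG or hRDF descriptors) and classifies an isolated sign by maximum log-likelihood. It uses the hmmlearn library as a stand-in for the thesis's own HMM implementation; the data layout and parameters are assumptions, not the thesis's actual configuration.

    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    def train_sign_models(train_data, n_states=5):
        """train_data: {sign_label: [sequence, ...]}, each sequence an
        array of shape (n_frames, n_features) of per-frame descriptors."""
        models = {}
        for sign, seqs in train_data.items():
            X = np.concatenate(seqs)          # stack frames of all examples
            lengths = [len(s) for s in seqs]  # per-sequence frame counts
            model = GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=20)
            model.fit(X, lengths)
            models[sign] = model
        return models

    def recognize(models, seq):
        """Return the sign whose HMM scores the query sequence highest
        (maximum-likelihood decoding over isolated signs)."""
        return max(models, key=lambda sign: models[sign].score(seq))

    Continuous utterances additionally require decoding over sequences of signs, which the framework's decoding module handles; the sub-unit approach replaces the per-sign HMMs with models over smaller, automatically derived units.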

    Is gender encoded in the smile? A computational framework for the analysis of the smile driven dynamic face for gender recognition

    Automatic gender classification has become a topic of great interest to the visual computing research community in recent times. This is because computer-based automatic gender recognition has multiple applications including, but not limited to, face perception, age, ethnicity and identity analysis, video surveillance, and smart human-computer interaction. In this paper, we discuss a machine learning approach for efficient identification of gender purely from the dynamics of a person's smile. We show that the complex dynamics of a smile on someone's face bear a strong relation to the person's gender. To do this, we first formulate a computational framework that captures the dynamic characteristics of a smile. Our dynamic framework measures changes in the face during a smile using a set of spatial features on the overall face, the area of the mouth, the geometric flow around prominent parts of the face, and a set of intrinsic features based on the dynamic geometry of the face. This enables us to extract 210 distinct dynamic smile parameters, which form the contributing features for machine learning. For classification, we have utilised both the Support Vector Machine and the k-Nearest Neighbour algorithms. To verify the accuracy of our approach, we have tested our algorithms on two databases, namely CK+ and MUG, consisting of a total of 109 subjects. Using the k-NN algorithm with tenfold cross-validation, for example, we achieve a gender classification accuracy of over 85%. Hence, through the methodology presented here, we establish proof of the existence of strong indicators of gender dimorphism purely in the dynamics of a person's smile.
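
    The reported setup translates into a few lines with scikit-learn: a k-NN classifier evaluated with tenfold cross-validation over the 210 dynamic smile parameters. The sketch below assumes feature extraction has already produced one 210-dimensional vector per subject; the file names, the choice of k, and the use of scikit-learn are illustrative assumptions, not details from the paper.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    # Hypothetical inputs: one 210-dimensional dynamic-smile feature vector
    # and one binary gender label per subject (109 subjects in the paper).
    X = np.load("smile_features.npy")   # shape (n_subjects, 210)
    y = np.load("gender_labels.npy")    # shape (n_subjects,)

    clf = KNeighborsClassifier(n_neighbors=5)   # k is a tunable choice
    scores = cross_val_score(clf, X, y, cv=10)  # tenfold cross-validation
    print(f"mean accuracy: {scores.mean():.3f}")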

    Life in the AI era - First result of the Erasmus+ HEDY project

    HEDY - Life in the AI era is a two-year Erasmus+ project, started in November 2021, targeting a higher education audience. Its goal is to offer a comprehensive, shared view of how Artificial Intelligence (AI) is affecting our lives and reshaping our socioeconomic, cultural, and human environments, and to define which AI-related topics are of interest to different university programmes and how they should be addressed. Four free and accessible sources of information will be produced to reach these goals, the first of which is the Booklet, the subject of this paper. The Booklet is an essay defining the HEDY position on life in the AI era; its aim is to identify the challenges, opportunities, and expected impact of AI in four areas: business, governance, skills & competencies, and people & lifestyle. In this paper, we summarise the content of the Booklet. In particular, we describe our methodology for building our rationales from two sources of information: i) a literature survey, and ii) focus groups. Together, these sources offer a distinctive view of the AI panorama by combining state-of-the-art research with the first-hand opinions, questions, concerns, and ideas debated by interacting individuals. The main finding is that citizens need to be trained in AI, through courses and programmes in schools and higher education institutions, to facilitate the use and adoption of AI by young people and future generations.

    The Parthenon, September 27, 1988
