1,453 research outputs found

    Special Libraries, January 1932

    Get PDF
    Volume 23, Issue 1

    Special Libraries, February 1960

    Get PDF
    Volume 51, Issue 2

    Your "Flamingo" is My "Bird": Fine-Grained, or Not

    Full text link
    Whether what you see in Figure 1 is a "flamingo" or a "bird" is the question we ask in this paper. While fine-grained visual classification (FGVC) strives to arrive at the former, for the majority of us non-experts just "bird" would probably suffice. The real question is therefore -- how can we cater to different fine-grained definitions under divergent levels of expertise? For that, we re-envisage the traditional setting of FGVC, from single-label classification, to a top-down traversal of a pre-defined coarse-to-fine label hierarchy -- so that our answer becomes "bird" --> "Phoenicopteriformes" --> "Phoenicopteridae" --> "flamingo". To approach this new problem, we first conduct a comprehensive human study in which we confirm that most participants prefer multi-granularity labels, regardless of whether they consider themselves experts. We then discover the key intuition that coarse-level label prediction hampers fine-grained feature learning, yet fine-level features improve the learning of the coarse-level classifier. This discovery enables us to design a very simple albeit surprisingly effective solution to our new problem, where we (i) leverage level-specific classification heads to disentangle coarse-level features from fine-grained ones, and (ii) allow finer-grained features to participate in coarser-grained label predictions, which in turn helps with better disentanglement. Experiments show that our method achieves superior performance in the new FGVC setting and outperforms the state of the art on the traditional single-label FGVC problem as well. Thanks to its simplicity, our method can be easily implemented on top of any existing FGVC framework and is parameter-free. Comment: Accepted as an oral presentation at CVPR 2021. Code: https://github.com/PRIS-CV/Fine-Grained-or-No
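    The abstract above describes level-specific classification heads in which finer-grained features also feed the coarser-grained classifiers. The authors' own code is at the linked repository; purely as an illustration of that general idea (not their implementation), a minimal PyTorch sketch might look as follows. The backbone choice, feature dimensions, and the three-level hierarchy sizes are all hypothetical.

        import torch
        import torch.nn as nn
        import torchvision.models as models

        class HierarchicalClassifier(nn.Module):
            """Level-specific heads for a coarse -> mid -> fine label hierarchy.

            Finer-grained features are concatenated into the coarser-grained
            predictions, loosely following the idea sketched in the abstract.
            """

            def __init__(self, n_coarse, n_mid, n_fine, feat_dim=512):
                super().__init__()
                backbone = models.resnet18(weights=None)  # any CNN backbone would do
                self.features = nn.Sequential(*list(backbone.children())[:-1])

                # One projection per granularity level (disentangled features).
                self.proj_coarse = nn.Linear(512, feat_dim)
                self.proj_mid = nn.Linear(512, feat_dim)
                self.proj_fine = nn.Linear(512, feat_dim)

                # Coarser heads also receive the finer-level features.
                self.head_fine = nn.Linear(feat_dim, n_fine)
                self.head_mid = nn.Linear(feat_dim * 2, n_mid)        # mid + fine
                self.head_coarse = nn.Linear(feat_dim * 3, n_coarse)  # coarse + mid + fine

            def forward(self, x):
                f = self.features(x).flatten(1)  # (B, 512) pooled backbone features
                f_c, f_m, f_f = self.proj_coarse(f), self.proj_mid(f), self.proj_fine(f)
                out_fine = self.head_fine(f_f)
                out_mid = self.head_mid(torch.cat([f_m, f_f], dim=1))
                out_coarse = self.head_coarse(torch.cat([f_c, f_m, f_f], dim=1))
                return out_coarse, out_mid, out_fine

        # Training would simply sum a cross-entropy loss per level.
        model = HierarchicalClassifier(n_coarse=13, n_mid=38, n_fine=200)
        coarse, mid, fine = model(torch.randn(2, 3, 224, 224))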

    Special Libraries, November 1936

    Get PDF
    Volume 27, Issue 9

    Genres in young learner L2 English writing: A genre typology for the TRAWL (Tracking Written Learner Language) corpus

    Get PDF
    In learner corpus research, it is well known that one should control for genre when collecting and analysing written L2 (second language) English data, as genre is one factor that has been shown to account for language variation. This article presents a genre typology for annotating learner texts from the lower secondary level in Norway (ages 13-15, school years 8-10). The data are drawn from TRAWL (Tracking Written Learner Language), a new learner corpus currently under compilation. As the TRAWL corpus will be openly available for research, it is important that the typology is clearly described, which is the primary aim of the present study. Little research has been carried out on younger learners, and no detailed genre typology exists for classifying learner texts at the lower secondary level. Therefore, a genre typology developed by Ørevik (2019) for the upper secondary level was tested on data from TRAWL using a functional, social semiotic perspective and a mixed-methods (quantitative and qualitative) approach. The analysis showed that Ørevik’s typology was largely suitable for annotating the selected TRAWL data and only had to be slightly modified. By highlighting some of the theoretical and methodological challenges with the genre typology, the analysis may inform discussions about genre in L2 English teaching, which was a secondary aim of the present study. Not only do the results mirror the tensions in the international debate within genre research, but they also mirror the everyday challenges of lower secondary school teachers/examiners, who seem to adopt an eclectic approach to genre.

    En Introduksjon til Kunstig Syn i Autonom Kjøring (An Introduction to Computer Vision in Autonomous Driving)

    Get PDF
    Autonomous driving is one of the rising technologies in today’s society. A wide range of applications therefore uses this technology for the benefits it yields. For instance, an autonomous driving robot will free up the labor force and increase productivity in industries that require rapid transportation. However, gaining these benefits requires the development of reliable and accurate software and algorithms to be implemented in these autonomous driving systems. As this field has grown over the years, different companies have implemented this technology with great success. The increased focus on autonomous driving technology thus makes this a relevant topic to research. Since developing a full autonomous driving system is a demanding task, this project focuses solely on how computer vision can be used in autonomous driving systems. First, computer-vision-based autonomous driving software is developed. The software is initially implemented on a small, premade, book-sized vehicle, and this system is used to test the software’s functionality. Autonomous driving functions that perform satisfactorily on the small test vehicle are also tested on a larger vehicle to see whether the software works for other systems. Furthermore, the developed software is limited to a few autonomous driving actions: stopping when an obstacle or a stop sign is detected, driving on a simple road, and parking. Although these are only a few autonomous driving actions, they are fundamental operations that already make the autonomous driving system applicable to different use cases. Different computer vision methods for object detection have been implemented to detect different types of objects, such as obstacles and signs, and thereby determine the vehicle’s environment. The software also includes a line detection method for detecting the road and parking lines that are used for centering and parking the vehicle. Moreover, a bird’s-eye view of the physical world is created from the camera output and used as an environment map to plan the optimal path in different scenarios. Finally, these implementations are combined to build the driving logic of the vehicle, making it able to perform the driving actions mentioned above. When using the developed software for the obstacle detection task, the results showed that although actual obstacles were detected, there were scenarios where obstacles were reported even though there were none. On the other hand, the function of stopping when a stop sign is detected was highly accurate and reliable, performing as expected. With regard to the remaining two implemented actions, centering and parking the vehicle, the system struggled to achieve a promising result. Despite that, the physical validation tests without the use of a vehicle model showed positive outcomes, albeit with minor deviations from the desired result. Overall, the software shows potential to be developed further and applied in more demanding scenarios, but the current issues must be addressed first.
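    The abstract mentions line detection for road and parking lines and a bird’s-eye view built from camera frames. The thesis code is not reproduced here; as a rough, hypothetical sketch of that kind of pipeline using OpenCV (the source points, thresholds, and file name below are placeholders, not values from the project):

        import cv2
        import numpy as np

        def birds_eye_view(frame, src_pts, out_size=(400, 600)):
            """Warp a forward-facing camera frame to a top-down (bird's-eye) view.

            src_pts: four image points bounding a road region, ordered
            top-left, top-right, bottom-right, bottom-left (placeholders below).
            """
            w, h = out_size
            dst_pts = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
            M = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
            return cv2.warpPerspective(frame, M, (w, h))

        def detect_lines(image):
            """Find candidate road/parking lines with Canny edges + Hough transform."""
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(gray, 50, 150)
            lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                                    minLineLength=40, maxLineGap=20)
            return [] if lines is None else [l[0] for l in lines]  # (x1, y1, x2, y2)

        # Example usage on a single frame with a placeholder region of interest.
        frame = cv2.imread("frame.png")
        if frame is not None:
            top_down = birds_eye_view(frame, [(200, 300), (440, 300), (620, 470), (20, 470)])
            lines = detect_lines(top_down)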

    Optimizing E-Commerce Product Classification Using Transfer Learning

    Get PDF
    The global e-commerce market is growing at a rate of 23% per year. In 2017, there were 1.66 billion retail e-commerce users and worldwide sales amounted to 2.3 trillion US dollars, and e-retail revenues are projected to grow to 4.88 trillion USD in 2021. With the immense popularity that e-commerce has gained over the past few years comes the responsibility to deliver relevant results and a rich user experience. In order to do this, it is essential that the products on an e-commerce website be organized correctly into their respective categories. Misclassification of products leads to irrelevant results for users, which not only reflects badly on the website but could also lead to lost customers. With e-commerce sites now providing their portals as platforms for third-party merchants to sell their products as well, maintaining consistency in product categorization becomes difficult. Therefore, automating this process could be of great value. Automating this task on the basis of text could lead to discrepancies, since the website itself, its various merchants, and its users could all use different terminology for a product and its category. Using images therefore becomes a plausible solution to this problem. Dealing with images is best done using deep learning in the form of convolutional neural networks. This is a computationally expensive task, and in order to keep the accuracy of a traditional convolutional neural network while reducing the hours it takes for the model to train, this project uses a technique called transfer learning. Transfer learning refers to reusing the knowledge gained on one task for another, so that a new model does not need to be trained from scratch, which reduces training time. This project uses product images belonging to five categories from an e-commerce platform and develops an algorithm that can accurately classify products into their respective categories while taking as little time as possible. The goal is first to compare the performance of transfer learning against traditional convolutional networks, and then to apply transfer learning to the downloaded dataset and assess its accuracy and the time taken to classify test data that the model has never seen before.
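    As an illustration of the transfer-learning setup the abstract describes (a pretrained CNN with its base frozen and a new five-class head), a minimal PyTorch/torchvision sketch is given below. The backbone, optimizer, learning rate, and the random batch are illustrative assumptions, not the project's actual configuration.

        import torch
        import torch.nn as nn
        from torchvision import models

        # Reuse an ImageNet-pretrained backbone instead of training from scratch.
        model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

        # Freeze the pretrained layers so only the new head is trained.
        for param in model.parameters():
            param.requires_grad = False

        # Replace the final layer with a head for the five product categories.
        model.fc = nn.Linear(model.fc.in_features, 5)

        optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()

        # One training step on a placeholder batch of product images and labels.
        images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 5, (8,))
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()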

    Exploring A Stable Aspen Niche Within Aspen-Conifer Forests of Utah

    Get PDF
    Quaking aspen (Populus tremuloides Michx.) is the most widespread broadleaf tree species of North America. Increasing evidence shows that aspen has diverging ecological roles across its range as both “seral” and “stable” aspen community types. This leads us to believe that the successional pathway of aspen may not always lead to a climax conifer sere, but may in some cases consist of persisting stands of pure aspen. This study is an attempt to understand the relationship of aspen community types to climatic, physical, and biophysical variables by modeling patterns of aspen and conifer distribution using remote sensing and GIS technology. Study methodologies and results were specifically designed to aid land managers in identifying the extent and status of aspen populations as well as prioritizing aspen restoration projects. Four study sites were chosen in order to capture the geographic and climatic range of aspen. Photointerpretation of NAIP color infrared imagery and linear unmixing of Landsat Thematic Mapper imagery were used to classify dominant forest cover. A Kappa analysis indicates photointerpretation methods to be more accurate (Khat = 92.07%, N = 85) than linear unmixing (Khat = 51.05%, N = 85). At each plot, variables were calculated and derived from DAYMET data, digital elevation models, and soil surveys, then assessed for precision and ability to model aspen and conifer distributions. A generalized linear model and discriminant analysis were used to assess habitat overlap between aspen and conifer and to predict areas where “stable” aspen communities are likely to occur. Results do not provide definitive evidence for a “stable” aspen niche. However, the model indicates that 60 to 90 cm of total annual precipitation and topographic positions receiving greater than 4,500 Wh m⁻² d⁻¹ of solar radiation have a higher potential for “stable” aspen communities. Model predictions were depicted spatially within GIS as the probability of conifer encroachment. In addition, prediction‐conditioned fallout rates and receiver operating characteristic curves were used to partition the continuous model output. Categorical maps were then produced for each study site delineating potential “stable” and “seral” aspen community types using an overlay analysis with landcover maps of aspen‐conifer forests.
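    The study's model was fitted to plot-level predictors derived from DAYMET data, DEMs, and soil surveys; none of that data is reproduced here. As a hypothetical sketch of the general analysis described (a binomial GLM for the probability of conifer encroachment, with an ROC curve used to partition the continuous output), using synthetic placeholder data in scikit-learn:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_curve
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Synthetic stand-ins for plot-level predictors: annual precipitation (cm)
        # and solar radiation (Wh per square metre per day); y = 1 marks a
        # conifer-encroached plot. These numbers are placeholders, not study data.
        rng = np.random.default_rng(0)
        precip = rng.uniform(40, 120, 300)
        solar = rng.uniform(3000, 7000, 300)
        X = np.column_stack([precip, solar])
        y = (0.05 * (precip - 75) - 0.001 * (solar - 4500)
             + rng.normal(0, 1, 300) > 0).astype(int)

        # Binomial GLM (logistic regression) for the probability of encroachment.
        glm = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
        prob = glm.predict_proba(X)[:, 1]

        # ROC curve to choose a cutoff for the continuous model output,
        # e.g. the threshold maximizing TPR - FPR (Youden's J statistic).
        fpr, tpr, thresholds = roc_curve(y, prob)
        cutoff = thresholds[np.argmax(tpr - fpr)]
        stable_candidate = prob < cutoff  # low predicted encroachment probability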