
    Meetings and Meeting Modeling in Smart Environments

    In this paper we survey our research on smart meeting rooms and its relevance for augmented-reality meeting support and for the real-time or offline generation of meetings in virtual reality. The research reported here forms part of the European 5th and 6th Framework Programme projects Multi-Modal Meeting Manager (M4) and Augmented Multi-party Interaction (AMI). Both projects aim at building a smart meeting environment that can collect multimodal captures of the activities and discussions in a meeting room, with the aim of using this information as input to tools for real-time support, browsing, retrieval, and summarization of meetings. Our aim is to research (semantic) representations of what takes place during meetings in order to allow the generation, e.g. in virtual reality, of meeting activities (discussions, presentations, voting, etc.). Being able to do so also lets us look at tools that provide support during a meeting, and at tools that allow those unable to be physically present during a meeting to take part in a virtual way. This may lead to situations where the differences between real meeting participants, human-controlled virtual participants, and (semi-)autonomous virtual participants disappear.

    A cryogenic liquid-mirror telescope on the moon to study the early universe

    We have studied the feasibility and scientific potential of zenith-observing liquid-mirror telescopes with 20 to 100 m diameters located on the Moon. They would carry out deep infrared surveys to study the distant universe and follow up discoveries made with the 6 m James Webb Space Telescope (JWST) with more detailed images and spectroscopic studies. They could detect objects 100 times fainter than JWST can, observing the first high-redshift stars in the early universe and their assembly into galaxies. We explored the scientific opportunities, key technologies, and optimum locations for such telescopes, and we have demonstrated critical technologies. For example, the primary mirror requires a high-reflectivity liquid that does not evaporate in the lunar vacuum and remains liquid below 100 K: we have made a crucial demonstration by successfully coating an ionic liquid that has negligible vapor pressure. We also successfully experimented with a liquid mirror spinning on a superconducting bearing, as will be needed for the cryogenic vacuum environment of the telescope. We have investigated issues related to lunar locations, concluding that sites within a few km of a pole are ideal for deep sky coverage and long integration times. We have located ridges and crater rims within 0.5 degrees of the North Pole that are illuminated for at least some sun angles during lunar winter, providing power and temperature control. We have also identified potential problems, such as lunar dust. The issues raised by our preliminary study demand additional in-depth analyses, and they must be fully examined as part of a scientific debate we hope to start with the present article.
    Comment: 35 pages, 11 figures. To appear in Astrophysical Journal June 20 200
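    The spinning-mirror demonstration rests on textbook rotating-fluid physics: the free surface of a liquid rotating at angular velocity ω is a paraboloid with focal length f = g/(2ω²). Below is a minimal sketch of that relation under lunar gravity; the aperture, focal ratio, and gravity value are assumptions for illustration, not figures from the paper.

    ```python
    import math

    G_MOON = 1.62  # standard lunar surface gravity, m/s^2 (assumed here)

    def spin_rate_for_focal_length(f_m: float, g: float = G_MOON) -> float:
        """Angular velocity (rad/s) giving a rotating liquid a parabolic
        surface with focal length f: the surface is z = w^2 r^2 / (2g),
        so f = g / (2 w^2)  =>  w = sqrt(g / (2 f))."""
        return math.sqrt(g / (2.0 * f_m))

    # Illustrative numbers (not from the paper): a 20 m aperture at f/2.
    focal_length_m = 40.0
    w = spin_rate_for_focal_length(focal_length_m)
    print(f"spin rate: {w:.4f} rad/s = {60 * w / (2 * math.pi):.2f} rpm")
    ```

    For these assumed numbers the mirror turns at roughly 1.4 rpm, which is why a very low-friction (here, superconducting) bearing matters.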

    The Gaia Ultra-Cool Dwarf Sample -- II : Structure at the end of the main sequence

    © 2019 The Author(s). Published by Oxford University Press on behalf of the Royal Astronomical Society. We identify and investigate known late M, L, and T dwarfs in the Gaia second data release. This sample is being used as a training set in the Gaia data processing chain of the ultracool dwarfs work package. We find 695 objects in the optical spectral range M8–T6 with accurate Gaia coordinates, proper motions, and parallaxes, which we combine with published spectral types and photometry from large-area optical and infrared sky surveys. We find that 100 objects are in 47 multiple systems, of which 27 systems are published and 20 are new. These will be useful benchmark systems, and we discuss the requirements for producing a complete catalogue of multiple systems with an ultracool dwarf component. We examine the magnitudes in the Gaia passbands and find that the G_BP magnitudes are unreliable and should not be used for these objects. We examine progressively redder colour–magnitude diagrams and see a notable increase in the main-sequence scatter and a bivariate main sequence for old and young objects. We provide an absolute magnitude versus spectral subtype calibration for the G and G_RP passbands, along with linear fits over the range M8–L8 for the other passbands.
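    As a sketch of the kind of calibration the abstract describes, one can fit a straight line to absolute magnitude against a numerical spectral subtype. The subtype coding and every data point below are hypothetical stand-ins, not the paper's values.

    ```python
    import numpy as np

    # One common coding: M8 = 8, L0 = 10, ..., L8 = 18.
    # Absolute G magnitudes here are invented for illustration only.
    subtype = np.array([8, 10, 12, 14, 16, 18], dtype=float)
    abs_G = np.array([17.9, 19.0, 20.0, 20.9, 21.7, 22.4])

    # Linear fit M_G = a * SpT + b over M8-L8, one fit per passband.
    a, b = np.polyfit(subtype, abs_G, deg=1)
    print(f"M_G = {a:.3f} * SpT + {b:.2f}")
    ```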

    Computationally Efficient Target Classification in Multispectral Image Data with Deep Neural Networks

    Detecting and classifying targets in video streams from surveillance cameras is a cumbersome, error-prone, and expensive task. Often, the incurred costs are prohibitive for real-time monitoring. This leads to data being stored locally or transmitted to a central storage site for post-incident examination. The required communication links and archiving of the video data are still expensive, and this setup excludes preemptive actions to respond to imminent threats. An effective way to overcome these limitations is to build a smart camera that transmits alerts when relevant video sequences are detected. Deep neural networks (DNNs) have come to outperform humans in visual classification tasks. The concept of DNNs and convolutional networks (ConvNets) can easily be extended to make use of higher-dimensional input data such as multispectral data. We explore this opportunity in terms of achievable accuracy and required computational effort. To analyze the precision of DNNs for scene labeling in an urban surveillance scenario, we have created a dataset with 8 classes obtained in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR snapshot sensor to assess the potential of multispectral image data for target classification. We evaluate several new DNNs, showing that the spectral information fused with the RGB frames can be used to improve the accuracy of the system or to achieve similar accuracy with 3x less computational effort. We achieve a very high per-pixel accuracy of 99.1%. Even for scarcely occurring but particularly interesting classes, such as cars, 75% of the pixels are labeled correctly, with errors occurring only around the borders of the objects. This high accuracy was obtained with a training set of only 30 labeled images, paving the way for fast adaptation to various application scenarios.
    Comment: Presented at SPIE Security + Defence 2016, Proc. SPIE 9997, Target and Background Signatures I
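    A minimal sketch of the fusion idea described above, assuming PyTorch and a toy architecture (the paper's networks are larger and evaluated in detail): the 25 VIS-NIR channels are concatenated with the 3 RGB channels into a 28-channel input to a fully convolutional per-pixel classifier.

    ```python
    import torch
    import torch.nn as nn

    class MultispectralNet(nn.Module):
        """Toy fully-convolutional sketch, not the paper's architecture:
        fuses RGB and 25-channel VIS-NIR frames by early channel
        concatenation and predicts a per-pixel class map."""
        def __init__(self, n_classes: int = 8):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3 + 25, 32, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )
            self.classifier = nn.Conv2d(64, n_classes, kernel_size=1)

        def forward(self, rgb, vis_nir):
            # rgb: (B, 3, H, W); vis_nir: (B, 25, H, W), co-registered.
            x = torch.cat([rgb, vis_nir], dim=1)
            return self.classifier(self.features(x))  # per-pixel logits

    net = MultispectralNet()
    logits = net(torch.randn(1, 3, 64, 64), torch.randn(1, 25, 64, 64))
    ```

    Early concatenation is the simplest fusion point; accuracy-versus-compute trade-offs like the 3x figure come from varying where and how the two streams are merged.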

    Evaluation of optimisation techniques for multiscopic rendering

    A thesis submitted to the University of Bedfordshire in fulfilment of the requirements for the degree of Master of Science by Research.
    This project evaluates different performance optimisation techniques applied to stereoscopic and multiscopic rendering for interactive applications. The artefact features a robust plug-in package for the Unity game engine. The thesis provides background information on the performance optimisations, outlines all the findings, evaluates the optimisations, and provides suggestions for future work. The Scrum development methodology is used to develop the artefact, and a quantitative research methodology is used to evaluate the findings by measuring performance. This project concludes that each performance optimisation has specific use-case scenarios in which it benefits performance. Foveated rendering provides the greatest performance increase for both stereoscopic and multiscopic rendering, but it is also more computationally intensive, as it requires an eye-tracking solution. Dynamic resolution is very beneficial when overall frame-rate smoothness is needed and frame drops are present. Depth optimisation is beneficial for vast open environments but can decrease performance if used inappropriately.
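    As an illustration of the dynamic-resolution idea, here is a minimal frame-time-driven controller; the budget, thresholds, step size, and bounds are invented for the sketch and are not taken from the thesis.

    ```python
    # Minimal dynamic-resolution controller: shrink the render target
    # when frame time exceeds the budget, restore it when there is
    # headroom. All constants below are illustrative assumptions.

    TARGET_MS = 1000.0 / 60.0    # 60 fps frame-time budget
    STEP = 0.05                  # scale adjustment per update
    MIN_SCALE, MAX_SCALE = 0.5, 1.0

    def update_resolution_scale(scale: float, frame_ms: float) -> float:
        if frame_ms > TARGET_MS * 1.05:    # over budget: drop resolution
            scale = max(MIN_SCALE, scale - STEP)
        elif frame_ms < TARGET_MS * 0.85:  # headroom: raise resolution
            scale = min(MAX_SCALE, scale + STEP)
        return scale

    scale = 1.0
    for frame_ms in (15.0, 19.0, 22.0, 17.0, 12.0):  # simulated frames
        scale = update_resolution_scale(scale, frame_ms)
        print(f"{frame_ms:5.1f} ms -> render scale {scale:.2f}")
    ```

    This is why dynamic resolution helps most when frame drops are present: it trades a small, gradual resolution loss for a stable frame rate.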

    New nearby white dwarfs from Gaia DR1 TGAS and UCAC5/URAT

    Using an accurate Gaia TGAS 25 pc sample, nearly complete for GK stars, and selecting common proper motion (CPM) candidates from UCAC5, we search for new white dwarf (WD) companions around nearby stars with relatively small proper motions. For investigating known CPM systems in TGAS and for selecting CPM candidates in TGAS+UCAC5, we took into account the expected effect of orbital motion on the proper motion as well as the proper motion catalogue errors. Colour-magnitude diagrams (CMDs) M_J/(J-K_s) and M_G/(G-J) were used to verify CPM candidates from UCAC5. Assuming their common distance with a given TGAS star, we searched for candidates that occupy similar regions in the CMDs as the few known nearby WDs (4 in TGAS) and WD companions (3 in TGAS+UCAC5). CPM candidates with colours and absolute magnitudes corresponding neither to the main sequence nor to the WD sequence were considered doubtful or subdwarf candidates. With a minimum proper motion of 60 mas/yr, we selected three WD companion candidates, two of which are also confirmed by their significant parallaxes measured in URAT data, whereas the third may be a chance alignment of a distant halo star with a nearby TGAS star (angular separation of about 465 arcsec). One additional nearby WD candidate was found from its URAT parallax and GJK_s photometry. With HD 166435 B, orbiting a well-known G1 star at ~24.6 pc with a projected physical separation of ~700 AU, we discovered one of the hottest WDs in the solar neighbourhood, classified by us as DA2.0 ± 0.2. We also found TYC 3980-1081-1 B, a strong cool WD companion candidate around a recently identified new solar neighbour with a TGAS parallax corresponding to a distance of ~8.3 pc and our photometric classification as a ~M2 dwarf. This raises the question of whether previous assumptions on the completeness of the WD sample to a distance of 13 pc were correct.
    Comment: 9 pages, 6 figures, accepted for publication in Astronomy and Astrophysics
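    A toy version of a CPM test along the lines the abstract describes, where the match tolerance combines the catalogue errors with an allowance for orbital motion in a wide binary; the function, the allowance value, and the example numbers are all hypothetical, not the paper's actual criterion.

    ```python
    import math

    def is_cpm_candidate(pm1, pm2, err1, err2, orbit_allowance=10.0):
        """Crude common-proper-motion test (illustrative only).
        pm = (pm_RA * cos(Dec), pm_Dec) in mas/yr; err = scalar
        catalogue error in mas/yr. Accept if the proper-motion
        difference is within the combined errors plus an allowance
        (mas/yr) for orbital motion."""
        dpm = math.hypot(pm1[0] - pm2[0], pm1[1] - pm2[1])
        tol = math.hypot(err1, err2) + orbit_allowance
        return dpm <= tol

    # Hypothetical pair: a TGAS star and a UCAC5 companion candidate.
    print(is_cpm_candidate((62.0, -48.0), (58.5, -44.0), 1.5, 3.0))
    ```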

    The First Three Rungs of the Cosmological Distance Ladder

    It is straightforward to determine the size of the Earth and the distance to the Moon without making use of a telescope; the methods have been known since the 3rd century BC. However, few amateur or professional astronomers have worked this out from data they themselves have taken. Here we use a gnomon to determine the latitude and longitude of South Bend, Indiana, and College Station, Texas, and derive a value for the radius of the Earth of 6290 km, only 1.4 percent smaller than the true value. We use the method of Aristarchus and the size of the Earth's shadow during the lunar eclipse of 2011 June 15 to derive an estimate of the distance to the Moon (62.3 R_Earth), some 3.3 percent greater than the true mean value. We use measurements of the angular motion of the Moon against the background stars over the course of two nights, made with a simple cross-staff device, to estimate the Moon's distance at perigee and apogee. Finally, we use simultaneous CCD observations of asteroid 1996 HW1, obtained with small telescopes in Socorro, New Mexico, and Ojai, California, to derive a value for the Astronomical Unit of (1.59 +/- 0.19) × 10^8 km, about 6 percent too large. The data and methods presented here can easily become part of a beginning astronomy lab class.
    Comment: 34 pages, 11 figures, accepted for publication in American Journal of Physics
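    The asteroid measurement is simple trigonometric-parallax arithmetic: with two sites a known projected baseline apart, distance = baseline / parallax (in radians). Here is a worked sketch with hypothetical numbers, not the paper's actual measurements.

    ```python
    import math

    # Illustrative parallax arithmetic; both input values are made up.
    baseline_km = 1300.0     # projected separation of the two sites
    parallax_arcsec = 7.5    # measured shift of the asteroid vs stars

    parallax_rad = parallax_arcsec * math.pi / (180.0 * 3600.0)
    distance_km = baseline_km / parallax_rad
    print(f"asteroid distance ~ {distance_km:.3e} km")

    # If the ephemeris puts the asteroid at, say, 0.24 au at that
    # epoch (assumed), the implied Astronomical Unit follows directly:
    dist_in_au = 0.24
    print(f"1 au ~ {distance_km / dist_in_au:.3e} km")
    ```

    With these assumed inputs the result lands near 1.5 × 10^8 km, showing how a small-telescope baseline measurement scales up to the AU.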

    Learning Visual Clothing Style with Heterogeneous Dyadic Co-occurrences

    With the rapid proliferation of smart mobile devices, users now take millions of photos every day. These include large numbers of clothing and accessory images. We would like to answer questions like "What outfit goes well with this pair of shoes?" To answer such questions, one has to go beyond learning visual similarity and learn a visual notion of compatibility across categories. In this paper, we propose a novel learning framework to help answer these types of questions. The main idea of this framework is to learn a feature transformation from images of items into a latent space that expresses compatibility. For the feature transformation, we use a Siamese Convolutional Neural Network (CNN) architecture, where training examples are pairs of items that are either compatible or incompatible. We model compatibility based on co-occurrence in large-scale user behavior data, in particular co-purchase data from Amazon.com. To learn cross-category fit, we introduce a strategic method to sample training data, where pairs of items are heterogeneous dyads, i.e., the two elements of a pair belong to different high-level categories. While this approach is applicable to a wide variety of settings, we focus on the representative problem of learning compatible clothing style. Our results indicate that the proposed framework is capable of learning semantic information about visual style and is able to generate outfits of clothes, with items from different categories, that go well together.
    Comment: ICCV 201
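    A minimal sketch of a Siamese embedding trained with a contrastive loss, in the spirit of the framework described above; the tiny branch network, margin, and data are placeholders (the paper's branches are full-scale ConvNets), assuming PyTorch.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class StyleEmbedder(nn.Module):
        """Tiny stand-in for the shared branch of a Siamese network:
        maps an item image to a point in the compatibility space."""
        def __init__(self, dim: int = 32):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(16, dim)

        def forward(self, x):
            return self.fc(self.conv(x).flatten(1))

    def contrastive_loss(za, zb, compatible, margin=1.0):
        # Pull compatible pairs together, push incompatible ones apart.
        d = F.pairwise_distance(za, zb)
        return torch.mean(compatible * d.pow(2)
                          + (1 - compatible) * F.relu(margin - d).pow(2))

    net = StyleEmbedder()
    img_a, img_b = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)
    labels = torch.tensor([1.0, 0.0, 1.0, 0.0])  # 1 = co-purchased pair
    loss = contrastive_loss(net(img_a), net(img_b), labels)
    loss.backward()
    ```

    Because both branches share weights, compatible items from different high-level categories are drawn toward nearby points in the latent space, which is what makes cross-category outfit generation possible.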