Structure controllability of complex network based on preferential matching
Minimum driver node sets (MDSs) play an important role in studying the
structural controllability of complex networks. Recent research has shown that
MDSs tend to avoid high-degree nodes. However, this observation is based on the
analysis of a small number of MDSs, because enumerating all of the MDSs of a
network is a #P problem. Therefore, past research has not been sufficient to
arrive at a convincing conclusion. In this paper, first, we propose a
preferential matching algorithm to find MDSs that have a specific degree
property. Then, we show that the MDSs obtained by preferential matching can be
composed of high- and medium-degree nodes. Moreover, the experimental results
also show that the average degree of the MDSs of some networks tends to be
greater than that of the overall network, even when the MDSs are obtained
using the methods of previous research. Further analysis shows that whether
the driver nodes tend to be high-degree nodes is closely related to the edge
directions of the network.
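In structural controllability, an MDS can be read off a maximum matching of the network: a directed edge u→v links u's "out" copy to v's "in" copy in a bipartite graph, and nodes whose "in" copy is left unmatched are driver nodes. The sketch below uses Kuhn's augmenting-path matching with a degree-ordered tie-break as an illustrative stand-in for the paper's preferential matching; it is not the authors' exact algorithm.

```python
def driver_nodes(nodes, edges):
    # Bipartite view of the directed network: out-copy u -> list of in-copies v.
    adj = {u: [] for u in nodes}
    degree = {u: 0 for u in nodes}
    for u, v in edges:
        adj[u].append(v)
        degree[u] += 1
        degree[v] += 1

    match_in = {}  # in-copy v -> out-copy u it is matched to

    def try_match(u, seen):
        # Kuhn's augmenting-path search from out-copy u; preferring
        # high-degree in-copies biases which nodes end up unmatched.
        for v in sorted(adj[u], key=lambda x: -degree[x]):
            if v in seen:
                continue
            seen.add(v)
            if v not in match_in or try_match(match_in[v], seen):
                match_in[v] = u
                return True
        return False

    for u in sorted(nodes, key=lambda x: -degree[x]):
        try_match(u, set())

    drivers = [v for v in nodes if v not in match_in]
    # A perfectly matched network still needs one driver node.
    return drivers or [nodes[0]]
```

For a directed path 1→2→3 the matching covers the "in" copies of 2 and 3, so node 1 is the single driver node, as expected.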
Adversarially Tuned Scene Generation
Generalization performance of trained computer vision systems that use
computer graphics (CG) generated data remains limited because of the
'domain shift' between virtual and real data. Although simulated data
augmented with a few real-world samples has been shown to mitigate domain
shift and improve the transferability of trained models, guiding or
bootstrapping the virtual data generation with distributions learnt from the
target real-world domain is desirable, especially in fields where annotating
even a few real images is laborious (such as semantic labeling and intrinsic
images). In
order to address this problem in an unsupervised manner, our work combines
recent advances in CG (which aims to generate stochastic scene layouts coupled
with large collections of 3D object models) and generative adversarial training
(which aims to train generative models by measuring the discrepancy between generated
and real data in terms of their separability in the space of a deep
discriminatively-trained classifier). Our method uses iterative estimation of
the posterior density of prior distributions for a generative graphical model.
This is done within a rejection sampling framework. Initially, we assume
uniform distributions as priors on the parameters of a scene described by a
generative graphical model. As iterations proceed the prior distributions get
updated to distributions that are closer to the (unknown) distributions of
target data. We demonstrate the utility of adversarially tuned scene generation
on two real-world benchmark datasets (CityScapes and CamVid) for traffic scene
semantic labeling with a deep convolutional net (DeepLab). We realized
performance improvements of 2.28 and 3.14 points (using the IoU metric)
between the DeepLab models trained on simulated sets prepared from the scene
generation models before and after tuning to CityScapes and CamVid,
respectively.
Comment: 9 pages, accepted at CVPR 201
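The iterative prior-tuning loop described above can be sketched in a few lines: sample scene parameters from the current prior, keep samples the discriminator cannot tell apart from real data (the rejection step), then refit the prior to the accepted samples. The 1-D Gaussian "scene parameter" and the fixed discriminator below are illustrative stand-ins; the paper uses a deep discriminatively-trained classifier over rendered scenes.

```python
import random
import statistics

def discriminator_score(x, real_mean=2.0):
    # Stub discriminator: close to 1 when x resembles "real" data, small otherwise.
    return 1.0 / (1.0 + (x - real_mean) ** 2)

def tune_prior(iterations=5, n_samples=500, seed=0):
    rng = random.Random(seed)
    mean, stdev = 0.0, 5.0  # broad initial prior (stands in for uniform)
    for _ in range(iterations):
        accepted = []
        while len(accepted) < n_samples:
            x = rng.gauss(mean, stdev)
            if rng.random() < discriminator_score(x):  # rejection step
                accepted.append(x)
        # Refit the prior to the samples the discriminator "let through".
        mean = statistics.fmean(accepted)
        stdev = max(statistics.stdev(accepted), 1e-3)
    return mean, stdev
```

With each round the fitted prior drifts toward the (here known) target distribution around 2.0 and narrows, mirroring how the scene-generation priors get tuned toward the unknown target-data distributions.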
Natural Notation for the Domestic Internet of Things
This study explores the use of natural language to give instructions that
might be interpreted by Internet of Things (IoT) devices in a domestic
'smart home' environment. We start from the proposition that reminders can be
considered as a type of end-user programming, in which the executed actions
might be performed either by an automated agent or by the author of the
reminder. We conducted an experiment in which people wrote sticky notes
specifying future actions in their home. In different conditions, these notes
were addressed to themselves, to others, or to a computer agent. We analyse the
linguistic features and strategies that are used to achieve these tasks,
including the use of graphical resources as an informal visual language. The
findings provide a basis for design guidance related to end-user development
for the Internet of Things.
Comment: Proceedings of the 5th International Symposium on End-User
Development (IS-EUD), Madrid, Spain, May 201
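The framing of a reminder as end-user programming can be made concrete with a toy parser that maps a note to a (device, action, time) triple by keyword matching. The vocabulary and device names below are hypothetical; the study analyses handwritten notes rather than implementing such a parser.

```python
import re

# Hypothetical keyword vocabularies for a domestic IoT setting.
DEVICES = {"heating": "thermostat", "lights": "lighting", "oven": "oven"}
ACTIONS = {"turn on": "on", "switch on": "on",
           "turn off": "off", "switch off": "off"}

def parse_reminder(note):
    # Map a free-text reminder to a structured (device, action, time) record.
    note = note.lower()
    device = next((d for k, d in DEVICES.items() if k in note), None)
    action = next((a for k, a in ACTIONS.items() if k in note), None)
    time = re.search(r"\b(\d{1,2}(?::\d{2})?\s*(?:am|pm))\b", note)
    return {"device": device, "action": action,
            "time": time.group(1) if time else None}
```

A note such as "Turn off the heating at 9pm" resolves to the thermostat, the "off" action, and the time "9pm"; notes without a recognised device or time simply leave those fields empty, much as ambiguous sticky notes leave interpretation to the reader.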
Recommended from our members
Augmenting the field experience: a student-led comparison of techniques and technologies
In this study we report on our experiences of creating and running a student fieldtrip exercise which allowed students to compare a range of approaches to the design of technologies for augmenting landscape scenes. The main study site is around Keswick in the English Lake District, Cumbria, UK, an attractive upland environment popular with tourists and walkers. The aim of the exercise for the students was to assess the effectiveness of various forms of geographic information in augmenting real landscape scenes, as mediated through a range of techniques and technologies. These techniques were: computer-generated acetate overlays showing annotated wireframe views from certain key points; a custom-designed application running on a PDA; a mediascape running on the mScape software on a GPS-enabled mobile phone; Google Earth on a tablet PC; and a head-mounted in-field Virtual Reality system. Each group of students had all five techniques available to them, and was tasked with comparing them in the context of creating a visitor guide to the area centred on the field centre. Here we summarise their findings and reflect upon some of the broader research questions emerging from the project.
Simplifying the Development, Use and Sustainability of HPC Software
Developing software to undertake complex, compute-intensive scientific
processes requires a challenging combination of both specialist domain
knowledge and software development skills to convert this knowledge into
efficient code. As computational platforms become increasingly heterogeneous
and newer types of platform such as Infrastructure-as-a-Service (IaaS) cloud
computing become more widely accepted for HPC computations, scientists require
more support from computer scientists and resource providers to develop
efficient code and make optimal use of the resources available to them. As part
of the libhpc stage 1 and 2 projects we are developing a framework to provide a
richer means of job specification and efficient execution of complex scientific
software on heterogeneous infrastructure. The use of such frameworks has
implications for the sustainability of scientific software. In this paper we
set out our developing understanding of these challenges based on work carried
out in the libhpc project.
Comment: 4-page position paper, submission to WSSSPE13 workshop
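The "richer means of job specification" idea can be illustrated as follows: a job declares abstract requirements, and a scheduler picks the best-fitting platform from a heterogeneous pool. The platform names and fields below are invented for illustration; libhpc's actual specification format is not reproduced here.

```python
# Hypothetical heterogeneous resource pool: a local cluster and an
# IaaS cloud offering, each with capabilities and a relative cost.
PLATFORMS = [
    {"name": "local-cluster", "cores": 64,  "gpu": False, "cost": 0.0},
    {"name": "iaas-cloud",    "cores": 512, "gpu": True,  "cost": 1.2},
]

def select_platform(job, platforms=PLATFORMS):
    # Keep platforms that satisfy the job's hard requirements,
    # then prefer the cheapest feasible one.
    feasible = [p for p in platforms
                if p["cores"] >= job["min_cores"]
                and (p["gpu"] or not job["needs_gpu"])]
    if not feasible:
        raise ValueError("no platform satisfies the job specification")
    return min(feasible, key=lambda p: p["cost"])["name"]
```

A CPU-only job lands on the free local cluster, while a GPU-dependent one is routed to the cloud, letting the scientist state *what* the code needs rather than *where* it should run.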
Education in the Wild: Contextual and Location-Based Mobile Learning in Action. A Report from the STELLAR Alpine Rendez-Vous Workshop Series
Smart Photos
Recent technological leaps have been a great catalyst for changing how people interact with the world around them. Specifically, the field of Augmented Reality has led to many software and hardware advances that have formed a digital intermediary between humans and their environment. As of now, Augmented Reality is available to the select few with the means of obtaining Google Glass, Oculus Rifts, and other relatively expensive platforms. Be that as it may, the tech industry's current goal has been integration of this technology into the public's smartphones and everyday devices. One inhibitor of this goal is the difficulty of finding an Augmented Reality application whose usage could satisfy an everyday need or attraction. Augmented reality presents our world in a unique perspective that can be found nowhere else in the natural world. However, visual impact is weak without substance or meaning. The best technology is invisible, and what makes a good product is its ability to fill a void in a person's life. The most important researchers in this field are those who have been augmenting the tasks that most would consider mundane, such as overlaying nutritional information directly onto a meal [4].
In the same vein, we hope to incorporate Augmented Reality into everyday life by unlocking the full potential of a technology often believed to have already reached its peak. The humble photograph, a classic invention and unwavering enhancement to the human experience, captures moments in space and time and compresses them into a single permanent state. These two-dimensional assortments of pixels give us a physical representation of the memories we form in specific periods of our lives. We believe this representation can be further enhanced in what we like to call a Smart Photo. The idea behind a Smart Photo is to unlock the full potential in the way that people can interact with photographs. This same notion is explored in the field of Virtual Reality with inventions such as 3D movies, which provide a special appeal that ordinary 2D films cannot. The 3D technology places the viewer inside the film's environment. We intend to bridge this apparent dichotomy by processing 2D photos alongside their 3D counterparts.
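One way to picture the 2D-plus-3D pairing suggested above: a "smart photo" stores a colour and a depth value per pixel, and a new viewpoint is synthesised by shifting each pixel horizontally in inverse proportion to its depth (a simple parallax model). Real systems would use dense depth maps and handle occlusion and hole-filling; this toy one-row sketch is purely illustrative.

```python
def reproject(width, pixels, baseline=4.0):
    """pixels: list of (x, colour, depth) for one image row;
    returns the colour row as seen from a horizontally shifted viewpoint."""
    row = [None] * width
    # Paint far pixels first so nearer ones overwrite them
    # (painter's algorithm handles the simplest occlusions).
    for x, colour, depth in sorted(pixels, key=lambda p: -p[2]):
        shift = int(round(baseline / depth))  # nearer => larger parallax
        nx = x + shift
        if 0 <= nx < width:
            row[nx] = colour
    return row
```

A near object (small depth) moves further across the frame than a distant one, which is exactly the depth cue that makes the shifted view read as three-dimensional.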