Achieving Usability Through Software Architectural Styles (CHI 2000 Interactive Posters, 1-6 April 2000)
ABSTRACT Design decisions at the architecture level can have far-reaching effects on the qualities of a computer system. Recent developments in software engineering link architectural styles to quality-attribute analysis techniques to predict the effects of architectural design decisions on the eventual manifestation of quality. An Attribute-Based Architecture Style (ABAS) is a structured description of a particular software quality attribute, a particular architectural style, and the relevant qualitative and quantitative analysis techniques. Thus, it is a description that is meaningful to software engineers as they design or analyze proposed software architectures. We are producing a collection of ABASs that speak to the usability quality attribute. These ABASs will enable software engineers to make early architectural design decisions that achieve specific usability functions.
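The abstract describes an ABAS as a structured description with three parts: a quality attribute, an architectural style, and the associated analysis techniques. A minimal sketch of that structure as a record type, with illustrative field names and example values that are assumptions, not taken from the ABAS literature:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an ABAS record; field names are illustrative only.
@dataclass
class ABAS:
    quality_attribute: str                 # e.g. "usability"
    architectural_style: str               # e.g. "command objects with a history stack"
    qualitative_analyses: list = field(default_factory=list)
    quantitative_analyses: list = field(default_factory=list)

# Example instance for a usability-oriented ABAS (values are invented).
undo_abas = ABAS(
    quality_attribute="usability",
    architectural_style="command objects with a history stack (supports undo)",
    qualitative_analyses=["scenario walkthrough: user cancels a long-running operation"],
    quantitative_analyses=["bound on memory held by the undo history"],
)
```

A collection of such records, indexed by quality attribute, is one plausible way an engineer could look up styles and analyses during early design.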
Evaluating Software Architectures: Development Stability and Evolution
We survey seminal work on software architecture evaluation methods. We then look at an emerging class of methods that addresses evaluating software architectures for stability and evolution. We define architectural stability and formulate the problem of evaluating software architectures for stability and evolution. We draw attention to the use of Architecture Description Languages (ADLs) for supporting the evaluation of software architectures in general and for architectural stability in particular.
Enhancing Perceptual Attributes with Bayesian Style Generation
Deep learning has brought unprecedented progress in computer vision, and significant advances have been made in predicting subjective properties inherent to visual data (e.g., memorability, aesthetic quality, evoked emotions). Recently, some works have even proposed deep learning approaches to modify images so as to alter these properties. Following this research line, this paper introduces a novel deep learning framework for synthesizing images in order to enhance a predefined perceptual attribute. Our approach takes a natural image as input and exploits recent models for deep style transfer and generative adversarial networks to change its style in order to modify a specific high-level attribute. In contrast to previous works focusing on enhancing a specific property of a visual content, we propose a general framework and demonstrate its effectiveness in two use cases: increasing image memorability and generating scary pictures. We evaluate the proposed approach on publicly available benchmarks, demonstrating its advantages over state-of-the-art methods.
Comment: ACCV-201
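The core idea in the abstract is to couple a style-transfer mechanism with an attribute predictor so the chosen style increases the target attribute. A minimal sketch of that selection loop, using toy numpy stand-ins (a mean-intensity "predictor" and a blending "style transfer") rather than the paper's learned models:

```python
import numpy as np

rng = np.random.default_rng(0)

def attribute_score(image: np.ndarray) -> float:
    # Toy stand-in for a learned attribute predictor (e.g. a memorability
    # network); here it is simply the mean pixel intensity.
    return float(image.mean())

def apply_style(image: np.ndarray, style: np.ndarray) -> np.ndarray:
    # Toy stand-in for deep style transfer: blend the image with a style
    # texture and clip back into the valid intensity range.
    return np.clip(0.7 * image + 0.3 * style, 0.0, 1.0)

image = rng.random((8, 8))                       # "natural input image"
styles = [rng.random((8, 8)) for _ in range(5)]  # candidate style textures

# Keep the stylized result whose predicted attribute score is highest.
best = max((apply_style(image, s) for s in styles), key=attribute_score)
```

The real framework generates styles rather than enumerating a fixed candidate set, but the predictor-in-the-loop selection is the same shape.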
Semi-supervised FusedGAN for Conditional Image Generation
We present FusedGAN, a deep network for conditional image synthesis with controllable sampling of diverse images. Fidelity, diversity, and controllable sampling are the main quality measures of a good image generation model. Most existing models fall short of satisfying all three. FusedGAN can perform controllable sampling of diverse images with very high fidelity. We argue that controllability can be achieved by disentangling the generation process into stages. In contrast to stacked GANs, where multiple stages of GANs are trained separately with full supervision from labeled intermediate images, FusedGAN has a single-stage pipeline with a built-in stacking of GANs. Unlike existing methods, which require full supervision with paired conditions and images, FusedGAN can effectively leverage more abundant images without corresponding conditions during training to produce more diverse samples with high fidelity. We achieve this by fusing two generators: one for unconditional image generation and the other for conditional image generation, where the two partly share a common latent space, thereby disentangling the generation. We demonstrate the efficacy of FusedGAN on fine-grained image generation tasks such as text-to-image and attribute-to-face generation.
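The fusing idea above — an unconditional and a conditional generator that partly share a latent pathway, so unlabeled images still train the shared part — can be sketched with toy linear "generators". The weights are random stand-ins; the real FusedGAN stages are deep networks trained adversarially:

```python
import numpy as np

rng = np.random.default_rng(1)

LATENT, COND, OUT = 16, 4, 32
W_shared = rng.standard_normal((OUT, LATENT))  # shared latent -> sample pathway
W_cond = rng.standard_normal((OUT, COND))      # condition -> conditional offset

def generate_unconditional(z: np.ndarray) -> np.ndarray:
    # Uses only the shared pathway, so images without labels can train it.
    return W_shared @ z

def generate_conditional(z: np.ndarray, c: np.ndarray) -> np.ndarray:
    # Reuses the same shared pathway and adds a condition-dependent term.
    return W_shared @ z + W_cond @ c

z = rng.standard_normal(LATENT)
c = np.eye(COND)[0]  # one-hot condition, e.g. an attribute code
x_uncond = generate_unconditional(z)
x_cond = generate_conditional(z, c)
# Both samples agree on the shared structure; the condition only adds an offset.
```

In this toy form, the difference between the two outputs depends only on the condition, which is the disentanglement the shared latent space is meant to buy.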
A Style-Based Generator Architecture for Generative Adversarial Networks
We propose an alternative generator architecture for generative adversarial
networks, borrowing from style transfer literature. The new architecture leads
to an automatically learned, unsupervised separation of high-level attributes
(e.g., pose and identity when trained on human faces) and stochastic variation
in the generated images (e.g., freckles, hair), and it enables intuitive,
scale-specific control of the synthesis. The new generator improves the
state-of-the-art in terms of traditional distribution quality metrics, leads to
demonstrably better interpolation properties, and also better disentangles the
latent factors of variation. To quantify interpolation quality and
disentanglement, we propose two new, automated methods that are applicable to
any generator architecture. Finally, we introduce a new, highly varied and
high-quality dataset of human faces.
Comment: CVPR 2019 final version
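The scale-specific style control described above is implemented in the paper via adaptive instance normalization: a style vector supplies a scale and bias that replace each feature map's statistics at each resolution. A toy numpy sketch of that mechanism (random stand-ins for the mapping network and feature maps, not the actual model):

```python
import numpy as np

rng = np.random.default_rng(2)

def adain(features: np.ndarray, style_scale: float, style_bias: float) -> np.ndarray:
    """Adaptive instance normalization: normalize a feature map, then impose
    style-controlled statistics (the per-scale injection used in StyleGAN)."""
    normalized = (features - features.mean()) / (features.std() + 1e-8)
    return style_scale * normalized + style_bias

# A style vector (produced by a mapping network in the real model; random
# here) supplies one scale/bias pair per resolution of the synthesis network.
w = rng.standard_normal(6)
features = rng.random((4, 4))
coarse = adain(features, style_scale=w[0], style_bias=w[1])  # coarse scale
fine = adain(coarse, style_scale=w[2], style_bias=w[3])      # finer scale
```

Because each layer's statistics are overwritten by its own slice of the style, styles injected at different resolutions control different levels of detail, which is the intuition behind the coarse pose vs. fine texture separation.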