
    Approaches for Identifying Consumer Preferences for the Design of Technology Products: A Case Study of Residential Solar Panels

    This paper investigates ways to obtain consumer preferences for technology products to help designers identify the key attributes that contribute to a product's market success. A case study of residential photovoltaic panels is performed in the context of the California, USA, market within the 2007–2011 time span. First, interviews are conducted with solar panel installers to gain a better understanding of the solar industry. Second, a revealed preference method is implemented using actual market data and technical specifications to extract preferences. The approach is explored with three machine learning methods: artificial neural networks (ANNs), random forest decision trees, and gradient boosted regression. Finally, a stated preference self-explicated survey is conducted, and the results of the two methods are compared. Three common critical attributes are identified from a pool of 34 technical attributes: power warranty, panel efficiency, and time on market. From the survey, additional nontechnical attributes are identified: the panel manufacturer's reputation, name recognition, and aesthetics. The work shows that a combination of revealed and stated preference methods may be valuable for identifying both technical and nontechnical attributes to guide design priorities. (Center for Scalable and Integrated Nanomanufacturing)
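    A minimal sketch of the revealed-preference step, assuming a pandas DataFrame of panel market data with the three critical attribute columns and a hypothetical market_share column standing in for the preference signal; random forest importances illustrate one of the three methods named, not the paper's actual pipeline:

        # Rank technical attributes by importance with a random forest.
        # File and column names are illustrative assumptions.
        import pandas as pd
        from sklearn.ensemble import RandomForestRegressor

        panels = pd.read_csv("ca_panels_2007_2011.csv")  # hypothetical dataset
        X = panels[["power_warranty_years", "efficiency_pct", "months_on_market"]]
        y = panels["market_share"]

        model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
        for name, score in sorted(zip(X.columns, model.feature_importances_),
                                  key=lambda t: -t[1]):
            print(f"{name}: {score:.3f}")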

    Stronger Baselines for Trustable Results in Neural Machine Translation

    Interest in neural machine translation has grown rapidly as its effectiveness has been demonstrated across language and data scenarios. New research regularly introduces architectural and algorithmic improvements that lead to significant gains over "vanilla" NMT implementations. However, these new techniques are rarely evaluated in the context of previously published techniques, specifically those that are widely used in state-of-the-art production and shared-task systems. As a result, it is often difficult to determine whether improvements from research will carry over to systems deployed for real-world use. In this work, we recommend three specific methods that are relatively easy to implement and result in much stronger experimental systems. Beyond reporting significantly higher BLEU scores, we conduct an in-depth analysis of where improvements originate and what inherent weaknesses of basic NMT models are being addressed. We then compare the relative gains afforded by several other techniques proposed in the literature when starting with vanilla systems versus our stronger baselines, showing that experimental conclusions may change depending on the baseline chosen. This indicates that choosing a strong baseline is crucial for reporting reliable experimental results. Comment: To appear at the Workshop on Neural Machine Translation (WNMT)
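    A minimal sketch of the kind of comparison the paper argues for: scoring several systems against the same references with sacreBLEU so that gains are measured relative to a strong baseline rather than a vanilla one. File names are hypothetical.

        # Compare systems by corpus-level BLEU with sacreBLEU.
        import sacrebleu

        refs = [line.strip() for line in open("test.ref")]
        for system in ("vanilla.hyp", "strong_baseline.hyp", "proposed.hyp"):
            hyps = [line.strip() for line in open(system)]
            bleu = sacrebleu.corpus_bleu(hyps, [refs])
            print(f"{system}: BLEU = {bleu.score:.1f}")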

    ARPA Whitepaper

    We propose a secure computation solution for blockchain networks. The correctness of computation is verifiable even under a malicious-majority condition using an information-theoretic Message Authentication Code (MAC), and privacy is preserved using secret sharing. With a state-of-the-art multiparty computation protocol and a layer-2 solution, our privacy-preserving computation guarantees data security on the blockchain, cryptographically, while offloading the heavy computation to a few nodes. This breakthrough has several implications for the future of decentralized networks. First, secure computation can be used to support private smart contracts, where consensus is reached without exposing the information in the public contract. Second, it enables data to be shared and used in a trustless network without disclosing the raw data while in use, so that data ownership and data usage are safely separated. Last but not least, the computation and verification processes are separated, which can be viewed as computational sharding; this effectively makes transaction processing speed linear in the number of participating nodes. Our objective is to deploy our secure computation network as a layer-2 solution for any blockchain system. Smart contracts\cite{smartcontract} will be used as a bridge linking the blockchain and computation networks, and as verifiers to ensure that outsourced computation is completed correctly. To achieve this, we first develop a general MPC network with advanced features: 1) secure computation, 2) off-chain computation, 3) verifiable computation, and 4) support for dApps' needs such as privacy-preserving data exchange.
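    A toy sketch of the two primitives the abstract names, additive secret sharing over a prime field and a SPDZ-style information-theoretic MAC of the form m = alpha * x; parameters and structure are illustrative, not the production protocol:

        import secrets

        P = 2**61 - 1  # prime field modulus (toy choice)

        def share(x, n):
            """Split x into n additive shares modulo P."""
            shares = [secrets.randbelow(P) for _ in range(n - 1)]
            shares.append((x - sum(shares)) % P)
            return shares

        def reconstruct(shares):
            return sum(shares) % P

        alpha = secrets.randbelow(P)          # global MAC key
        x = 42
        x_shares = share(x, 4)
        mac_shares = share(alpha * x % P, 4)  # shares of the MAC on x

        # Verification after reconstruction: the MAC must equal alpha * x.
        assert reconstruct(mac_shares) == alpha * reconstruct(x_shares) % P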

    Machine Learning

    Machine learning can be defined in various ways, but it broadly denotes a scientific domain concerned with the design and development of theoretical and implementation tools for building systems that exhibit some human-like intelligent behavior. More specifically, machine learning addresses the ability of such systems to improve automatically through experience.

    Trust Management for Internet of Things: A Systematic Literature Review

    The Internet of Things (IoT) is a network of devices that communicate with each other through the internet and provide intelligence to industry and people. These devices run in potentially hostile environments, so the need for security is critical. Trust management aims to ensure the reliability of the network by assigning each node a trust value indicating its trust level. This paper presents an exhaustive survey of current trust management techniques for IoT, a classification based on the methods used in each work, and a discussion of open challenges and future research directions. Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
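    A minimal sketch of one common way per-node trust values are assigned in this literature, the beta reputation model, where trust is the expected value of a Beta(s+1, f+1) distribution over observed successful (s) and failed (f) interactions; the survey covers many other methods.

        from dataclasses import dataclass

        @dataclass
        class NodeTrust:
            successes: int = 0
            failures: int = 0

            def record(self, ok: bool) -> None:
                if ok:
                    self.successes += 1
                else:
                    self.failures += 1

            @property
            def value(self) -> float:
                # Expected value of Beta(s+1, f+1).
                return (self.successes + 1) / (self.successes + self.failures + 2)

        node = NodeTrust()
        for outcome in (True, True, False, True):
            node.record(outcome)
        print(f"trust = {node.value:.2f}")  # 0.67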

    Trustworthy Federated Learning: A Survey

    Federated Learning (FL) has emerged as a significant advancement in the field of Artificial Intelligence (AI), enabling collaborative model training across distributed devices while maintaining data privacy. As the importance of FL increases, addressing trustworthiness issues in its various aspects becomes crucial. In this survey, we provide an extensive overview of the current state of Trustworthy FL, exploring existing solutions and well-defined pillars relevant to Trustworthy FL. Despite the growth in literature on trustworthy centralized Machine Learning (ML) and Deep Learning (DL), further efforts are necessary to identify trustworthiness pillars and evaluation metrics specific to FL models, as well as to develop solutions for computing trustworthiness levels. We propose a taxonomy that encompasses three main pillars: Interpretability, Fairness, and Security & Privacy. Each pillar represents a dimension of trust, further broken down into different notions. Our survey covers trustworthiness challenges at every level in FL settings. We present a comprehensive architecture of Trustworthy FL, addressing the fundamental principles underlying the concept, and offer an in-depth analysis of trust assessment mechanisms. In conclusion, we identify key research challenges related to every aspect of Trustworthy FL and suggest future research directions. This comprehensive survey serves as a valuable resource for researchers and practitioners working on the development and implementation of Trustworthy FL systems, contributing to a more secure and reliable AI landscape. Comment: 45 pages, 8 figures, 9 tables
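    A minimal sketch of the training loop the survey builds on, Federated Averaging, with numpy weight vectors standing in for model parameters and a toy linear model on synthetic client data; client data never leaves the clients, only model updates are averaged.

        import numpy as np

        def client_update(global_w, local_data, lr=0.1):
            """One local gradient step on (X, y) for a linear model (toy)."""
            X, y = local_data
            grad = X.T @ (X @ global_w - y) / len(y)
            return global_w - lr * grad

        rng = np.random.default_rng(0)
        clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(5)]
        w = np.zeros(3)
        for _ in range(10):
            local_ws = [client_update(w, data) for data in clients]
            w = np.mean(local_ws, axis=0)  # server averages client models
        print(w)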

    Attribute Sentiment Scoring With Online Text Reviews: Accounting for Language Structure and Attribute Self-Selection

    The authors address two novel and significant challenges in using online text reviews to obtain attribute-level ratings. First, they introduce the problem of inferring attribute-level sentiment from text data to the marketing literature and develop a deep learning model to address it. While extant bag-of-words topic models are fairly good at attribute discovery based on the frequency of word or phrase occurrences, associating sentiments with attributes requires exploiting the spatial and sequential structure of language. Second, they illustrate how to correct for attribute self-selection (reviewers choose the subset of attributes to write about) in metrics of attribute-level restaurant performance. Using Yelp.com reviews for empirical illustration, they find that a hybrid deep learning (CNN-LSTM) model, in which the CNN and LSTM exploit the spatial and sequential structure of language respectively, provides the best performance in accuracy, training speed, and training-data size requirements. The model does particularly well on the "hard" sentiment classification problems. Further, accounting for attribute self-selection significantly impacts sentiment scores, especially for attributes that are frequently missing.
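    A minimal sketch of a hybrid CNN-LSTM sentiment classifier of the kind the paper evaluates: a convolution captures local phrase patterns, an LSTM models word order. All hyperparameters (vocabulary size, sequence length, filter counts) are placeholder assumptions, not the paper's configuration.

        import tensorflow as tf
        from tensorflow.keras import layers

        model = tf.keras.Sequential([
            layers.Input(shape=(100,)),                          # reviews padded to 100 tokens
            layers.Embedding(input_dim=20000, output_dim=128),   # 20k-word vocabulary
            layers.Conv1D(64, kernel_size=5, activation="relu"), # local n-gram features
            layers.MaxPooling1D(pool_size=2),
            layers.LSTM(64),                                     # sequential structure
            layers.Dense(3, activation="softmax"),               # neg / neutral / pos
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.summary()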

    Trust Management for Context-Aware Composite Services

    In the areas of cloud computing, big data, and the Internet of Things, composite services are designed to effectively address complex levels of user requirements. A major challenge for composite services management is the dynamic, continuously changing run-time environment, which can raise exceptional situations such as a service's execution time greatly increasing or a service becoming unavailable. Composite services in this environmental context have difficulty securing an acceptable quality of service (QoS). The need to trigger dynamic adaptations then becomes urgent for service-based systems, which also require trust management to ensure service level agreement (SLA) compliance. To face this dynamism and volatility, context-aware composite services (i.e., run-time self-adaptable services) are designed to continue offering their functionalities without compromising their operational efficiency, boosting the added value of the composition.

    The literature on adaptation management for context-aware composite services mainly adopts the closed-world assumption that the boundary between the service and its run-time environment is known, which is impractical for dynamic services in the open world, where environmental contexts are unexpected. Besides, the literature relies either on centralized architectures that suffer from management overhead or on distributed architectures that suffer from communication overhead to manage service adaptation. Moreover, the problem of encountering malicious constituent services at run time still needs further investigation toward a more efficient solution; such services exploit environmental contexts for their own benefit by providing unsatisfactory QoS values or by maliciously colluding with other services. Furthermore, the literature overlooks the fact that composite services data is relational and instead relies on propositional data (i.e., flattened data containing the information without the structure). This contradicts the fact that services are statistically dependent, since the QoS values of a service are correlated with those of other services.

    This thesis aims to address these gaps by capitalizing on methods from software engineering, computational intelligence, and machine learning. To support context-aware composite services in the open world, dynamic adaptation mechanisms are devised at design time to guide the running services. To this end, the thesis proposes an adaptation solution based on a feature model that captures the variability of the composite service and deliberates the inter-dependency relations among QoS constraints. We apply the master-slave adaptation pattern to coordinate the self-adaptation process at run time based on the MAPE loop (Monitor-Analyze-Plan-Execute), as sketched below. We model the adaptation process as a multi-objective optimization problem and solve it with a meta-heuristic search technique constrained by the SLA and the feature model constraints, enabling the master to resolve conflicting QoS goals of the service adaptation. On the slave side, we propose an adaptation solution that immediately substitutes failed constituent services, with no need for complex and costly global adaptation. To support decision making at the different levels of adaptation, we first propose an online SLA violation prediction model that requires only small amounts of end-to-end QoS data.
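    A skeleton of the master's MAPE round under stated assumptions: the probe/adaptation calls and the SLA threshold are illustrative stand-ins, and plan() stubs out the thesis's meta-heuristic multi-objective search.

        SLA = {"response_time_ms": 200}         # hypothetical SLA threshold

        class StubService:                      # hypothetical managed service
            def probe(self):
                return {"response_time_ms": 250}
            def apply(self, action, target):
                print(f"executing {action} on {target}")

        def analyze(metrics):
            return [k for k, limit in SLA.items() if metrics[k] > limit]

        def plan(violations):
            # Stand-in for the meta-heuristic multi-objective planner.
            return [("substitute_service", v) for v in violations]

        def mape_round(service):
            metrics = service.probe()                  # Monitor
            violations = analyze(metrics)              # Analyze
            if violations:
                for action, target in plan(violations):   # Plan
                    service.apply(action, target)          # Execute

        mape_round(StubService())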
    We then extend the model to comprehensively consider the service dependencies that exist in the real business world at run time by leveraging a relational dependency network, thus enhancing prediction accuracy. In addition, we propose a trust management model for services based on the dependency network; in particular, we predict the probability of delivering a satisfactory QoS under changing environmental contexts by leveraging the cyclic dependency relations among QoS metrics and environmental context variables. Moreover, we develop a service reputation evaluation technique based on the power of mass collaboration, in which collusion attacks are explicitly detected. As another contribution of this thesis, we introduce a trust bootstrapping mechanism for newcomer services that is resilient to the white-washing attack, using the concept of social adoption. The thesis reports simulation results on real datasets showing the efficiency of the proposed solutions.
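    An illustrative stand-in for the trust prediction task only: estimating the probability of satisfactory QoS from context features. The thesis uses a relational dependency network; this sketch deliberately substitutes a plain logistic regression on synthetic data to show the shape of the task, not its method.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 3))   # e.g., load, latency, peer trust (assumed features)
        y = (X @ np.array([1.5, -2.0, 1.0]) + rng.normal(size=500) > 0).astype(int)

        clf = LogisticRegression().fit(X, y)
        context = np.array([[0.2, -0.5, 1.0]])
        print(f"P(satisfactory QoS) = {clf.predict_proba(context)[0, 1]:.2f}")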