
    Activity Report: Automatic Control 2012


    Performance and Reliability Analysis of Cross-Layer Optimizations of NAND Flash Controllers

    NAND flash memories are becoming the predominant technology in the implementation of mass storage systems for both embedded and high-performance applications. However, when considering data and code storage in non-volatile memories (NVMs), such as NAND flash memories, reliability and performance become a serious concern for system designers. Designing NAND flash based systems around worst-case scenarios leads to a waste of resources in terms of performance, power consumption, and storage capacity. This is clearly in contrast with the demand for run-time reconfigurability, adaptivity, and resource optimization in today's computing systems. There is a clear trend toward supporting differentiated access modes in flash memory controllers, each one setting a different trade-off point in the performance-reliability optimization space. This is supported by the possibility of tuning the NAND flash memory performance, reliability, and power consumption by acting on several tuning knobs, such as the flash programming algorithm and the flash error correcting code (ECC). However, to successfully exploit these degrees of freedom, it is mandatory to clearly understand the effect that the combined tuning of these parameters has on the full NVM sub-system. This paper performs a comprehensive quantitative analysis of the benefits provided by the run-time reconfigurability of an MLC NAND flash controller through the combined effect of adaptable memory programming circuitry coupled with run-time adaptation of the ECC correction capability. The full non-volatile memory (NVM) sub-system is taken into account, from the characterization of the low-level circuitry to the effect of the adaptation on a wide set of realistic benchmarks, in order to give readers a clear picture of the benefit this combined adaptation would provide at the system level.
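    To make the performance-reliability trade-off concrete, the short sketch below estimates the probability that a codeword exceeds the correction capability of a t-error-correcting ECC and picks the weakest correction strength that still meets a target failure rate for a given raw bit error rate (RBER). The codeword size, RBER values, and target are illustrative assumptions, not figures from the paper, and the independent-bit-error binomial model is a simplification of real MLC NAND behavior.

        # Minimal sketch (not the paper's model): choose the smallest ECC correction
        # capability t that keeps the codeword failure probability below a target,
        # given a measured raw bit error rate (RBER). All numbers are illustrative.
        from math import comb

        def codeword_failure_prob(n_bits, t, rber):
            """Probability of more than t bit errors in an n_bits codeword,
            assuming independent bit errors at rate rber (binomial model)."""
            p_le_t = sum(comb(n_bits, k) * rber**k * (1 - rber)**(n_bits - k)
                         for k in range(t + 1))
            return 1.0 - p_le_t

        def pick_correction_strength(n_bits, rber, target, t_max=40):
            """Smallest t whose failure probability meets the target (t_max if none does)."""
            for t in range(1, t_max + 1):
                if codeword_failure_prob(n_bits, t, rber) <= target:
                    return t
            return t_max

        # Hypothetical 1 KiB codeword; RBER grows as the flash wears out.
        for rber in (1e-4, 5e-4, 1e-3):
            t = pick_correction_strength(n_bits=8192, rber=rber, target=1e-9)
            print(f"RBER={rber:.0e} -> correct up to t={t} bit errors")

    An adaptive controller in the spirit of the paper would pair such a choice of correction capability with a matching programming mode, trading programming speed for a lower raw bit error rate.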

    Digital transformation in the manufacturing industry: business models and smart service systems

    The digital transformation enables innovative business models and smart services, i.e. individual services that are based on real-time data analyses as well as information and communications technology. Smart services are not only a theoretical construct but are also highly relevant in practice. Nine research questions are answered, all related to aspects of smart services and corresponding business models. The dissertation proceeds from a general overview, through the topic of installed base management as a precondition for many smart services in the manufacturing industry, to exemplary applications in the form of predictive maintenance activities. A comprehensive overview of smart service research is provided, and research gaps that are not yet closed are presented. It is shown how a business model can be developed in practice. A closer look is taken at installed base management. Installed base data combined with condition monitoring data leads to digital twins, i.e. dynamic models of machines including all components, their current conditions, applications, and interaction with the environment. Design principles for an information architecture for installed base management, and its application within a use case in the manufacturing industry, indicate how digital twins can be structured. In this context, predictive maintenance services are used for the purpose of concretization. State-oriented maintenance planning and optimized spare-parts inventory are examined as exemplary approaches for smart services that contribute to high machine availability. A taxonomy of predictive maintenance business models shows their diversity. These topics are viewed from both theoretical and practical viewpoints, focusing on the manufacturing industry. Established research methods are used to ensure academic rigor. Practical problems are considered to guarantee practical relevance. A research project providing the background, and the resulting collaboration with experts from several companies, also contribute to this. The dissertation provides a comprehensive overview of smart service topics and innovative business models for the manufacturing industry, enabled by the digital transformation. It contributes to a better understanding of smart services in theory and practice and emphasizes the importance of innovative business models in the manufacturing industry.
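    As a small illustration of state-oriented maintenance planning, the sketch below extrapolates a condition-monitoring signal to a failure threshold and triggers a maintenance order when the estimated remaining useful life falls below the spare-parts lead time. The signal, threshold, and lead time are illustrative assumptions, not material from the dissertation.

        # Minimal sketch (illustrative, not from the dissertation): trigger maintenance
        # when the extrapolated condition signal is expected to cross its failure
        # threshold within the spare-parts lead time.
        def estimate_rul(history, threshold):
            """Estimate remaining useful life (in samples) by linear extrapolation
            of the condition signal's recent trend. Returns None if no degradation."""
            if len(history) < 2:
                return None
            slope = (history[-1] - history[0]) / (len(history) - 1)  # average trend
            if slope <= 0:  # signal not degrading
                return None
            return (threshold - history[-1]) / slope

        def maintenance_due(history, threshold, lead_time):
            rul = estimate_rul(history, threshold)
            return rul is not None and rul <= lead_time

        vibration = [2.0, 2.1, 2.3, 2.6, 3.0, 3.5]   # hypothetical condition readings
        print(maintenance_due(vibration, threshold=5.0, lead_time=7))  # order spare part?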

    Ecology-based planning. Italian and French experimentations

    This paper examines some French and Italian experiments in the construction of green infrastructure (GI) in relation to their techniques and methodologies. The construction of a multifunctional green infrastructure can generate a number of relevant benefits able to face the increasing challenges of climate change and resilience (for example, social, ecological, and environmental benefits through the recognition of the concept of ecosystem services) and could ease the achievement of a performance-based approach. This approach, unlike the traditional prescriptive one, helps to attain a better and more flexible land-use integration. In both countries, GI play an important role in counteracting land take and, thanks to their adaptive and cross-scale nature, they help to generate a resilient approach to urban plans and projects. Due to their flexible and site-based nature, GI can be adapted, albeit through different methodologies and approaches, to both urban and extra-urban contexts. On the one hand, France, through its strong national policy on ecological networks, recognizes them as one of the major planning strategies toward a more sustainable development of territories; on the other hand, Italy has no national policy and its Regions still have a hard time integrating them into existing planning tools. From this perspective, Italian experiments in GI construction appear to be a simple and sporadic add-on to urban and regional plans.

    EMPIRICAL CHARACTERIZATION OF SOFTWARE QUALITY

    The research topic focuses on the characterization of software quality considering the main software elements: people, process, and product. Many attributes (size, language, testing techniques, etc.) could have an effect on the quality of software. In this thesis we aim to understand the impact of the attributes of the three P's (people, product, process) on the quality of software by empirical means. Software quality can be interpreted in many ways, such as customer satisfaction, stability, and defects. In this thesis we adopt 'defect density' as the quality measure. Therefore the research focuses on empirical evidence of the impact of the attributes of the three P's on software defect density. For this reason, empirical research methods (systematic literature reviews, case studies, and interviews) are utilized to collect empirical evidence. Each of these research methods helps to extract empirical evidence about the object under study, and statistical methods are used for data analysis. Considering the product attributes, we have studied size, language, development mode, age, complexity, module structure, module dependency, and module quality, and their impact on project quality. Considering the process attributes, we have studied process maturity and structure, and their impact on project quality. Considering the people attributes, we have studied experience and capability, and their impact on project quality. Moreover, in the process category, we have studied one testing approach, called 'exploratory testing', and its impact on the quality of software. Exploratory testing is a widely used software-testing practice that means simultaneous learning, test design, and test execution. We have analyzed the weaknesses of exploratory testing and proposed a hybrid testing approach in an attempt to improve quality. Concerning the product attributes, we found that there exists a significant difference in quality between open and closed source projects, Java and C projects, and large and small projects. Very small and defect-free modules have an impact on software quality. Different complexity metrics have different impacts on software quality when size is considered. Product complexity as defined in Table 53 has a partial impact on software quality. However, software age and module dependencies are not factors that characterize software quality. Concerning the people attributes, we found that platform experience, application experience, and language and tool experience have a significant impact on software quality. Regarding capability, we found that programmer capability has a partial impact on software quality, whereas analyst capability has no impact on software quality. Concerning process attributes, we found that there is no difference in quality between projects developed under CMMI and those that are not developed under CMMI. Regarding the CMMI levels, there is a difference in software quality, particularly between CMMI level 1 and CMMI level 3. Comparing different process types, we found that hybrid projects are of better quality than waterfall projects. Process maturity as defined by the SEI CMM has a partial impact on software quality. Concerning exploratory testing, we found that the weaknesses of exploratory testing induce testing technical debt; therefore a process is defined in conjunction with scripted testing in an attempt to reduce the technical debt associated with exploratory testing. The findings are useful for both researchers and practitioners to evaluate their projects.
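    Since the thesis uses defect density as its quality measure, the small sketch below shows how such a comparison between project groups might be set up; the projects, numbers, and grouping are invented for illustration and are not the thesis data.

        # Minimal sketch (hypothetical data, not the thesis dataset): compare defect
        # density (defects per KLOC) between two groups of projects.
        from statistics import median

        def defect_density(defects, kloc):
            """Defects per thousand lines of code."""
            return defects / kloc

        # (defects, KLOC) for two illustrative groups of projects.
        open_source = [(120, 85.0), (40, 30.0), (300, 210.0)]
        closed_source = [(95, 50.0), (60, 25.0), (180, 70.0)]

        dd_open = [defect_density(d, k) for d, k in open_source]
        dd_closed = [defect_density(d, k) for d, k in closed_source]

        print(f"median defect density, open source:   {median(dd_open):.2f} defects/KLOC")
        print(f"median defect density, closed source: {median(dd_closed):.2f} defects/KLOC")
        # In the thesis, group differences like this are assessed with statistical
        # hypothesis tests rather than compared by eye.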

    Environmental and territorial modelling for planning and design

    Between 5 and 8 September 2018 the tenth edition of the INPUT conference took place in Viterbo, hosted in the beautiful setting of the University of Tuscia and its DAFNE Department. INPUT is managed by an informal group of Italian academic researchers working in many fields related to the exploitation of informatics in planning. This tenth edition pursued multiple objectives with a holistic, boundary-less character, to face the complexity of today's socio-ecological systems, following a systemic approach aimed at problem solving. In particular, the conference aimed to present the state of the art of modeling approaches employed in urban and territorial planning in national and international contexts. Moreover, the conference hosted a Geodesign workshop led by Carl Steinitz (Harvard Graduate School of Design) and Hrishi Ballal (via Skype), with Tess Canfield and Michele Campagna. Finally, on the last day of the conference, the QGIS hackfest took place, in which over 20 free software developers from all over Italy discussed the latest news and updates from the QGIS network. The acronym INPUT was born as INformatics for Urban and Regional Planning. In the transition to graphics, unintentionally, the first term was transformed into "Innovation", a fine example of serendipity, in which a small mistake turns into something new and intriguing. The opportunity is taken to propose that the organizers and the scientific committee of the next edition formalize this change of the acronym. This tenth edition was focused on Environmental and Territorial Modeling for planning and design. This was considered a fundamental theme, especially in relation to environmental sustainability, which requires a rigorous and in-depth analysis of processes, a need that can be met by territorial information systems and, above all, by the modeling and simulation of processes. Within this topic, models are useful in a managerial approach to highlight the many aspects of complex city and landscape systems. Consequently, their use must be deeply critical: not for rigid forecasts, but as an aid to management decisions about complex systems.

    6G Vision, Value, Use Cases and Technologies from European 6G Flagship Project Hexa-X

    While 5G is being deployed and the economy and society begin to reap the associated benefits, the research and development community is starting to focus on the next, 6th Generation (6G) of wireless communications. Although papers are available in the literature on visions, requirements, and technical enablers for 6G from various academic perspectives, there is a lack of joint industry and academic work towards 6G. In this paper, a consolidated view on vision, values, use cases, and key enabling technologies from leading industry stakeholders and academia is presented. The authors represent the mobile communications ecosystem, with competences spanning hardware, link layer, and networking aspects, as well as standardization and regulation. The second contribution of the paper is revisiting and analyzing the key concurrent initiatives on 6G. A third contribution is the identification and justification of six key 6G research challenges: (i) "connecting", in the sense of empowering, exploiting, and governing, intelligence; (ii) realizing a network of networks, i.e., leveraging existing networks and investments while reinventing roles and protocols where needed; (iii) delivering extreme experiences, when and where needed; (iv) (environmental, economic, social) sustainability to address the major challenges of current societies; (v) trustworthiness as an ingrained fundamental design principle; (vi) supporting cost-effective global service coverage. A fourth contribution is a comprehensive specification of a concrete first set of use cases for 6G jointly defined by industry and academia, e.g., massive twinning, cooperative robots, immersive telepresence, and others. Finally, the anticipated evolutions in the radio, network, and management/orchestration domains are discussed.

    Playing Planning Poker in Crowds: Human Computation of Software Effort Estimates

    Reliable, cost-effective effort estimation remains a considerable challenge for software projects. Recent work has demonstrated that the popular Planning Poker practice can produce reliable estimates when undertaken within a software team of knowledgeable domain experts. However, the process depends on the availability of experts and can be time-consuming to perform, making it impractical for large-scale or open source projects that may curate many thousands of outstanding tasks. This paper reports on a full study to investigate the feasibility of using crowd workers, supplied with limited information about a task, to provide comparably accurate estimates using Planning Poker. We describe the design of a Crowd Planning Poker (CPP) process implemented on Amazon Mechanical Turk and the results of a substantial set of trials, involving more than 5000 crowd workers and 39 diverse software tasks. Our results show that a carefully organised and selected crowd of workers can produce effort estimates that are of similar accuracy to those of a single expert.
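    The paper's exact CPP protocol on Mechanical Turk is not reproduced here; the sketch below only illustrates the generic Planning Poker aggregation step it builds on: estimators pick from a fixed card deck over repeated rounds until the spread of cards is small enough, and the final estimate is the median. The deck values, convergence rule, round limit, and example estimates are assumptions for illustration.

        # Minimal sketch of multi-round Planning Poker aggregation (illustrative only;
        # not the paper's CPP implementation on Mechanical Turk).
        from statistics import median

        DECK = [1, 2, 3, 5, 8, 13, 20, 40, 100]  # typical Planning Poker cards

        def snap_to_deck(value):
            """Snap a raw estimate to the nearest card in the deck."""
            return min(DECK, key=lambda card: abs(card - value))

        def converged(cards, max_spread_cards=2):
            """Converged when all chosen cards lie within a few adjacent deck positions."""
            positions = [DECK.index(c) for c in cards]
            return max(positions) - min(positions) <= max_spread_cards

        # One illustrative task: two rounds of estimates (hours) from five estimators.
        rounds = [
            [3, 20, 8, 5, 40],   # round 1: wide spread, discuss and re-estimate
            [5, 8, 8, 5, 13],    # round 2: tighter after discussion
        ]
        for i, raw in enumerate(rounds, start=1):
            cards = [snap_to_deck(e) for e in raw]
            print(f"round {i}: cards={cards}, converged={converged(cards)}")
            if converged(cards):
                print(f"final estimate: {median(cards)}")
                break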

    Data driven techniques for on-board performance estimation and prediction in vehicular applications.

    The abstract is in the attachment.

    Modeling User-Affected Software Properties for Open Source Software Supply Chains

    Background: The Open Source Software development community relies heavily on users of the software and on contributors outside the core developers to produce top-quality software and provide long-term support. However, the relationship between a software project and its contributors, in terms of exactly how they are related through dependencies, and the ways in which the users of a project affect many of its properties, are not very well understood. Aim: My research covers a number of aspects related to answering the overarching question of modeling the software properties affected by users and the supply chain structure of software ecosystems, viz. 1) understanding how software usage affects its perceived quality; 2) estimating the effects of indirect usage (e.g. dependent packages) on software popularity; 3) investigating the patch submission and issue creation patterns of external contributors; 4) examining how the patch acceptance probability is related to the contributors' characteristics; 5) a related topic, the identification of bots that commit code, aimed at improving the accuracy of these and other similar studies, was also investigated. Methodology: Most of the research questions are addressed by studying the NPM ecosystem, with data from various sources like the World of Code, GHTorrent, and the GitHub API. Different supervised and unsupervised machine learning models, including regression, Random Forest, Bayesian networks, and clustering, were used to answer the appropriate questions. Results: 1) Software usage affects its perceived quality even after accounting for code complexity measures. 2) The number of dependents and dependencies of a software package were observed to predict the change in its popularity with good accuracy. 3) Users interact (contribute issues or patches) primarily with their direct dependencies, and rarely with transitive dependencies. 4) A user's earlier interaction with the repository to which they are contributing a patch, and their familiarity with related topics, were important predictors of the chance of a pull request getting accepted. 5) BIMAN, a systematic methodology for identifying bots, was developed. Conclusion: Different aspects of how users and their characteristics affect different software properties were analyzed, which should lead to a better understanding of the complex interaction between software developers and users/contributors.
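    As an illustration of the second result (predicting popularity change from dependents and dependencies), the sketch below fits a Random Forest regressor on a synthetic feature table; the feature names mirror those mentioned in the abstract, but the data, model settings, and accuracy are illustrative assumptions, not results from the study.

        # Minimal sketch (synthetic data, not the study's NPM dataset): predict the
        # change in a package's popularity from its dependents and dependencies.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(0)
        n = 500
        dependents = rng.poisson(20, n)          # packages that depend on this one
        dependencies = rng.poisson(10, n)        # packages this one depends on
        # Synthetic target: popularity change loosely driven by dependents.
        popularity_change = 0.8 * dependents - 0.2 * dependencies + rng.normal(0, 5, n)

        X = np.column_stack([dependents, dependencies])
        X_train, X_test, y_train, y_test = train_test_split(X, popularity_change,
                                                            random_state=0)

        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X_train, y_train)
        print("R^2 on held-out synthetic data:",
              round(r2_score(y_test, model.predict(X_test)), 2))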