
    Challenges and Barriers of Using Low Code Software for Machine Learning

    As big data becomes ubiquitous across many domains, more and more stakeholders seek to develop Machine Learning (ML) applications on their data. The success of an ML application usually depends on close collaboration between ML experts and domain experts. However, the shortage of ML engineers remains a fundamental problem. Low-code machine learning tools and platforms (i.e., AutoML) aim to democratize ML development for domain experts by automating many repetitive tasks in the ML pipeline. This research presents an empirical study of around 14k posts (questions and accepted answers) from Stack Overflow (SO) that contain AutoML-related discussions. We examine how these topics are spread across the Machine Learning Life Cycle (MLLC) phases, and how popular and difficult they are. This study offers several interesting findings. First, we find 13 AutoML topics that we group into four categories. The MLOps topic category (43% of questions) is the largest, followed by Model (28%), Data (27%), and Documentation (2%). Second, most questions are asked during the model training (29%) (i.e., implementation) and data preparation (25%) MLLC phases. Third, AutoML practitioners find the MLOps topic category the most challenging, especially topics related to model deployment & monitoring and automated ML pipelines. These findings have implications for all three AutoML stakeholders: AutoML researchers, AutoML service vendors, and AutoML developers. Collaboration between academia and industry can improve different aspects of AutoML, such as better DevOps/deployment support and tutorial-based documentation.
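    The pipeline automation mentioned above can be pictured with a minimal sketch: at its core, an AutoML-style tool enumerates candidate configurations, evaluates each on held-out data, and keeps the best one. The sketch below is not from the study; the toy model, data, and function names are invented purely for illustration.

```typescript
// Minimal sketch of what a low-code/AutoML tool automates under the hood:
// sweep a configuration space, score each candidate on validation data, and
// return the winner. Real platforms add search strategies, feature
// engineering, and deployment support on top of this loop.

type Sample = { features: number[]; label: number };

// Toy 1-nearest-neighbour "model", parameterised by a distance exponent p.
function predict(train: Sample[], x: number[], p: number): number {
  let best = Infinity;
  let label = 0;
  for (const s of train) {
    const d = s.features.reduce((acc, v, i) => acc + Math.abs(v - x[i]) ** p, 0);
    if (d < best) { best = d; label = s.label; }
  }
  return label;
}

function accuracy(train: Sample[], valid: Sample[], p: number): number {
  const hits = valid.filter(s => predict(train, s.features, p) === s.label).length;
  return hits / valid.length;
}

// The "automated" part: try every candidate configuration and keep the best.
function autoSelect(train: Sample[], valid: Sample[], candidates: number[]): number {
  let bestP = candidates[0];
  let bestScore = -1;
  for (const p of candidates) {
    const score = accuracy(train, valid, p);
    if (score > bestScore) { bestScore = score; bestP = p; }
  }
  return bestP;
}

const train: Sample[] = [
  { features: [0, 0], label: 0 }, { features: [1, 1], label: 1 },
  { features: [0.2, 0.1], label: 0 }, { features: [0.9, 0.8], label: 1 },
];
const valid: Sample[] = [
  { features: [0.1, 0.2], label: 0 }, { features: [0.8, 0.9], label: 1 },
];
console.log("selected distance exponent:", autoSelect(train, valid, [1, 2, 3]));
```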

    V3CMM: a 3-view component meta-model for model-driven robotic software development

    There are many voices in the robotics community demanding a qualitative improvement in the robotics software development process and tools, in order to increase product flexibility, adaptability, and overall quality, while reducing cost and time-to-market. This article describes a first step towards a model-driven approach to robotics software development, based on the definition of highly reusable and platform-independent component-based design models. The proposed approach revolves around the V3CMM modeling language and the definition of different model transformations for deriving both special-purpose models (e.g., models suited for analysis or simulation purposes) and lower-level design models, in which platform-specific and application-dependent details can be progressively included. The article describes the tool-chain implemented to support the different stages of the proposed MDE process, including (1) the definition of component-based architectural models using the V3CMM platform-independent modeling language, (2) the automatic transformation of the V3CMM component-based models into equivalent object-oriented designs, described in terms of the UML standard, and (3) the transformation of the UML models into code for the Ada 2005 object-oriented programming language. In order to show the feasibility and the benefits of the proposal, a simple (yet complete) case study regarding the design of a Cartesian robot is presented. This research has been funded by the Spanish CICYT Project EXPLORE (ref. TIN2009-08572), the Fundación Séneca Regional Project COMPAS-R (ref. 11994/PI/09), and the Spanish Research Network on Model-Driven Software Development (ref. TIN2008-00889-E).
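    The model-to-code idea behind such a tool-chain can be pictured with a small, purely schematic sketch: a platform-independent component description is turned into skeleton code for some target. All names below are hypothetical and the output is plain text for illustration only; the actual V3CMM tooling performs model-to-model transformations to UML and then generates Ada 2005.

```typescript
// Schematic sketch (hypothetical names, not the actual V3CMM tooling): a
// platform-independent component "model" and a naive model-to-text
// transformation that emits a class skeleton for one possible target.

interface Port { name: string; direction: "provided" | "required" }
interface Component { name: string; ports: Port[] }

// Platform-independent model of a trivial Cartesian-robot-style controller.
const axisController: Component = {
  name: "AxisController",
  ports: [
    { name: "setPosition", direction: "provided" },
    { name: "currentPosition", direction: "required" },
  ],
};

// Model-to-text transformation: each component becomes a class skeleton,
// each port becomes a method stub annotated with its direction.
function emitSkeleton(c: Component): string {
  const methods = c.ports
    .map(p => `  // ${p.direction} port\n  ${p.name}(): void { /* to be refined */ }`)
    .join("\n");
  return `class ${c.name} {\n${methods}\n}`;
}

console.log(emitSkeleton(axisController));
```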

    Advances of digital twins for predictive maintenance

    Digital twins (DT), which aim to improve the performance of physical entities by leveraging their virtual replicas, have seen significant growth in recent years. DT technology has been explored in different industrial sectors and for a variety of topics, e.g., predictive maintenance (PdM). To understand the state of the art of DT in PdM, this paper focuses on recent advances in how DT has been deployed for PdM, especially on the challenges faced and the opportunities identified. Based on the relevant research efforts identified, we classify them into three main branches: 1) frameworks reported for application, 2) modelling methods, and 3) interaction between the physical entity and the virtual replica. We analyse the techniques and applications in each category and summarize the benefits that PdM derives from the DT paradigm. Finally, challenges of current research and opportunities for future research are discussed, especially concerning framework standardisation for DT-driven PdM, the need for high-fidelity models, holistic evaluation methods, and the multi-component, multi-level modelling issue.
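    The physical-entity/virtual-replica interaction at the core of DT-driven PdM can be sketched very roughly as follows. The class, sensor field, and threshold below are hypothetical stand-ins for the far richer physics-based or data-driven models discussed in the survey.

```typescript
// Minimal sketch (hypothetical names, not a framework from the survey) of the
// interaction behind DT-driven predictive maintenance: the virtual replica
// mirrors incoming sensor readings from the physical asset, tracks a simple
// degradation indicator, and flags the asset for maintenance before failure.

interface SensorReading { timestamp: number; vibration: number }

class VirtualReplica {
  private history: SensorReading[] = [];
  constructor(private readonly vibrationLimit: number) {}

  // Interaction step: the physical asset pushes a reading, the twin updates.
  ingest(reading: SensorReading): void {
    this.history.push(reading);
  }

  // Crude "model": average vibration over the most recent readings serves as
  // the degradation indicator; crossing the limit triggers a maintenance flag.
  maintenanceDue(windowSize = 3): boolean {
    const recent = this.history.slice(-windowSize);
    if (recent.length === 0) return false;
    const avg = recent.reduce((sum, r) => sum + r.vibration, 0) / recent.length;
    return avg > this.vibrationLimit;
  }
}

const twin = new VirtualReplica(0.8);
[0.2, 0.4, 0.7, 0.9, 1.1].forEach((v, i) =>
  twin.ingest({ timestamp: i, vibration: v })
);
console.log("maintenance due:", twin.maintenanceDue());
```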

    Isolation mechanisms for browser-based software architectures

    Traditional backend-oriented web applications are increasingly being replaced by frontend applications, which execute directly in the user's browser. Web application performance has been shown to directly affect business performance, and frontend applications enable unique performance improvements. However, building complex applications within the browser is still a new and poorly understood field, and engineering efforts within the field are often plagued by quality issues. This thesis addresses the current research gap around frontend applications by investigating the applicability of isolation mechanisms available in browsers to frontend application architecture. We review the important publications around the topic, forming an overview of current research and current best practices in the field. We use this understanding, combined with relevant industry experience, to categorize the available isolation mechanisms into four classes: state and variable isolation, isolation from the DOM, isolation within the DOM, and execution isolation. For each class, we provide background and concrete examples of the related quality issues, as well as tools for their mitigation. Finally, we use the ISO 25010 quality standard to evaluate the impact of these isolation mechanisms on frontend application quality. Our results suggest that applying these isolation mechanisms has the potential to significantly improve several key areas of frontend application quality, most importantly compatibility and maintainability, but also performance and security. Many of these mechanisms also involve trade-offs with other quality attributes, most commonly performance. Future work could include developing frontend application architectures that leverage these isolation mechanisms to their full potential.
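    As a rough illustration of one of the four classes, "isolation within the DOM", the sketch below uses the standard Shadow DOM API so that a widget's markup and styles neither leak into nor are broken by the host page. The custom element name and styles are invented for the example.

```typescript
// Illustrative sketch of "isolation within the DOM" via the Shadow DOM API.
// The shadow root gives the component its own DOM subtree and style scope,
// isolating its CSS from global page styles and vice versa.

class StatusBadge extends HTMLElement {
  connectedCallback(): void {
    const root = this.attachShadow({ mode: "open" });
    root.innerHTML = `
      <style>
        /* Scoped: does not affect other .label elements on the page. */
        .label { padding: 2px 6px; border-radius: 4px; background: #e0f2e9; }
      </style>
      <span class="label">${this.getAttribute("text") ?? "ok"}</span>
    `;
  }
}

customElements.define("status-badge", StatusBadge);
// Usage in the host page: <status-badge text="build passing"></status-badge>
```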

    XML-based approaches for the integration of heterogeneous bio-molecular data

    Background: Today's public database infrastructure spans a very large collection of heterogeneous biological data, opening new opportunities for molecular biology, biomedical, and bioinformatics research, but also raising new problems for their integration and computational processing. Results: In this paper we survey the most interesting and novel approaches for the representation, integration, and management of different kinds of biological data by exploiting XML and the related recommendations and approaches. Moreover, we present new and interesting cutting-edge approaches for the appropriate management of heterogeneous biological data represented in XML. Conclusion: XML has succeeded in the integration of heterogeneous biomolecular information and has established itself as the syntactic glue for biological data sources. Nevertheless, a large variety of XML-based data formats have been proposed, making effective integration of bioinformatics data schemes difficult. The adoption of a few semantically rich standard formats is urgently needed to achieve seamless integration of the current biological resources.
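    The integration problem can be pictured with a toy sketch: two sources describe the same protein with different XML vocabularies, and a thin mapping layer folds them into one common record. The element names and record type below are invented rather than taken from any real bio-data standard; DOMParser is the standard browser XML parser, and a Node.js XML library would play the same role there.

```typescript
// Toy sketch of heterogeneous XML integration: two invented vocabularies for
// the same protein are parsed and mapped into one common record shape.

const sourceA = `<protein><id>P69905</id><organism>Homo sapiens</organism></protein>`;
const sourceB = `<entry accession="P69905"><species name="Homo sapiens"/><length>142</length></entry>`;

interface ProteinRecord { accession: string; organism?: string; length?: number }

const parser = new DOMParser();

// Mapping layer for source A's element-centric schema.
function fromSourceA(xml: string): ProteinRecord {
  const doc = parser.parseFromString(xml, "application/xml");
  return {
    accession: doc.querySelector("id")?.textContent ?? "",
    organism: doc.querySelector("organism")?.textContent ?? undefined,
  };
}

// Mapping layer for source B's attribute-centric schema.
function fromSourceB(xml: string): ProteinRecord {
  const doc = parser.parseFromString(xml, "application/xml");
  return {
    accession: doc.querySelector("entry")?.getAttribute("accession") ?? "",
    organism: doc.querySelector("species")?.getAttribute("name") ?? undefined,
    length: Number(doc.querySelector("length")?.textContent ?? NaN) || undefined,
  };
}

// Merge the per-source records that share an accession into one integrated view.
const merged: ProteinRecord = { ...fromSourceA(sourceA), ...fromSourceB(sourceB) };
console.log(merged);
```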