
    Achieving Quality through Software Maintenance and Evolution: on the role of Agile Methodologies and Open Source Software

    Agile methodologies, open source software development, and emerging new technologies are driving disruptive changes in software engineering. Since effort estimation is pivotal for effective project management in the agile context, in the first part of the thesis we contribute to improving effort estimation by devising a real-time story point classifier, designed in collaboration with an industrial partner and exploiting publicly available data on open source projects. We demonstrate that, after an initial training on at least 300 issue reports, the classifier estimates a new issue in less than 15 seconds with a mean magnitude of relative error between 0.16 and 0.61. In addition, issue type, summary, description, and related components prove to be project-dependent features pivotal for story point estimation. Since story points are the most popular effort estimation metric in the agile context, in the second study presented in the thesis we investigate the role of agile methodologies in software maintenance and evolution, and show their clear influence on the refactoring research field over the last 15 years. In the later part of the thesis, we focus on recent technologies to understand their impact on software engineering. We start by proposing a specialized blockchain-oriented software engineering, on the basis of the peculiar challenges the blockchain sector must confront and of statistical data retrieved from a corpus of open source blockchain-oriented software repositories, identified relying upon the 2016 Moody’s Blockchain Report. We advocate the need for new professional roles, enhanced security and reliability, novel modeling languages, and specialized metrics, along with new research directions focusing on collaboration among large teams, testing, and specialized tools for the creation of smart contracts. Along with the blockchain, in the later part of this work we also study the growing mobile sector. More specifically, we focus on the relationships between software defects and the use of the underlying system API, showing that our findings are aligned with those in the literature, namely, that applications which are more connected to API classes are also more defect-prone. Finally, in the last work presented in the dissertation, we conducted a statistical analysis of 20 open source object-oriented systems, 10 written in the highly popular language Java and 10 in the rising language Python. We leveraged two statistical distribution functions, the log-normal and the double Pareto distributions, to provide good fits, both in Java and Python, for three metrics, namely, the NOLM, NOM, and NOS metrics. The study, among other findings, revealed that the variability of the number of methods used in Python classes is lower than in Java classes, and that Java classes, on average, feature fewer lines of code than Python classes.
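
    Since the mean magnitude of relative error (MMRE) drives the evaluation above, a minimal sketch of how that metric is computed may help; the story point values below are hypothetical examples, not data from the thesis.

        # Mean Magnitude of Relative Error (MMRE) over a set of issues.
        # The story point values below are hypothetical examples.
        def mmre(actual, estimated):
            """Average of |actual - estimated| / actual across issues."""
            errors = [abs(a - e) / a for a, e in zip(actual, estimated)]
            return sum(errors) / len(errors)

        actual_points = [3, 5, 8, 2, 13]     # points assigned by the team
        estimated_points = [3, 8, 5, 2, 8]   # points predicted by a classifier
        print(f"MMRE = {mmre(actual_points, estimated_points):.2f}")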

    Concepts for handling heterogeneous data transformation logic and their integration with TraDE middleware

    The concept of programming-in-the-large became a substantial part of modern computer-based scientific research with the advent of web services and the concept of orchestration languages. While the notions of workflows and service choreographies help to reduce the complexity by providing means to support the communication between involved participants, the process generally remains complex. The TraDE Middleware and its underlying concepts were introduced in order to provide means for performing the modeled data exchange across choreography participants in a transparent and automated fashion. However, in order to achieve both transparency and automation, the TraDE Middleware must be capable of transforming the data along its path. The transparency of data transformation can be difficult to achieve due to various factors, including the diversity of required execution environments and complicated configuration processes, as well as the heterogeneity of data transformation software, which results in tedious integration processes often involving the manual wrapping of software. Having a method of handling data transformation applications in a standardized manner can help to simplify the process of modeling and executing scientific service choreographies with the TraDE concepts applied. In this master's thesis we analyze various aspects of this problem and conceptualize an extensible framework for handling data transformation applications. The resulting prototypical implementation of the presented framework provides means to address data transformation applications in a standardized manner.
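
    The abstract leaves the framework's interface open; purely as a hedged illustration of what handling heterogeneous transformation applications "in a standardized manner" could look like, a minimal sketch with entirely hypothetical names:

        # Hypothetical sketch of a uniform wrapper interface for
        # heterogeneous data transformation applications; all names
        # are illustrative, not the thesis' actual design.
        from abc import ABC, abstractmethod

        class DataTransformation(ABC):
            """Standardized contract a wrapped transformation must fulfil."""

            @abstractmethod
            def inputs(self) -> list[str]:
                """Names of the data elements this transformation consumes."""

            @abstractmethod
            def run(self, data: dict[str, bytes]) -> dict[str, bytes]:
                """Execute the wrapped software and return the produced data."""

        class CsvToHdf5(DataTransformation):
            """Example wrapper around an external command-line converter."""

            def inputs(self):
                return ["measurements.csv"]

            def run(self, data):
                # A real wrapper would invoke the external tool here,
                # e.g. via subprocess, inside its required environment.
                return {"measurements.h5": data["measurements.csv"]}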

    Using Crowd-Based Software Repositories to Better Understand Developer-User Interactions

    Software development is a complex process. To serve the final software product to the end user, developers need to rely on a variety of software artifacts throughout the development process. The term software repository used to denote only containers of source code, such as version control systems; more recent usage has generalized the concept to include a plethora of software development artifact kinds and their related meta-data. Broadly speaking, software repositories include version control systems, technical documentation, issue trackers, question and answer sites, distribution information, etc. Software repositories can be based on a specific project (e.g., the bug tracker for Firefox) or be crowd-sourced (e.g., questions and answers on technical Q&A websites). Crowd-based software artifacts are created as by-products of developer-user interactions, which are sometimes referred to as communication channels. In this thesis, we investigate three distinct crowd-based software repositories that follow different models of developer-user interactions. We believe that through a better understanding of crowd-based software repositories, we can identify challenges in software development and provide insights to improve the software development process. In our first study, we investigate Stack Overflow, the largest collection of programming-related questions and answers. On Stack Overflow, developers interact with other developers to create crowd-sourced knowledge in the form of questions and answers. The results of the interactions (i.e., the question threads) become valuable information to the entire developer community. Prior research on Stack Overflow tacitly assumes that questions receive answers directly on the platform and that no interaction is required during the process. Meanwhile, the platform allows attaching comments to questions, which form discussions of the question. Our study found that question discussions occur for 59.2% of questions on Stack Overflow. For discussed and solved questions, the discussion begins before the accepted answer is submitted in 80.6% of the cases. The results of our study show the importance and nuances of interactions in technical Q&A. We then study dotfiles, a set of publicly shared user-specific configuration files for software tools. There is a culture of sharing dotfiles within the developer community, where the idea is to learn from other developers' dotfiles and share your variants. The interaction of dotfiles sharing can be viewed as developers sourcing information from other developers, adapting the information to their own needs, and sharing their adaptations back to the community. Our study on dotfiles suggests that sharing dotfiles is a common practice among developers: 25.8% of the most starred users on GitHub have a dotfiles repository. We provide a taxonomy of the commonly tracked dotfiles and a qualitative study on the commits in dotfiles repositories. We also leveraged a state-of-the-art time-series clustering technique (K-shape) to identify code churn patterns for dotfile edits. This study is the first step towards understanding the practices of maintaining and sharing dotfiles. Finally, we study app stores, the platforms that distribute software products and contain many non-technical attributes (e.g., ratings and reviews) of software products. Three major stakeholders interact with each other in app stores: the app store owner, who governs the operation of the app store; developers, who publish applications on the app store; and users, who browse and download applications in the app store. App stores often provide means of interaction between all three actors (e.g., app reviews, store policy) and sometimes interactions within the same actor group (e.g., developer forums). We surveyed existing app stores to extract key features of app store operation. We then labeled a representative set of app stores collected by web queries. K-means was applied to the labeled app stores to detect natural groupings. We observed a diverse set of app stores through this process. Our observations show that, fundamentally, app stores operate differently, rather than following a single model that describes them all. This study provides insights into how app stores can affect software development. In summary, we investigated software repositories containing software artifacts created from different developer-user interactions. These software repositories are essential for software development in providing reference information (i.e., Stack Overflow), improving development productivity (i.e., dotfiles), and helping distribute software products to end users (i.e., app stores).
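
    A minimal sketch of the K-shape clustering step mentioned above, assuming the tslearn implementation of K-shape and synthetic churn series standing in for the real dotfile commit data:

        # Sketch: clustering weekly code-churn series of dotfiles with
        # K-shape, via tslearn; the churn data here is synthetic.
        import numpy as np
        from tslearn.clustering import KShape
        from tslearn.preprocessing import TimeSeriesScalerMeanVariance

        rng = np.random.default_rng(0)
        # 30 dotfiles repositories x 52 weeks of churn (lines changed/week).
        churn = rng.poisson(lam=5, size=(30, 52)).astype(float)

        # K-shape compares shapes, so each series is z-normalized first.
        churn = TimeSeriesScalerMeanVariance().fit_transform(churn)

        ks = KShape(n_clusters=3, random_state=0)
        labels = ks.fit_predict(churn)
        print(labels[:10])   # cluster id (churn pattern) per repository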

    Identifying reusable knowledge in developer instant messaging communication.

    Context and background: Software engineering is a complex and knowledge-intensive activity. Required knowledge (e.g., about technologies, frameworks, and design decisions) changes fast and the knowledge needs of those who design, code, test and maintain software constantly evolve. On the other hand, software developers use a wide range of processes, practices and tools where developers explicitly and implicitly “produce” and capture different types of knowledge. Problem: Software developers use instant messaging tools (e.g., Slack, Microsoft Teams and Gitter) to discuss development-related problems, share experiences and collaborate in projects. This communication takes place in chat rooms that accumulate potentially relevant knowledge to be reused by other developers. Therefore, in this research we analyze whether there is reusable knowledge in developer instant messaging communication by exploring (a) which instant messaging platforms can be a source of reusable knowledge, and (b) software engineering themes that represent the main discussions of developers in instant messaging communication. We also analyze how this reusable knowledge can be identified with the use of topic modeling (a natural language processing technique to discover abstract topics in text) by (c) surveying the literature on how topic modeling has been applied in software engineering research, and (d) evaluating how topic models perform with developer instant messages. Method: First, we conducted a Field Study through an exploratory case study and a reflexive thematic analysis to check whether there is reusable knowledge in developer instant messaging communication, and if so, what this knowledge (main themes discussed) is. Then, we conducted a Sample Study to explore how reusable knowledge in developer instant messaging communication can be identified. In this study, we applied a literature survey and software repository mining (i.e., short-text topic modeling). Findings and contributions: We (a) developed a comparison framework for instant messaging tools, (b) identified a map of the main themes discussed in chat rooms of an instant messaging tool (Gitter, a platform used by software developers), (c) provided a comprehensive literature review that offers insights and references on the use of topic modeling in software engineering, and (d) provided an evaluation of the performance of topic models applied to developer instant messages based on topic coherence metrics and human judgment for topic quality.
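
    As an illustration of the topic modeling step evaluated above, a minimal sketch assuming gensim's LDA implementation, with toy messages standing in for the real Gitter data; gensim also offers a CoherenceModel for the coherence-based evaluation the abstract mentions.

        # Sketch: discovering topics in developer chat messages with LDA
        # (gensim). The messages are toy stand-ins for real Gitter logs.
        from gensim import corpora
        from gensim.models import LdaModel

        messages = [
            "how do I configure the webpack dev server proxy",
            "the build fails with a missing dependency error",
            "which version of node does this library support",
            "proxy settings are ignored after upgrading webpack",
        ]
        texts = [m.lower().split() for m in messages]

        dictionary = corpora.Dictionary(texts)
        corpus = [dictionary.doc2bow(t) for t in texts]

        lda = LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)
        for topic_id, words in lda.print_topics(num_words=4):
            print(topic_id, words)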

    Process Models for Learning Patterns in FLOSS Repositories

    Evidence suggests that Free/Libre Open Source Software (FLOSS) environments provide unlimited learning opportunities. Community members engage in a number of activities, both during their interaction with their peers and while making use of these environments’ repositories. To date, numerous studies document the existence of learning processes in FLOSS through surveys or by means of questionnaires filled in by FLOSS project participants. At the same time, there is a surge in developing tools and techniques for extracting and analyzing data from different FLOSS data sources, which has birthed a new field called Mining Software Repositories (MSR). In spite of these growing tools and techniques for mining FLOSS repositories, there are few or no existing approaches that provide empirical evidence of learning processes directly from these repositories. Therefore, in this work we sought to trigger such an initiative by proposing an approach based on Process Mining. With this technique, we aim to trace learning behaviors from FLOSS participants’ trails of activities as recorded in FLOSS repositories. We identify the participants as Novices and Experts. A Novice is defined as any FLOSS member that benefits from a learning experience by acquiring new skills, while the Expert is the provider of these skills. The significance of our work is mainly twofold. First and foremost, we extend the MSR field by showing the potential of mining FLOSS repositories by applying Process Mining techniques. Secondly, our work provides critical evidence that boosts the understanding of learning behavior in FLOSS communities by analyzing the relevant repositories. In order to accomplish this, we have proposed and implemented a methodology that follows a seven-step approach: developing an appropriate terminology or ontology for learning processes in FLOSS, contextualizing learning processes through a-priori models, generating Event Logs, generating corresponding process models, interpreting and evaluating the value of process discovery, performing conformance analysis, and verifying a number of formulated hypotheses with regard to tracing learning patterns in FLOSS communities. The implementation of this approach has resulted in the development of the Ontology of Learning in FLOSS (OntoLiFLOSS) environments, which defines the terms needed to describe learning processes in FLOSS and provides a visual representation of these processes through Petri net-like Workflow nets. Moreover, another novelty pertains to the mining of FLOSS repositories by defining and describing the preliminaries required for preprocessing FLOSS data before applying Process Mining techniques for analysis. Through a step-by-step process, we detail how the Event Logs are constructed by generating key phrases and making use of Semantic Search. Taking a FLOSS environment called OpenStack as our data source, we apply our proposed techniques to identify learning activities based on key phrase catalogs and classification rules expressed through pseudo code, as well as the appropriate Process Mining tool. We thus produced Event Logs that are based on the semantic content of messages in OpenStack’s Mailing archives, Internet Relay Chat (IRC) messages, Reviews, Bug reports and Source code to retrieve the corresponding activities. Considering these repositories in light of the three learning process phases (Initiation, Progression and Maturation), we produced an Event Log for each participant (Novice or Expert) in every phase on the corresponding dataset. Hence, we produced 14 Event Logs that helped build 14 corresponding process maps, which are visual representations of the flow of learning activities in FLOSS for each participant. These process maps provide critical indications of the presence of learning processes in the analyzed repositories. The results show that learning activities do occur at a significant rate during message exchange on both Mailing archives and IRC messages. The slight differences between the two datasets can be highlighted in two ways. First, the involvement of Experts is greater on IRC than on Mailing archives, with 7.22% and 0.36% of Expert involvement respectively on IRC forums and Mailing lists. This can be justified by the differences in the length of messages sent on these two datasets: the average length of sent messages is 3261 characters for an email compared to 60 characters for a chat message. The evidence produced from this mining experiment solidifies the findings in terms of the existence of learning processes in FLOSS as well as the scale at which they occur. While the Initiation phase shows the Novice as the most involved in the start of the learning process, during the Progression phase the involvement of the Expert increases significantly. In order to trace the advanced skills in the Maturation phase, we look at repositories that store data about developing and creating code, examining and reviewing the code, and identifying and fixing possible bugs. Therefore, we consider three repositories: Source code, Bug reports and Reviews. The results obtained in this phase largely justify the choice of these three datasets to track learning behavior at this stage. Both the Bug reports and the Source code demonstrate the commitment of the Novice to seek answers and interact as much as possible in strengthening the acquired skills. With a participation of 49.22% for the Novice against 46.72% for the Expert on Bug reports, and 46.19% against 42.04% on Source code, the Novice still engages significantly in learning. On the last dataset, Reviews, we notice an increase in the Expert’s role: the Expert performs 40.36% of the total number of activities against 22.17% for the Novice. The last steps of our methodology steer the comparison of the defined a-priori models with final models that describe how learning processes occur according to the actual behavior from Event Logs. Our attempts at producing process models start with depicting process maps to track the actual behavior as it occurs in OpenStack repositories, before concluding with final Petri net models representative of learning processes in FLOSS as a result of conformance analysis. For every dataset in the corresponding learning phase, we produce 3 process maps, respectively depicting the overall learning behavior for all FLOSS community members (Novice and Expert together), the Novice, and the Expert. In total, we produced 21 process maps, empirically describing process models on real data, and 14 process models in the form of Petri nets for every participant on each dataset. We make use of Artificial Immune System (AIS) algorithms to merge the 14 Event Logs that uniquely capture the behavior of every participant on different datasets in the three phases. We then reanalyze the resulting logs in order to produce 6 global models that together provide a comprehensive depiction of participants’ learning behavior in FLOSS communities. This depiction suggests that the Workflow nets introduced as our a-priori models give a rather simplistic representation of learning processes in FLOSS. Nevertheless, our experiments with Event Logs from OpenStack repositories, from process discovery to conformance checking, demonstrate that the real learning behaviors are more complete and largely subsume these simplistic a-priori models. Finally, our methodology has proved to be effective both in providing a novel alternative for mining FLOSS repositories and in providing empirical evidence that describes how knowledge is exchanged in FLOSS environments. Moreover, our results enrich the MSR field by providing a reproducible step-by-step problem-solving approach that can be customized to answer subsequent research questions in FLOSS repositories using Process Mining.
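
    The log-to-model pipeline described above follows the standard process mining workflow; a minimal sketch, assuming the pm4py library and a placeholder XES file in place of the OpenStack-derived Event Logs:

        # Sketch: discovering a Petri net from an event log and checking
        # conformance with pm4py; "learning_log.xes" is a placeholder path.
        import pm4py

        log = pm4py.read_xes("learning_log.xes")

        # Inductive miner: event log -> Petri net with initial/final markings.
        net, im, fm = pm4py.discover_petri_net_inductive(log)

        # Token-based replay fitness as a simple conformance measure.
        fitness = pm4py.fitness_token_based_replay(log, net, im, fm)
        print(fitness)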

    An Unexpected Journey: Towards Runtime Verification of Multiagent Systems and Beyond

    The Trace Expression formalism derives from work started in 2012 and is mainly used to specify and verify interaction protocols at runtime, but other applications have been devised. More specifically, this thesis describes how to extend and apply such a formalism in the engineering process of distributed artificial intelligence systems (such as multiagent systems). This thesis extends the state of the art through four different contributions: 1. Theoretical: the thesis extends the original formalism in order to also represent parametric and probabilistic specifications (parametric trace expressions and probabilistic trace expressions, respectively). 2. Algorithmic: the thesis proposes algorithms for verifying trace expressions at runtime in a decentralized way. The algorithms have been designed to be as general as possible, but their implementation and experimentation address scenarios where the modelled and observed events are communicative events (interactions) inside a multiagent system. 3. Application: the thesis analyzes the relations between runtime and static verification (e.g. model checking), proposing hybrid integrations in both directions. First of all, the thesis proposes a trace expression model checking approach, showing how to statically verify an LTL property on a trace expression specification. After that, the thesis presents a novel approach for supporting static verification through the addition of monitors at runtime (post-process). 4. Implementation: the thesis presents RIVERtools, a tool supporting the writing, the syntactic analysis and the decentralization of trace expressions.
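
    Full trace expressions are beyond a short snippet, but the core runtime verification idea, consuming observed events one at a time and rejecting the trace at the first deviation from the protocol, can be sketched with a plain finite-state monitor; the protocol and event names below are hypothetical, and trace expressions are strictly more expressive than such an FSM.

        # Sketch of runtime verification of an interaction protocol with a
        # finite-state monitor; protocol and event names are hypothetical.
        PROTOCOL = {
            ("idle", "request"): "waiting",
            ("waiting", "accept"): "serving",
            ("waiting", "refuse"): "idle",
            ("serving", "inform"): "idle",
        }

        def monitor(events, state="idle"):
            """Consume events one by one; fail fast on a protocol violation."""
            for i, event in enumerate(events):
                nxt = PROTOCOL.get((state, event))
                if nxt is None:
                    return f"violation at event {i}: {event!r} in state {state!r}"
                state = nxt
            return "trace accepted so far"

        print(monitor(["request", "accept", "inform"]))   # accepted
        print(monitor(["request", "inform"]))             # violation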

    Intelligent maintenance management in a reconfigurable manufacturing environment using multi-agent systems

    Thesis (M. Tech.) -- Central University of Technology, Free State, 2010. Traditional corrective maintenance is both costly and ineffective. In some situations it is more cost effective to replace a device than to maintain it; however, it is far more likely that the cost of the device far outweighs the cost of performing routine maintenance. These device-related costs, coupled with the profit loss due to reduced production levels, make this reactive maintenance approach unacceptably inefficient in many situations. Blind predictive maintenance, without considering the actual physical state of the hardware, is an improvement, but is still far from ideal. Simply maintaining devices on a schedule, without taking into account their operational hours and workload, can be a costly mistake. The inefficiencies associated with these approaches have contributed to the development of proactive maintenance strategies. These approaches take the device health state into account. For this reason, proactive maintenance strategies are inherently more efficient than the aforementioned traditional approaches. Predicting the health degradation of devices allows for easier anticipation of the required maintenance resources and costs. Maintenance can also be scheduled to accommodate production needs. This work presents the design and simulation of an intelligent maintenance management system that incorporates device health prognosis with maintenance schedule generation. The simulation scenario provided prognostic data to be used to schedule devices for maintenance. A production rule engine was provided with a feasible starting schedule, which was then improved by adhering to a set of criteria. Benchmarks were conducted to show the benefit of optimising the starting schedule, and the results were presented as proof. Improving on existing maintenance approaches will result in several benefits for an organisation. Eliminating the need to address unexpected failures or perform maintenance prematurely will ensure that the relevant resources are available when they are required. This will in turn reduce the expenditure related to wasted maintenance resources without compromising the health of devices or systems in the organisation.
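
    The thesis pairs health prognosis with schedule generation and improvement; a toy sketch of that last step, with entirely hypothetical devices, failure horizons and penalty criterion:

        # Toy sketch: improving a feasible maintenance schedule by choosing
        # the ordering that minimises a penalty; all data is hypothetical.
        import itertools

        # (device, predicted weeks until failure) from a prognosis step.
        devices = [("pump", 2), ("conveyor", 1), ("press", 3), ("robot", 2)]

        def penalty(schedule):
            """Penalise maintaining a device after its predicted failure week."""
            return sum(max(0, week - horizon)
                       for week, (_, horizon) in enumerate(schedule, start=1))

        best = min(itertools.permutations(devices), key=penalty)
        print(best, penalty(best))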

    An Extensible Static Analysis Framework for Automated Analysis, Validation and Performance Improvement of Model Management Programs

    Model Driven Engineering (MDE) is a state-of-the-art software engineering approach which adopts models as first-class artefacts. In MDE, modelling tools and task-specific model management languages are used to reason about the system under development and to (automatically) produce software artefacts such as working code and documentation. Existing tools which provide state-of-the-art model management languages lack support for automatic static analysis, both for error detection (especially when models defined in various modelling technologies are involved within a multi-step MDE development process) and for performance optimisation (especially when very large models are involved in model management operations). This thesis investigates the hypothesis that static analysis of model management programs in the context of MDE can help with the detection of potential runtime errors and can also be used to achieve automated performance optimisation of such programs. To assess the validity of this hypothesis, a static analysis framework for the Epsilon family of model management languages is designed and implemented. The static analysis framework is evaluated in terms of its support for analysis of task-specific model management programs involving models defined in different modelling technologies, and its ability to improve the performance of model management programs operating on large models.
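
    The framework itself targets the Epsilon languages, which are out of scope for a snippet; as a generic illustration of the kind of error-detection check involved, a minimal sketch that flags accesses to properties a metamodel does not define (the metamodel and "program" below are hypothetical):

        # Toy sketch of a static check in the spirit of the framework:
        # flag accesses to properties the metamodel does not define.
        METAMODEL = {"Book": {"title", "pages"}, "Author": {"name"}}

        # (variable type, accessed property) pairs as a stand-in for an AST.
        accesses = [("Book", "title"), ("Book", "publisher"), ("Author", "name")]

        def check(accesses, metamodel):
            """Return a warning per access to an undefined property."""
            return [f"{t}.{p}: property not defined in metamodel"
                    for t, p in accesses
                    if p not in metamodel.get(t, set())]

        for warning in check(accesses, METAMODEL):
            print(warning)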

    Multi-objective Search-based Mobile Testing

    Despite the tremendous popularity of mobile applications, mobile testing still relies heavily on manual testing. This thesis presents mobile test automation approaches based on multi-objective search. We introduce three approaches: Sapienz (for native Android app testing), Octopuz (for hybrid/web JavaScript app testing) and Polariz (for using crowdsourcing to support search-based mobile testing). These three approaches represent the primary scientific and technical contributions of the thesis. Since crowdsourcing is, itself, an emerging research area, and less well understood than search-based software engineering, the thesis also provides the first comprehensive survey on the use of crowdsourcing in software testing (in particular) and in software engineering (more generally). This survey represents a secondary contribution. Sapienz is an approach to Android testing that uses multi-objective search-based testing to automatically explore and optimise test sequences, minimising their length while simultaneously maximising their coverage and fault revelation. The results of empirical studies demonstrate that Sapienz significantly outperforms both the state-of-the-art technique Dynodroid and the widely used tool Android Monkey on all three objectives. When applied to the top 1,000 Google Play apps, Sapienz found 558 unique, previously unknown crashes. Octopuz reuses the Sapienz multi-objective search approach for automated JavaScript testing, aiming to investigate whether it replicates Sapienz's success on JavaScript testing. Experimental results on 10 real-world JavaScript apps provide evidence that Octopuz significantly outperforms the state of the art (and current state of practice) in automated JavaScript testing. Polariz is an approach that combines human (crowd) intelligence with machine (computational search) intelligence for mobile testing. It uses a platform that enables crowdsourced mobile testing from any source of app, via any terminal client, and by any crowd of workers. It generates replicable test scripts based on manual test traces produced by the crowd workforce, and automatically extracts from these test traces motif events that can be used to improve search-based mobile testing approaches such as Sapienz.
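
    A minimal sketch of the Pareto-dominance comparison that multi-objective search approaches such as Sapienz rely on, with toy numbers for the three objectives (minimise length, maximise coverage and fault revelation):

        # Sketch: Pareto dominance over (length, coverage, crashes) for
        # candidate test suites; the suite data below is illustrative.
        def dominates(a, b):
            """True if a is no worse than b on every objective and strictly
            better on at least one (lower length is better)."""
            better_or_equal = (a["length"] <= b["length"] and
                               a["coverage"] >= b["coverage"] and
                               a["crashes"] >= b["crashes"])
            strictly_better = (a["length"] < b["length"] or
                               a["coverage"] > b["coverage"] or
                               a["crashes"] > b["crashes"])
            return better_or_equal and strictly_better

        suites = [
            {"length": 40, "coverage": 0.61, "crashes": 2},
            {"length": 25, "coverage": 0.58, "crashes": 2},
            {"length": 60, "coverage": 0.60, "crashes": 1},
        ]
        # Keep only non-dominated suites (the Pareto front).
        front = [s for s in suites if not any(dominates(o, s) for o in suites)]
        print(front)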

    Systematic construction of goal-oriented COTS taxonomies

    The process of building software systems by assembling and integrating pre-packaged solutions in the form of Commercial-Off-The-Shelf (COTS) software components has become a strategic need in a wide variety of application areas. In general, COTS components are software components that provide a specific functionality and are available in the market to be purchased, interfaced and integrated into other software systems. The potential benefits of this technology are mainly reduced costs and shorter development time, while maintaining quality. Nevertheless, many challenges ranging from technical to legal issues must be faced to adapt the traditional software engineering activities in order to exploit these benefits. Nowadays there is an increasingly large marketplace of COTS components; therefore, one of the most critical activities in COTS-based development is the selection of the components to be integrated into the system under development. Selection is basically composed of two main processes, namely searching for candidates in the marketplace and evaluating them with respect to the system requirements. Unfortunately, most of the existing methods for COTS selection focus their efforts on evaluation, leaving aside the problem of searching for components in the marketplace. Searching for candidate COTS components is not an easy task: it has to cope with some challenging marketplace characteristics related to its widespread, evolving and growing nature, and with the lack of available, well-suited information needed to obtain a quality-assured search. Moreover, traditional reuse approaches lack appropriate solutions for reusing COTS components and the knowledge gained in each selection process. This lack of proposals is a serious drawback that makes the whole selection process highly risky, and often expensive and inefficient. This dissertation introduces the GOThIC (Goal-Oriented Taxonomy and reuse Infrastructure Construction) method, aimed at building a domain reuse infrastructure that facilitates the searching and reuse of COTS components. It is based on goal-oriented approaches for building abstract, well-founded and stable taxonomies capable of dealing with the COTS marketplace characteristics. Thus, the nodes of these taxonomies are characterized by means of goals, their relationships are declared as dependencies among them, and several artifacts are constructed and managed for reusability and evolution purposes. The GOThIC method has been elaborated following an iterative process based on action research premises to identify the actual challenges related to searching for COTS components. Possible solutions were then envisaged and implemented in several industrial and academic case studies in different domains. Successful results were recorded and articulated into the GOThIC method, followed by its preliminary industrial evaluation in some Norwegian companies.
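
    As a hedged illustration of the taxonomy structure described above, a minimal sketch of a goal-characterized node with dependency links; the domain, goals and node names are hypothetical examples, not GOThIC's actual artifacts:

        # Sketch of a goal-oriented taxonomy node in the spirit of GOThIC;
        # the goals and domain below are hypothetical examples.
        from dataclasses import dataclass, field

        @dataclass
        class TaxonomyNode:
            name: str
            goal: str                       # the goal characterizing the node
            children: list["TaxonomyNode"] = field(default_factory=list)
            depends_on: list[str] = field(default_factory=list)

        comms = TaxonomyNode(
            name="Communication Infrastructure",
            goal="Enable message exchange inside the organization",
            children=[
                TaxonomyNode("Mail Servers", "Send and receive e-mail",
                             depends_on=["Anti-virus Tools"]),
                TaxonomyNode("Instant Messaging", "Support real-time chat"),
            ],
        )
        print(comms.children[0].depends_on)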