Meta-model Pruning
Large and complex meta-models such as those of UML and its profiles are growing due to the modelling and interoperability needs of numerous stakeholders. The complexity of such meta-models has led to the coining of the term meta-muddle. Individual users often exercise only a small view of a meta-muddle for tasks ranging from model creation to the construction of model transformations. What is the effective meta-model that represents this view? We present a flexible meta-model pruning algorithm and tool to extract effective meta-models from a meta-muddle. We use the notion of model typing for meta-models to verify that the algorithm generates a super-type of the large meta-model representing the meta-muddle. This implies that all programs written using the effective meta-model will also work for the meta-muddle, hence preserving backward compatibility. All instances of the effective meta-model are also instances of the meta-muddle. We illustrate how pruning the original UML meta-model produces different effective meta-models.
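The pruning algorithm itself is not given in this abstract; the following is a minimal sketch of the general idea under assumed names (a hypothetical MetaClass structure, not the authors' tool): starting from the classes a user actually needs, keep everything they transitively depend on through inheritance and mandatory references, so every instance of the pruned meta-model remains an instance of the original.

# Minimal sketch of meta-model pruning (hypothetical data model, not the
# authors' tool): keep the requested classes plus everything they depend on.
class MetaClass:
    def __init__(self, name, supertypes=(), mandatory_refs=()):
        self.name = name
        self.supertypes = list(supertypes)          # inheritance dependencies
        self.mandatory_refs = list(mandatory_refs)  # references with lower bound >= 1

def prune(metamodel, required_names):
    """Return the classes reachable from required_names via supertypes
    and mandatory references (a transitive closure)."""
    by_name = {c.name: c for c in metamodel}
    keep, worklist = set(), list(required_names)
    while worklist:
        name = worklist.pop()
        if name in keep:
            continue
        keep.add(name)
        worklist.extend(by_name[name].supertypes)
        worklist.extend(by_name[name].mandatory_refs)
    return [c for c in metamodel if c.name in keep]

# Example: pruning a toy meta-model down to what "StateMachine" needs.
mm = [MetaClass("Element"),
      MetaClass("NamedElement", supertypes=["Element"]),
      MetaClass("State", supertypes=["NamedElement"]),
      MetaClass("StateMachine", supertypes=["NamedElement"], mandatory_refs=["State"]),
      MetaClass("Activity", supertypes=["NamedElement"])]  # pruned away
print([c.name for c in prune(mm, ["StateMachine"])])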
On the Effectiveness of Unit Tests in Test-driven Development
Background: Writing unit tests is one of the primary activities in test-driven development. Yet, existing reviews report little evidence supporting or refuting the effect of this development approach on test case quality. A lack of developer ability and skill in producing sufficiently good test cases is also reported as a limitation of applying test-driven development in industrial practice.
Objective: We investigate the impact of test-driven development on the effectiveness of unit test cases compared to incremental test-last development in an industrial context.
Method: We conducted an experiment in an industrial setting
with 24 professionals. Professionals followed the two development
approaches to implement the tasks. We measure unit test effectiveness
in terms of mutation score. We also measure branch and
method coverage of test suites to compare our results with the
literature.
Results: In terms of mutation score, we found that test cases written for the test-driven development task have a higher defect detection ability than test cases written for the incremental test-last development task. Subjects also wrote test cases that cover more branches in the test-driven development task than in the other task. However, test cases written for the incremental test-last development task cover more methods than those written for the test-driven development task.
Conclusion: Our findings differ from those of previous studies conducted in academic settings. Professionals were able to perform more effective unit testing with test-driven development. Furthermore, we observe that the coverage measures preferred in academic studies reveal different aspects of a development approach. Our results need to be validated in larger industrial contexts.
Funding: Istanbul Technical University Scientific Research Projects (MGA-2017-40712) and the Academy of Finland (Decision No. 278354).
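As a concrete reading of the effectiveness measure used above: mutation score is the fraction of artificially seeded faults (mutants) that a test suite detects. A minimal illustration (not the authors' tooling):

# Mutation score: fraction of seeded faults ("mutants") the suite kills.
def mutation_score(killed_mutants, total_mutants):
    return killed_mutants / total_mutants

# e.g., a suite that kills 42 of 60 generated mutants:
print(f"mutation score = {mutation_score(42, 60):.2f}")  # 0.70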
Analytical expressions for the conductance noise measured with four circular contacts placed in a square array
In the ideal case, noise measurements with four contacts minimize the contribution of the contact interface. There is a need to characterize conductance noise and noise correction factors for the different geometries provided with four contacts, as is already the case for resistivity measurements with van der Pauw structures. Here, we calculate the noise correction factors for two geometries with a pair of sensors and a pair of current-driver electrodes placed in a square array. The first geometry investigated is a very large film compared to the distance L between four circular electrodes, which are placed in a square array far away from the borders of the film. The second is a square-shaped conductive film with side length L, provided with four quarter-circle corner contacts with radius l. The effect of the conductance noise in the film can be observed between current-free sensors in a four-point measurement or between current-carrying drivers in a two-point measurement. Our analytical expressions are based on approximations to solve the integrals $\int (\mathbf{J}\cdot\tilde{\mathbf{J}})^2\,dA$ and $\int |\mathbf{J}|^4\,dA$ for the voltage noise measured across a pair of sensors, $S_{VQ}$, and across the drivers, $S_{VD}$, respectively. The first and second integrands represent the squared dot product of the current density and the adjoint current density, and the modulus of the current density to the fourth power, respectively. The current density $\mathbf{J}$ in the samples is due to the current I passing through the driver contacts. The calculated expressions are applicable to samples with thickness $t \ll l \le 0.1L$. Hence, the disturbances in the neighborhood of the sensors on $\mathbf{J}$ and of the drivers on $\tilde{\mathbf{J}}$ are ignored. Noise correction factors for two- and four-point measurements are calculated for sensors on an equipotential (transversal noise), with the driver contacts on the diagonal of a square, and for sensors next to each other on one side of the square with the drivers next to each other on the other side of the square (longitudinal noise). In all cases the noise between the sensors is smaller and less sensitive to the contact size $2l/L$ than the noise between the drivers. The ratio $S_{VQ}/S_{VD}$ becomes smaller with smaller contact radius l. Smaller sensors give a better suppression of interface noise at the contacts. But overly low $2l/L$ values result in overly high resistance between the sensors and too strong a contribution of thermal noise at the sensors. Therefore, equations are derived to calculate the current level needed to observe 1/f conductance fluctuations on top of the thermal noise. The results from the calculated analytical expressions show good agreement with experimental results obtained from the noise in carbon sheet resistances and with numerical results. Transversal noise measurements on a square sample with corner contacts are recommended to characterize the 1/f noise of the layer. This is due to the increased current densities in the sample compared to the open structure, which result in easier detection of the 1/f noise on top of the thermal noise.
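Written out compactly, with normalization prefactors omitted and $\tilde{\mathbf{J}}$ the adjoint current density associated with the sensor pair, the two noise quantities above scale as

% sensor-pair (four-point) vs. driver-pair (two-point) voltage noise
\begin{align}
  S_{VQ} &\propto \int \left(\mathbf{J}\cdot\tilde{\mathbf{J}}\right)^{2}\,dA,\\
  S_{VD} &\propto \int \lvert\mathbf{J}\rvert^{4}\,dA.
\end{align}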
Hydrophilicity of graphene in water through transparency to polar and dispersive interactions
Supramolecular & Biomaterials Chemistry
A Study and Toolkit for Asynchronous Programming in C#
Asynchronous programming is in demand today, because responsiveness is increasingly important on all modern devices. Yet, we know little about how developers use asynchronous programming in practice. Without such knowledge, developers, researchers, language and library designers, and tool vendors can make wrong assumptions.
We present the first study that analyzes the usage of asynchronous programming in a large experiment. We analyzed 1378 open source Windows Phone (WP) apps, comprising 12M SLOC, produced by 3376 developers. Using this data, we answer two research questions about the use and misuse of asynchronous constructs. Inspired by these findings, we developed (i) Asyncifier, an automated refactoring tool that converts callback-based asynchronous code to the new async/await idiom, and (ii) Corrector, a tool that finds and corrects common misuses of async/await. Our empirical evaluation shows that these tools are (i) applicable and (ii) efficient. Developers accepted 314 patches generated by our tools.
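The paper's tools operate on C#; as a language-neutral illustration of the refactoring that Asyncifier automates, here is the same callback-to-async/await pattern sketched in Python (hypothetical fetch functions, not the paper's code):

import asyncio

# Before: callback style, where the continuation is passed in and invoked
# when the operation completes (simplified: invoked synchronously here).
def fetch_cb(url, on_done):
    on_done(f"contents of {url}")

# After: the same operation as an awaitable coroutine, giving linear control
# flow and ordinary exception handling.
async def fetch(url):
    await asyncio.sleep(0.1)  # stand-in for real asynchronous I/O
    return f"contents of {url}"

async def main():
    data = await fetch("http://example.com")
    print(data)

asyncio.run(main())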
Flame bands: CO + O chemiluminescence as a measure of gas temperature
Carbon monoxide flame band emission (CO + O → CO2 + hν) in a CO2 microwave plasma is quantified by obtaining absolutely calibrated emission spectra at various locations in the plasma afterglow while simultaneously measuring gas temperatures using rotational Raman scattering. Comparison of our results to the literature reveals a contribution of O2 Schumann-Runge UV emission at T > 1500 K. This UV component likely results from the collisional exchange of energy between CO2(1B) and O2. Limiting further analysis to T < 1500 K, we demonstrate the utility of CO flame band emission by analyzing afterglows at different plasma conditions. We show that the highest energy efficiency for CO production coincides with an operating condition where very little heat has been lost to the environment prior to ∼3 cm downstream, while simultaneously, T ends up below the level required to effectively freeze in the CO. This observation demonstrates that, in CO2 plasma conversion, optimizing for energy efficiency does not require a sophisticated downstream cooling method.
Optimizing the reliability and resource efficiency of MapReduce-based systems
Due to the large increase in digital data in recent years, a new parallel computing paradigm has arisen to process big data efficiently. Many of the systems based on this paradigm, also called data-intensive computing systems, follow Google's MapReduce programming model. The main advantage of MapReduce systems is the idea of sending the computation to where the data resides, aiming to provide scalability and efficiency. In failure-free scenarios, these frameworks usually achieve good results. However, most of the scenarios in which they are used are characterized by the presence of failures, so these platforms typically incorporate fault tolerance and dependability techniques as built-in features. On the other hand, dependability improvements are known to imply additional resource costs. This is reasonable, and providers offering these infrastructures are aware of it. Nevertheless, not all approaches provide the same trade-off between fault tolerance capabilities (or, more generally, reliability capabilities) and cost. This thesis addresses the coexistence of reliability and resource efficiency in MapReduce-based systems, through methodologies that introduce minimal cost while guaranteeing an appropriate level of reliability. To achieve this, we have proposed: (i) a formalization of a failure detector abstraction; (ii) an alternative solution to the single points of failure of these frameworks; and (iii) a novel feedback-based resource allocation system at the container level. These generic contributions have been instantiated for the Hadoop YARN architecture, which is nowadays the reference framework in the data-intensive computing systems community. The thesis demonstrates how all of our approaches outperform Hadoop YARN in terms of both reliability and resource efficiency.
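The thesis's feedback-based, container-level allocator is only named in the abstract; the following is a minimal sketch of the general idea under assumed names (a proportional controller nudging each container's memory toward a target utilization), not the actual design:

# Sketch of feedback-based container allocation (assumed names/parameters):
# a proportional controller steers allocation toward a target utilization.
TARGET_UTIL = 0.75   # desired fraction of allocated memory actually used
GAIN = 0.5           # how aggressively to react to the utilization error
MIN_MB, MAX_MB = 512, 8192

def next_allocation(current_mb, used_mb):
    """Return the new memory allocation (MB) for one container."""
    error = used_mb / current_mb - TARGET_UTIL   # > 0 means under-provisioned
    new_mb = current_mb * (1 + GAIN * error)
    return max(MIN_MB, min(MAX_MB, round(new_mb)))

# One feedback step for three containers, as (allocated MB, observed-used MB):
for alloc, used in [(2048, 1900), (4096, 1024), (1024, 760)]:
    print(alloc, "->", next_allocation(alloc, used))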
Frontal plane movement of the pelvis and thorax during dynamic activities in individuals with and without anterior cruciate ligament injury
Background: For elite athletes with anterior cruciate ligament (ACL) reconstruction, reducing pelvis and trunk obliquities is a common goal of rehabilitation. It is not known whether this is also a suitable goal for the general population. This study aimed to quantify pelvis and thorax obliquities during dynamic activities in individuals from the general population with and without a history of ACL injury.
Methods: Retrospective analysis of cross-sectional data from 30 participants with ACL reconstruction, 28 participants with ACL deficiency (ACLD), and 32 controls who performed overground walking and jogging, a single-leg squat, and a single-leg hop for distance. Pelvis and thorax obliquities were quantified in each activity and compared across groups using one-way ANOVA. Coordination was quantified using cross-covariance.
Results: In the stance phase of walking and jogging, pelvis and thorax obliquities were within ±10° of neutral, and there was a negative correlation between the two segments at close to zero phase lag. In the single-leg squat and hop, the range of obliquities varied across individuals and there was no consistent pattern of coordination. Eight ACLD participants felt unable to perform the single-leg hop. In the remaining participants, the range of pelvis (p = 0.04) and thorax (p = 0.02) obliquities was smaller in ACLD than in controls.
Conclusions: In challenging single-leg activities, minimal frontal plane motion was not the typical movement pattern observed in the general population. Coordination between the pelvis and thorax was inconsistent within and across individuals. Care should be taken when considering minimising pelvis and thorax obliquities in patients with ACL injury.
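Cross-covariance as a function of lag is what yields the "negative correlation at close to zero phase lag" reported above; a minimal sketch with synthetic signals (assumed names, not the study's code):

import numpy as np

def cross_cov(p, t, lag):
    """Cross-covariance of two signals at an integer sample lag."""
    p, t = p - p.mean(), t - t.mean()
    if lag < 0:
        return cross_cov(t, p, -lag)
    return float(np.mean(p[lag:] * t[:len(t) - lag]))

# Synthetic gait-like example: thorax obliquity opposes pelvis obliquity
# with no phase shift, mimicking the coupling reported for walking/jogging.
time = np.linspace(0, 1, 200)
pelvis = 5 * np.sin(2 * np.pi * time)    # degrees of obliquity
thorax = -3 * np.sin(2 * np.pi * time)

best = max(((k, cross_cov(pelvis, thorax, k)) for k in range(-20, 21)),
           key=lambda x: abs(x[1]))
print(best)  # lag near 0 with a negative covariance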
Fermi Surface Properties of Low Concentration Ce$_x$La$_{1-x}$B$_6$: dHvA
The de Haas-van Alphen effect is used to study angular-dependent extremal areas of the Fermi surfaces (FS) and effective masses of Ce$_x$La$_{1-x}$B$_6$ alloys for $x$ between 0 and 0.05. The FS of these alloys was previously observed to be spin polarized at low Ce concentration ($x = 0.05$). This work gives the details of the initial development of the topology and spin polarization of the FS from that of unpolarized metallic LaB$_6$ to that of spin-polarized heavy-fermion CeB$_6$.
Comment: 7 pages, 9 figures, submitted to PR
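The dHvA frequencies measured in such an experiment map onto the extremal Fermi-surface areas through the standard Onsager relation (textbook background, not a result of this paper):

% Onsager relation: dHvA frequency F vs. extremal cross-sectional area A_ext
\begin{equation}
  F = \frac{\hbar}{2\pi e}\,A_{\mathrm{ext}}
\end{equation}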