Recent Findings and Open Issues concerning the Seismic Behaviour of Masonry Infill Walls in RC Buildings
The extent of the damage observed after recent major earthquakes shows that seismic risk mitigation of infilled reinforced concrete structures is a paramount topic in seismically prone regions. In the assessment of existing structures and the design of new ones, most seismic codes treat infill walls as nonstructural elements, and comprehensive provisions for practitioners are generally missing. However, the importance of infills in the seismic behaviour of reinforced concrete structures is now well recognized by the community. Accurate modelling strategies and appropriate seismic assessment methodologies are crucial to understand the behaviour of existing buildings and to develop efficient mitigation measures that prevent high levels of damage, casualties, and economic losses. The development of effective strengthening solutions to improve infill seismic behaviour, and of analytical formulations that could help design engineers, remain open issues on this topic, among others. The main aim of this paper is to provide a state-of-the-art review of the typologies of damage observed in recent earthquakes, discussing their causes and possible solutions. A review of in-plane and out-of-plane testing campaigns on infilled reinforced concrete frames from the literature is then presented, together with their relevant findings. The most common strengthening solutions to improve seismic behaviour are presented, and some examples are discussed. Finally, a brief summary of the modelling strategies available in the literature is presented.
An Efficient MCMC Approach to Energy Function Optimization in Protein Structure Prediction
Protein structure prediction is a critical problem linked to drug design,
mutation detection, and protein synthesis, among other applications. To this
end, evolutionary data has been used to build contact maps which are
traditionally minimized as energy functions via gradient descent based schemes
like the L-BFGS algorithm. In this paper we present what we call the
Alternating Metropolis-Hastings (AMH) algorithm, which (a) significantly
improves the performance of traditional MCMC methods, (b) is inherently
parallelizable, allowing significant hardware acceleration using GPUs, and (c)
can be integrated with the L-BFGS algorithm to improve its performance. The
algorithm improves the energy of found structures by 8.17% to 61.04%
(average 38.9%) over traditional MH and 0.53% to 17.75% (average 8.9%) over
traditional MH with intermittent noisy restarts, tested across 9 proteins from
recent CASP competitions. We go on to map the Alternating MH algorithm to a
GPGPU which improves sampling rate by 277x and improves simulation time to a
low energy protein prediction by 7.5x to 26.5x over CPU. We show that our
approach can be incorporated into state-of-the-art protein prediction pipelines
by applying it to both trRosetta2's energy function and the distogram component
of Alphafold1's energy function. Finally, we note that specially designed
probabilistic computers (or p-computers) can provide even better performance
than GPUs for MCMC algorithms like the one discussed here.
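The abstract does not spell out the AMH update rule, so as background here is a minimal random-walk Metropolis–Hastings energy minimizer that tracks the lowest-energy state seen; the function name, toy quadratic energy, and parameters below are our own illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def metropolis_hastings(energy, x0, n_steps=5000, step=0.2, beta=1.0, seed=0):
    """Random-walk Metropolis sampler that tracks the lowest-energy state seen."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    e = energy(x)
    best_x, best_e = x.copy(), e
    for _ in range(n_steps):
        # Propose a Gaussian perturbation of the current state.
        prop = x + rng.normal(scale=step, size=x.shape)
        e_prop = energy(prop)
        # Accept with probability min(1, exp(-beta * (E_prop - E))).
        if e_prop <= e or rng.random() < np.exp(-beta * (e_prop - e)):
            x, e = prop, e_prop
            if e < best_e:
                best_x, best_e = x.copy(), e
    return best_x, best_e

# Toy usage on a quadratic "energy": the sampler should find states near 0.
best_x, best_e = metropolis_hastings(lambda x: float(np.sum(x ** 2)), [3.0, 3.0])
```

The paper's contribution lies in how such chains are alternated, restarted, and parallelized on GPUs, which this single-chain sketch does not attempt to reproduce.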
Developing printable thermoelectric materials based on graphene nanoplatelet/ethyl cellulose nanocomposites
Thermoelectric (TE) materials have drawn a lot of attention as a promising technology to harvest waste heat and convert it into electrical energy. However, the toxicity and expense of inorganic TE materials, along with their high-temperature fabrication processes, have limited their application. The depletion of raw material resources, such as metals and petroleum, is another limiting factor. Hence, developing low-cost, stable, and easily processed TE materials from renewable resources is attracting growing interest for a wide range of applications, including the Internet of Things and self-powered sensors. Herein, an efficacious processing strategy to fabricate printable TE materials has been developed, with ethyl cellulose (EC), a non-conducting polymer, as the matrix and graphene nanoplatelets (GNPs) as fillers. EC, a cellulose derivative, has been widely used as a binder in printing pastes. Conductive pastes with GNP-to-EC weight ratios ranging from 0.2 to 0.7 were fabricated and deposited by blade coating on glass substrates. The electrical conductivity of the composites increased polynomially with filler content, whereas the Seebeck coefficient did not change significantly. The highest room-temperature electrical conductivity (355.4 S m−1) was obtained for the ratio of 0.7, which also gave the maximum power factor. Moreover, a 3D structure (a cylindrical pellet) was fabricated from the most conductive paste. The proposed technique demonstrates an industrially feasible approach to fabricating different geometries and structures for organic TE modules, and could thus serve as a good reference for the production of high-efficiency, low-temperature, lightweight, low-cost TE materials.
A convergent decomposition algorithm for support vector machines
In this work we consider nonlinear minimization problems with a single linear equality constraint and box constraints. In particular, we are interested in solving problems where the number of variables is so large that traditional optimization methods cannot be directly applied. Many interesting real-world problems lead to large-scale constrained problems with this structure. For example, the special subclass of problems with a convex quadratic objective function plays a fundamental role in the training of Support Vector Machines, a machine-learning technique. For this subclass, some convergent decomposition methods, based on the solution of a sequence of smaller subproblems, have been proposed. In this paper we define a new globally convergent decomposition algorithm that differs from previous methods in the rule for choosing the subproblem variables and in the presence of a proximal point modification in the objective function of the subproblems. In particular, the new rule for sequentially selecting the subproblems appears to be well suited to tackling large-scale problems, while the introduction of the proximal point term allows us to ensure the global convergence of the algorithm in the general case of a nonconvex objective function. Furthermore, we report some preliminary numerical results on support vector classification problems with up to 100,000 variables.
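As an illustration of the decomposition idea (not the authors' specific selection rule), here is a sketch of a classical most-violating-pair scheme for the linear SVM dual, where each iteration solves a two-variable subproblem analytically; the `tau` parameter marks where a proximal term would enter the subproblem curvature. All names and the toy data are our own assumptions:

```python
import numpy as np

def train_svm_dual(X, y, C=1.0, tol=1e-5, max_iter=1000, tau=0.0):
    """Most-violating-pair decomposition for the linear SVM dual (a sketch).

    Minimizes 0.5*a'Qa - sum(a) s.t. y'a = 0, 0 <= a <= C, with
    Q[i, j] = y_i*y_j*<x_i, x_j>.  `tau` adds a proximal term
    tau/2*||a - a_k||^2 to each subproblem, shifting its curvature.
    """
    n = len(y)
    Z = y[:, None] * X
    Q = Z @ Z.T
    a = np.zeros(n)
    g = -np.ones(n)                      # gradient of the dual at a = 0
    for _ in range(max_iter):
        b = -y * g                       # KKT scores
        up = ((y > 0) & (a < C)) | ((y < 0) & (a > 0))
        low = ((y > 0) & (a > 0)) | ((y < 0) & (a < C))
        i = np.flatnonzero(up)[np.argmax(b[up])]
        j = np.flatnonzero(low)[np.argmin(b[low])]
        if b[i] - b[j] < tol:            # stopping rule: KKT satisfied
            break
        quad = Q[i, i] + Q[j, j] - 2.0 * y[i] * y[j] * Q[i, j] + tau
        t = (b[i] - b[j]) / max(quad, 1e-12)
        # Clip the step so both updated multipliers stay inside [0, C].
        for k, coef in ((i, y[i]), (j, -y[j])):
            lo_t, hi_t = (-a[k], C - a[k]) if coef > 0 else (a[k] - C, a[k])
            t = min(max(t, lo_t), hi_t)
        a[i] += y[i] * t
        a[j] -= y[j] * t
        g += t * (y[i] * Q[:, i] - y[j] * Q[:, j])
    w = X.T @ (a * y)
    return a, w

# Toy separable data: the learned hyperplane should classify all points.
X = np.array([[2.0, 2.0], [2.0, 3.0], [-2.0, -2.0], [-3.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
a, w = train_svm_dual(X, y, C=1.0)
```

The paper's method differs from this textbook scheme precisely in its subproblem-selection rule and in making the proximal term integral to the convergence proof for nonconvex objectives.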
Development of a novel CO2-splitting fixed-bed reactor based on copper-doped cerium oxide
Global warming has received widespread attention in recent years due to the accumulation of carbon dioxide. Given the current energy landscape, new technologies must be developed to reduce CO2 emissions. The present work aims to develop and test a new prototype of an innovative reactor for the conversion of CO2 into CO, operating according to a two-phase thermochemical cycle. The main innovative aspect of this study is the use of the reactor coupled with a new type of catalyst, a copper-doped cerium oxide (Cuδ+2Ce(1-δ)O2), which allowed the reaction temperature to be lowered to 850 °C, far below the 1300–1400 °C (and beyond) of models reported in the literature.
How Companies Restrain Means–Ends Decoupling: A Comparative Case Study of CSR Implementation
We use the concept of means–ends decoupling to examine why companies continue to be major contributors to environmental and social problems despite committing increasingly to corporate social responsibility (CSR). Specifically, we ask: How do companies restrain (versus fail to restrain) means–ends decoupling? We answer this question through a comparative case study of four multinational companies with different levels of means–ends decoupling. Based on interviews and secondary data, we inductively identify two distinct approaches to CSR implementation: experimental versus consistency-oriented CSR implementation. Experimental CSR implementation means that companies (1) produce CSR knowledge about what is happening in specific CSR contexts and use this knowledge to (2) adapt CSR practices to local circumstances – an interplay that restrains means–ends decoupling. Consistency-oriented CSR implementation lacks this interplay between knowledge production and practice adaptation, which fosters means–ends decoupling. Our model of experimental versus consistency-oriented CSR implementation advances two streams of research. First, we advance research on means–ends decoupling by highlighting the importance of experimentation for restraining means–ends decoupling. Second, we advance research on the impact of CSR activities by questioning the widespread assumption that consistency should be at the heart of CSR implementation.
An Investigation of Clustering Algorithms in the Identification of Similar Web Pages
In this paper we investigate the effect of using clustering algorithms in the reverse engineering field to identify pages that are similar either at the structural level or at the content level. To this end, we have used two instances of a general process that differ only in the measure used to compare web pages: pages are compared at the structural level using the Levenshtein edit distance and at the content level using Latent Semantic Indexing. The static pages of two web applications and one static web site have been used to compare the results achieved by the considered clustering algorithms at both the structural and the content level. On these applications we generally achieved comparable results. However, the investigation has also suggested some heuristics to quickly identify the best partition of web pages into clusters among the possible partitions, both at the structural and at the content level.
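The structural comparison rests on the Levenshtein edit distance between page representations; a standard two-row dynamic-programming implementation (our own sketch, not the authors' code) is:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions
    turning `a` into `b`, via O(len(a)*len(b)) dynamic programming."""
    prev = list(range(len(b) + 1))        # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        curr = [i]                        # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]
```

For clustering, such a distance is typically normalized (e.g. divided by the longer sequence's length) so that pages of different sizes remain comparable, though the exact normalization used in the study is not stated in the abstract.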
Gaussian Approximation Potentials: the accuracy of quantum mechanics, without the electrons
We introduce a class of interatomic potential models that can be
automatically generated from data consisting of the energies and forces
experienced by atoms, derived from quantum mechanical calculations. The
resulting model does not have a fixed functional form and hence is capable of
modeling complex potential energy landscapes. It is systematically improvable
with more data. We apply the method to bulk carbon, silicon and germanium and
test it by calculating properties of the crystals at high temperatures. Using
the interatomic potential to generate the long molecular dynamics trajectories
required for such calculations saves orders of magnitude in computational cost.