
    Finite element simulation of the airbag deployment in frontal impacts

    Virtual modeling and simulation are increasingly used to help develop restraint systems, and airbag simulation is a necessary step in the airbag research and design process. In this work, the squeezed airbag was simulated with a uniform pressure method, in which the pressure is treated as uniform throughout the airbag. The main aim of this study is to evaluate the deployment performance of a passenger-side airbag under different collision scenarios using the finite element method (FEM).
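    The uniform pressure idea can be illustrated with a toy calculation (not the authors' model): the deploying bag is treated as a single control volume, so one ideal-gas pressure value, computed from the injected gas mass and the current bag volume, acts everywhere on the fabric. All parameter values below are invented for illustration.

```python
# Toy illustration of the uniform-pressure assumption (not the authors' model):
# at each time step the bag is a single control volume, so one pressure value
# p = m*R*T/V applies everywhere on the fabric.
R_SPECIFIC = 287.0   # J/(kg*K), specific gas constant for air (assumed inflator gas)
T_GAS = 600.0        # K, assumed constant gas temperature
MASS_FLOW = 0.5      # kg/s, assumed constant inflator mass flow
DT = 0.001           # s, time step

def bag_volume(t):
    """Hypothetical prescribed volume history of the deploying bag (m^3)."""
    return 0.005 + 0.12 * min(t / 0.03, 1.0)   # ramps from folded to fully deployed

mass, time = 0.0, 0.0
for step in range(40):
    mass += MASS_FLOW * DT                                    # gas injected so far
    pressure = mass * R_SPECIFIC * T_GAS / bag_volume(time)   # ideal gas, spatially uniform
    time += DT
print(f"pressure after {time * 1000:.0f} ms: {pressure / 1000:.1f} kPa")
```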

    Life-Cycling of Cancer: New Concept


    Development and validation of a food photography manual, as a tool for estimation of food portion size in epidemiological dietary surveys in Tunisia

    Background: Estimation of food portion sizes has always been a challenge in dietary studies on free-living individuals. The aim of this work was to develop and validate a food photography manual to improve the accuracy of the estimated size of consumed food portions. Methods: A manual was compiled from digital photos of foods commonly consumed by the Tunisian population. The food was cooked and weighed before taking digital photographs of three portion sizes. The manual was validated by comparing the 24-hour recall method (using photos) to the reference method [food weighing (FW)]. In both methods, the comparison covered food intake amounts as well as nutrient intakes. Validity was assessed by Bland-Altman limits of agreement. In total, 31 male and female volunteers aged 9-89 participated in the study. Results: We focused on eight food categories and compared their estimated amounts (using the 24-hour recall method) to those actually consumed (using FW). Animal products and sweets were underestimated, whereas pasta, bread, vegetables, fruits, and dairy products were overestimated. However, the difference between the two methods was not statistically significant except for pasta (p<0.05) and dairy products (p<0.05). The coefficient of correlation between the two methods was highly significant, ranging from 0.876 for pasta to 0.989 for dairy products. Nutrient intakes calculated with both methods showed insignificant differences except for fat (p<0.001) and dietary fiber (p<0.05). A highly significant correlation was observed between the two methods for all micronutrients. The agreement test highlights the lack of difference between the two methods. Conclusion: The difference between the 24-hour recall method using digital photos and the weighing method is acceptable. Our findings indicate that the food photography manual can be a useful tool for quantifying food portion sizes in epidemiological dietary surveys. Keywords: food portion sizes; Tunisia; weighed foods; 24-hour recall; portion size photographs; portion size estimation
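    As a rough sketch of the Bland-Altman validation referred to above: the limits of agreement are the mean difference between the two methods plus or minus 1.96 standard deviations of the differences. The paired values below are invented, not data from the study.

```python
import numpy as np

# Hypothetical paired portion-size estimates (grams) for one food category:
# photo-assisted 24-hour recall versus the weighed-food (FW) reference.
recall  = np.array([120, 95, 210, 150, 80, 175, 130, 160])
weighed = np.array([110, 100, 230, 140, 85, 190, 120, 170])

diff = recall - weighed
bias = diff.mean()                                   # mean difference between methods
loa_low = bias - 1.96 * diff.std(ddof=1)             # lower 95% limit of agreement
loa_high = bias + 1.96 * diff.std(ddof=1)            # upper 95% limit of agreement
r = np.corrcoef(recall, weighed)[0, 1]               # correlation reported alongside agreement

print(f"bias = {bias:.1f} g, limits of agreement = [{loa_low:.1f}, {loa_high:.1f}] g, r = {r:.3f}")
```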

    A comparison of the public's use of PPE and strategies to avoid contagion during the COVID-19 pandemic in Australia and Germany

    The SARS-CoV-2 or COVID-19 pandemic has raised public awareness around disease protection. The aims of this study were to recruit participants from Australia and Germany and to determine their use of personal protective equipment (PPE) and COVID-19 avoidance strategies using scales designed for this study. Principal components analysis with the Australian data revealed two factors in the Protection from Infection Scale, Self-Care and Protective Behaviors, and a single factor in the Infection Avoidance Scale, with each scale demonstrating strong internal reliability. Data from German participants were used to confirm the scales' structure using confirmatory factor analysis. A comparison of the two data sets revealed that Australian participants scored higher overall on protection and avoidance strategies, but at the item level there were several commonalities, including self-care behaviors people adopted to avoid contracting COVID-19. With no foreseeable end to this pandemic, it is important that follow-up studies ascertain whether the public continues to adopt high levels of PPE use and follows government advice, or whether pandemic fatigue sets in. © 2021 John Wiley & Sons Australia, Ltd.
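    A minimal sketch of the kind of principal components analysis used to extract factors from Likert-type survey items is given below; the items, responses, and retained structure are hypothetical stand-ins, not the study's data.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical Likert-type responses (1-5) from 200 respondents to 6 protection items;
# the study's actual items and factor structure are not reproduced here.
responses = rng.integers(1, 6, size=(200, 6)).astype(float)

pca = PCA()
pca.fit(responses - responses.mean(axis=0))   # PCA on mean-centred item scores
explained = pca.explained_variance_ratio_

# Components explaining the most variance are retained and interpreted as factors
# (the paper retained two, labelled "Self-Care" and "Protective Behaviors").
print("variance explained per component:", np.round(explained, 2))
print("loadings of the first two components:\n", np.round(pca.components_[:2], 2))
```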

    Residues of β-lactam and tetracycline antibiotics in cow milk in the Constantine region, Algeria

    The aim of the present study was to investigate β-lactam and tetracycline antibiotic residues in cow milk samples. A total of 122 samples of cow milk were collected from raw milk collectors (109 samples) and from a vendor of reconstituted pasteurized milk (13 samples) in the Constantine region, Algeria, and examined using the BetaStar Combo screening kit (Neogen, USA). The results indicate that 13 samples (10.66%) were positive for antibiotic residues: 12 (9.84%) for β-lactams (ten (8.20%) raw and two (1.64%) pasteurized milk samples) and only one (0.82%), a raw milk sample, for tetracyclines. It is evident that the Algerian consumer is not sheltered from the danger of antibiotic residues in milk; these inhibitor residues should be a constant concern for the dairy industry in Algeria, and a control programme should be established.
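    The reported prevalences follow directly from the sample counts over the 122 analysed samples, as the quick check below shows.

```python
# Reproducing the reported prevalence percentages from the sample counts.
total = 122
counts = {
    "any antibiotic residue": 13,
    "beta-lactams (all)": 12,
    "beta-lactams, raw milk": 10,
    "beta-lactams, pasteurized milk": 2,
    "tetracyclines, raw milk": 1,
}
for label, n in counts.items():
    print(f"{label}: {n}/{total} = {100 * n / total:.2f}%")
```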

    Dependency analysis of object-oriented programs using probabilistic input models

    The task of maintaining and understanding object-oriented (OO) programs is becoming increasingly costly. Dependency analysis can be a solution to facilitate these engineering tasks; however, it is a task that is both important and difficult. We propose a framework for studying program-internal dependencies in a probabilistic setting, where the program inputs are modeled either as a random vector or as a Markov chain. In that setting, coupling metrics become random variables whose probability distributions can be studied via Monte Carlo simulation. The obtained distributions provide an entry point for understanding the internal dependencies among program elements, as well as their general behaviour. This framework is appropriate for the (common) situation where the value taken by the metric depends on the program inputs and where those inputs are not fixed a priori. We provide a concrete illustration with two case studies.
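    A minimal sketch of the general idea, using invented stand-ins for the program and the metric: inputs are drawn from a random vector, the program is executed, a coupling-like value is recorded per run, and Monte Carlo sampling yields the metric's distribution.

```python
import random
from collections import Counter

def toy_program(x, y):
    """Toy stand-in for an OO program: returns the class pairs that interacted on this run."""
    calls = [("Parser", "Validator")]
    if x > 0.5:
        calls.append(("Validator", "Logger"))
    if y > 0.8:
        calls.append(("Parser", "Cache"))
    return calls

def coupling_metric(calls):
    """Toy dynamic coupling value: number of distinct interacting class pairs."""
    return len(set(calls))

# Inputs modelled as a random vector (two independent uniforms here);
# Monte Carlo simulation yields the metric's probability distribution.
random.seed(1)
samples = [coupling_metric(toy_program(random.random(), random.random()))
           for _ in range(10_000)]
print(Counter(samples))   # how often the coupling value 1, 2 or 3 occurs
```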

    Diversified query expansion

    Search Result Diversification (SRD) aims to select diverse documents from the search results in order to cover as many search intents as possible. Existing approaches presuppose that the initial retrieval results already contain diverse documents and ensure a good coverage of the query aspects; in practice, the initial results often fail to cover some aspects. In this thesis, we investigate a new approach to SRD that diversifies the query itself, namely diversified query expansion (DQE). Expansion terms are selected either from a single resource or from multiple resources following the Maximal Marginal Relevance (MMR) principle. In the first contribution, we propose a term-level DQE method in which word similarity is determined at the surface (term) level based on the resources. When several resources are used for DQE, existing work combines them uniformly, ignoring the differences in their contributions; in practice, the usefulness of a resource greatly changes depending on the query. In the second contribution, we propose a new method of query-level resource weighting for DQE. Our method is based on a set of features integrated into a linear regression model, and it generates from each resource a number of expansion candidates proportional to the weight of that resource. Existing DQE methods focus on removing redundancy among the selected expansion terms, and little attention has been paid to how well the selected terms actually cover the query aspects or to the semantic relations between terms. To overcome this drawback, the third contribution of this thesis introduces a novel method for aspect-level DQE that relies on an explicit modeling of query aspects based on embeddings. Our method (called latent semantic aspect embedding) is trained in a supervised manner according to the principle that related terms should correspond to the same aspects. This allows us to select expansion terms at a latent semantic level in order to cover as many aspects of a given query as possible. In addition, the method can incorporate several external resources to suggest potential expansion terms, and it supports several constraints, such as the sparsity constraint. We evaluate our methods on the ClueWeb09B dataset and three query sets from the TREC Web track, and we show the usefulness of our approaches compared to state-of-the-art methods.
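    A minimal sketch of Maximal Marginal Relevance selection of expansion terms is given below, with invented relevance and term-term similarity scores; it illustrates the greedy relevance-versus-redundancy trade-off, not the thesis's embedding-based method.

```python
# Toy Maximal Marginal Relevance (MMR) selection of expansion terms:
# greedily pick terms that are relevant to the query but dissimilar to the terms
# already selected; lam trades the two off. All scores below are invented.
relevance = {"apple": 0.9, "fruit": 0.8, "iphone": 0.7, "orchard": 0.6, "mac": 0.5}
similarity = {                       # symmetric toy term-term similarities
    ("apple", "fruit"): 0.8, ("apple", "iphone"): 0.7, ("apple", "orchard"): 0.6,
    ("apple", "mac"): 0.5, ("fruit", "orchard"): 0.7, ("iphone", "mac"): 0.8,
    ("fruit", "iphone"): 0.1, ("fruit", "mac"): 0.1, ("orchard", "iphone"): 0.1,
    ("orchard", "mac"): 0.1,
}
sim = lambda a, b: 1.0 if a == b else similarity.get((a, b), similarity.get((b, a), 0.0))

def mmr_select(candidates, k, lam=0.6):
    """Greedily select k expansion terms by MMR score."""
    selected = []
    while candidates and len(selected) < k:
        best = max(candidates, key=lambda t: lam * relevance[t]
                   - (1 - lam) * max((sim(t, s) for s in selected), default=0.0))
        selected.append(best)
        candidates.remove(best)
    return selected

print(mmr_select(list(relevance), k=3))   # diversified expansion terms
```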