28 research outputs found

    The Presumption of Conviction in Criminal Legislation: A Comparative Study

    Get PDF
    It is established that the accused is innocent until proven guilty by a final judicial decision, with the claimant bearing the burden of submitting the evidence of guilt. This follows from the presumption of innocence, a principle applied in most international and national laws. The principle is not absolute, however: comparative criminal legislation has created an exception that reverses the roles and places part of the burden of proving the facts on the defendant, in the sense that the accused is treated as guilty until he or she proves innocence; this may be called a presumption of conviction. This study therefore aims to determine the scope of the presumption of conviction in the penal legislation under consideration. It also evaluates the principle, including the justifications offered for this type of evidence, and gives clear answers to questions that may be raised about the legitimacy of the presumption of conviction in light of international conventions and the general principles of the presumption of innocence. The examination is carried out through comparative legislation, whether the presumption takes the form of legal evidence assuming the moral or material element of the crime, or the form of evidence the judge can deduce from the circumstances surrounding the incident. As a descriptive analysis of the presumption of conviction in comparative penal legislation, the study closes with a set of conclusions and recommendations.

    Clustering over the Cultural Heritage Linked Open Dataset: Xlendi Shipwreck

    Get PDF
    Cultural heritage (CH) resources are diverse, heterogeneous, discontinuous and subject to updates and revisions. The use of semantic web technologies combined with 3D graphical tools is proposed to improve the access, exploration, mining and enrichment of CH data in a standardized and more structured form. This paper presents a new ontology-based tool for visualizing spatial clustering over the 3D distribution of CH artifacts. The data processed come from the archaeological shipwreck "Xlendi, Malta", which was surveyed by photogrammetry and modeled with the Arpenteur ontology. Following semantic web best practices, the resulting CH dataset was published as linked open data (LOD).
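    As an illustration of the kind of spatial clustering such a tool supports, the sketch below groups 3D artefact coordinates with DBSCAN. The coordinates and clustering parameters are placeholders; the abstract does not specify the algorithm or settings actually used.

```python
# Minimal sketch: spatial clustering of 3D artefact coordinates.
# Assumes the artefact positions have already been retrieved from the
# published LOD dataset (e.g. via a SPARQL endpoint) into an (N, 3) array;
# the parameters below are illustrative, not taken from the paper.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_artefacts(xyz: np.ndarray, eps: float = 0.5, min_samples: int = 5):
    """Group 3D artefact positions into spatial clusters; label -1 marks noise."""
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(xyz)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two synthetic artefact concentrations plus scattered outliers.
    site_a = rng.normal(loc=[0.0, 0.0, -105.0], scale=0.3, size=(40, 3))
    site_b = rng.normal(loc=[5.0, 2.0, -106.0], scale=0.3, size=(40, 3))
    noise = rng.uniform(low=[-2, -2, -107], high=[7, 4, -104], size=(10, 3))
    labels = cluster_artefacts(np.vstack([site_a, site_b, noise]))
    print("clusters found:", len(set(labels) - {-1}))
```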

    The Quality of Effective University Education from the Viewpoint of Students of the Special Education Department

    Get PDF
    This study aims to identify the quality level of effective university education as perceived by special education students, and to determine whether this level differs according to gender, specialty and academic year, in a sample of 235 special education students. After applying the quality scale of effective university education, the results showed that special education students reported an above-average level of effective university education quality. The results also showed no differences between students according to the study variables.

    High-efficiency DNA extraction using poly(4,4′-cyclohexylidene bisphenol oxalate)-modified microcrystalline cellulose magnetite composite

    Get PDF
    In this study, we examined the DNA extraction capability of poly(4,4′-cyclohexylidene bisphenol oxalate) after surface modification and composite formation with microcrystalline cellulose (MCC) and magnetic iron oxide nanoparticles (NPs). Physical characterization techniques, including scanning electron microscopy (SEM), Fourier-transform infrared (FTIR) spectroscopy, energy-dispersive X-ray analysis (EDX), and thermogravimetric analysis (TGA), were applied to the poly(bisphenol Z oxalate)-MCC-magnetite composite at different stages of its formation, and the results confirmed the successful modification of the polymer surface. Of the three binding buffers tested, the 2 M GuHCl/EtOH buffer gave the highest extraction efficiency, 72.4% (out of 10,000 ng/μL), together with the corresponding total DNA yield and an A260/A280 absorbance ratio of 1.980. These results were compared against the other two buffers, phosphate-buffered saline (PBS) and NaCl. The lowest DNA extraction efficiency, 58.845% (8125 ng/μL) with an A260/A280 ratio of 1.818, was observed for PBS. The study concludes that DNA extraction efficiency is enhanced when the polymer is in composite form with cellulose and magnetite particles, compared with the bare polymer.
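    For readers unfamiliar with how such efficiency figures are expressed, the short sketch below assumes extraction efficiency is the recovered concentration as a percentage of the stated 10,000 ng/μL input; the 7,240 ng/μL value is derived here for illustration and is not reported in the abstract.

```python
# Minimal sketch of the percentage relationship, assuming
# efficiency (%) = recovered concentration / input concentration * 100.
# The recovered value below is derived for illustration only; the paper
# reports the percentage, not this intermediate number.

INPUT_CONC_NG_PER_UL = 10_000.0   # stated input concentration

def extraction_efficiency(recovered_ng_per_ul: float,
                          input_ng_per_ul: float = INPUT_CONC_NG_PER_UL) -> float:
    """Return extraction efficiency as a percentage of the input."""
    return 100.0 * recovered_ng_per_ul / input_ng_per_ul

# 72.4% of a 10,000 ng/uL input corresponds to roughly 7,240 ng/uL recovered.
print(f"{extraction_efficiency(7_240.0):.1f}%")  # -> 72.4%
```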

    DNA adsorption studies of poly(4,4′-cyclohexylidene bisphenol oxalate)/silica nanocomposites

    Get PDF
    The present study deals with the synthesis, characterization, and DNA extraction performance of poly(4,4′-cyclohexylidene bisphenol oxalate)/silica (Si) nanocomposites (NCs). The effect of varying the monomer/Si ratio (3.7%, 7%, and 13%) on the size and morphology of the resulting NC and on its DNA extraction capability was also studied. Two different methods were used for NC synthesis: direct mixing of poly(4,4′-cyclohexylidene bisphenol oxalate) with fumed Si, and in situ polymerization of the 4,4′-cyclohexylidene bisphenol monomer in the presence of fumed silica (11 nm). The NCs were thoroughly investigated using scanning electron microscopy (SEM), Fourier-transform infrared (FTIR) spectroscopy, differential scanning calorimetry (DSC), thermogravimetric analysis (TGA), powder X-ray diffraction (XRD), and Brunauer–Emmett–Teller (BET) analysis, and the results confirmed the successful formation of the poly(4,4′-cyclohexylidene bisphenol oxalate)/Si NC. Among the three NC samples, the one with 13% Si had the highest surface area, 12.237 m2/g, compared with the samples containing 7% Si (3.362 m2/g) and 3.7% Si (1.788 m2/g). Solid-phase DNA extraction studies further indicated that the efficiency is strongly influenced by the amount of polymer (0.2 g > 0.1 g > 0.02 g) and by the type of binding buffer. Among the three binding buffers tested, the guanidine hydrochloride/EtOH buffer produced the best results in terms of yield (1,348,000 ng) and extraction efficiency (3370 ng/mL), compared with the other two buffers, NaCl (2 M) and phosphate-buffered saline. Based on these results, the developed poly(4,4′-cyclohexylidene bisphenol oxalate)/Si NC can serve as a suitable candidate for extracting DNA in high amounts compared with other traditional solid-phase approaches.

    3D Structure Estimation of an Urban Environment from a Video Stream (Estimation de la structure 3D d'un environnement urbain à partir d'un flux vidéo)

    No full text
    In computer vision, estimating 3D structure from 2D images remains a fundamental problem, and 3D urban modelling and mapping is one of its emerging applications. Here, we are interested in street-level monocular 3D reconstruction from a mobile vehicle. In this setting, several challenges arise at different stages of the 3D reconstruction pipeline. In particular, the lack of textured areas in urban scenes produces low-density reconstructed point clouds, and the continuous motion of the vehicle prevents redundant views of the scene and leads to short feature-point lifetimes. In this context, we adopt piecewise planar 3D reconstruction, where the planarity assumption overcomes the aforementioned challenges. In this thesis, we introduce several improvements to the 3D structure estimation pipeline, in particular to the piecewise planar scene representation and modelling. First, we propose a novel approach for creating superpixel segmentations that respect 3D geometry, based on gradient-based boundary probability estimation that fuses colour and flow information in a weighted multi-layered model. A pixel-wise weighting is used in the fusion process to account for the uncertainty of the computed flow. This method produces superpixels that are unconstrained in size and shape. For applications that require size-constrained superpixels, such as 3D reconstruction from an image sequence, we develop a flow-based SLIC (Simple Linear Iterative Clustering) method that adapts the superpixels to the density of reconstructed points for better planar structure fitting. This is achieved by means of a new distance measure that takes into account an input density map in addition to the flow and spatial information. To increase the density of the reconstructed point cloud used for planar structure fitting, we propose a new approach that combines several matching methods with dense optical flow. A weighting scheme assigns a learned weight to each reconstructed point to control its influence on the structure fitting according to the accuracy of the matching method used. A weighted total least squares model then uses the reconstructed points and learned weights to fit planar structures, guided by the superpixel segmentation of the input image sequence. Moreover, the model handles occlusion boundaries between neighbouring scene patches to encourage connectivity and co-planarity, producing more realistic models. The final output is a complete, dense, visually appealing 3D model. The validity of the proposed approaches has been substantiated by comprehensive experiments and comparisons with state-of-the-art methods.
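    The weighted total least squares plane fitting mentioned above can be sketched as a weighted PCA of each patch's reconstructed points. The snippet below is a minimal illustration with placeholder weights; it is not the learned weights or the full pipeline of the thesis.

```python
# Minimal sketch of weighted total-least-squares plane fitting, in the
# spirit of the per-point weighting described above. The weights here are
# arbitrary placeholders, not the learned weights used in the thesis.
import numpy as np

def fit_plane_weighted(points: np.ndarray, weights: np.ndarray):
    """Fit a plane n.x + d = 0 to weighted 3D points via weighted PCA.

    points  : (N, 3) reconstructed 3D points of one superpixel/patch.
    weights : (N,) non-negative confidence weights.
    Returns (unit normal, offset d).
    """
    w = weights / weights.sum()
    centroid = (w[:, None] * points).sum(axis=0)
    centered = (points - centroid) * np.sqrt(w)[:, None]
    # The plane normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    return normal, -normal @ centroid

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Noisy points near the plane z = 0.2*x - 0.1*y + 3.
    xy = rng.uniform(-1, 1, size=(200, 2))
    z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + 3 + rng.normal(0, 0.01, 200)
    pts = np.column_stack([xy, z])
    w = rng.uniform(0.5, 1.0, 200)          # placeholder confidences
    n, d = fit_plane_weighted(pts, w)
    print("normal:", np.round(n, 3), "offset:", round(float(d), 3))
```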

    Joint Spatio-temporal Depth Features Fusion Framework for 3D Structure Estimation in Urban Environment

    No full text
    We present a novel approach to improve 3D structure estimation from an image stream in urban scenes. We consider a particular setup where the camera is installed on a moving vehicle. Applying the traditional structure-from-motion (SfM) technique in this case yields poor estimates of the 3D structure for several reasons, such as texture-less images, small baseline variations and dominant forward camera motion. Our idea is to introduce the monocular depth cues that exist in a single image and to add temporal constraints on the estimated 3D structure. We assume the scene is made up of small planar patches obtained by an over-segmentation method, and our goal is to estimate the 3D position of each of these planes. We propose a fusion framework that employs a Markov random field (MRF) model to integrate both spatial and temporal depth information; an advantage of this model is that it performs well even when some depth information is missing. Spatial depth information is obtained through a global and local feature extraction method inspired by Saxena et al. [1]. Temporal depth information is obtained via a sparse optical flow based structure-from-motion approach, which reduces estimation ambiguity by enforcing constraints on the camera motion. Finally, we apply a fusion scheme to produce a single 3D structure estimation.
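    As a rough illustration of how an MRF with quadratic potentials can fuse per-patch spatial and temporal depth estimates even when one of them is missing, the sketch below performs a simple coordinate-descent minimization; the actual features, potentials and inference used in the paper may differ.

```python
# Minimal sketch of fusing per-patch spatial and temporal depth estimates
# with an MRF that has quadratic (Gaussian) potentials, solved by simple
# coordinate descent. Illustrative only; not the paper's exact model.
import numpy as np

def fuse_depths(spatial, temporal, neighbours,
                lam_s=1.0, lam_t=1.0, mu=0.5, iters=100):
    """spatial/temporal: per-patch depth estimates, np.nan where missing.
    neighbours: list of neighbour index lists (patch adjacency graph)."""
    n = len(spatial)
    s_mask, t_mask = ~np.isnan(spatial), ~np.isnan(temporal)
    # Initialise with whichever observation is available (fallback 1.0).
    d = np.where(s_mask, spatial, np.where(t_mask, temporal, 1.0)).astype(float)
    for _ in range(iters):
        for i in range(n):
            num = mu * sum(d[j] for j in neighbours[i])   # smoothness pull
            den = mu * len(neighbours[i])
            if s_mask[i]:
                num += lam_s * spatial[i]; den += lam_s   # spatial unary term
            if t_mask[i]:
                num += lam_t * temporal[i]; den += lam_t  # temporal unary term
            if den > 0:
                d[i] = num / den
    return d

# Example: three adjacent patches; the middle one has no depth observation,
# so the smoothness term propagates a value to it.
spatial = np.array([4.0, np.nan, 6.0])
temporal = np.array([4.2, np.nan, np.nan])
print(fuse_depths(spatial, temporal, neighbours=[[1], [0, 2], [1]]))
```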

    Fusion of Dense Spatial Features and Sparse Temporal Features for 3D Structure Estimation in Urban Scenes

    No full text
    The authors present a novel approach to improve three-dimensional (3D) structure estimation from an image stream in urban scenes. They consider a particular setup where the camera is installed on a moving vehicle. Applying the traditional structure-from-motion (SfM) technique in this case yields poor estimates of the 3D structure for several reasons, such as texture-less images, small baseline variations and dominant forward camera motion. The authors' idea is to introduce the monocular depth cues that exist in a single image and to add temporal constraints on the estimated 3D structure. The scene is modelled as a set of small planar patches obtained by over-segmentation, and the goal is to estimate the 3D position of these planes. The authors propose a fusion scheme that employs a Markov random field model to integrate spatial and temporal depth features. Spatial depth is obtained by learning a set of global and local image features; temporal depth is obtained via a sparse optical flow based SfM approach, which reduces estimation ambiguity by enforcing constraints on the camera motion. Finally, the authors apply the fusion scheme to produce a single 3D structure estimation.
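    The temporal-depth step, sparse optical flow based SfM between consecutive frames, could look roughly like the OpenCV sketch below. The camera intrinsics and thresholds are placeholders, and the paper's own implementation is not reproduced here.

```python
# Minimal sketch of the temporal-depth step: sparse KLT tracks between two
# consecutive grayscale frames, essential-matrix pose recovery, and
# triangulation with OpenCV. Recovered depths are defined only up to scale.
import cv2
import numpy as np

def sparse_sfm_depths(img0_gray, img1_gray, K):
    """img0_gray, img1_gray: consecutive 8-bit grayscale frames; K: 3x3 intrinsics."""
    pts0 = cv2.goodFeaturesToTrack(img0_gray, maxCorners=1000,
                                   qualityLevel=0.01, minDistance=7)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(img0_gray, img1_gray, pts0, None)
    ok = status.ravel() == 1
    p0, p1 = pts0[ok].reshape(-1, 2), pts1[ok].reshape(-1, 2)
    E, _ = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC,
                                prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K)
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera
    P1 = K @ np.hstack([R, t])                          # second camera
    X = cv2.triangulatePoints(P0, P1, p0.T, p1.T)       # 4xN homogeneous
    X = X[:3] / X[3]
    return p0, X[2]   # tracked pixel locations and their (up-to-scale) depths
```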

    Monocular 3D structure estimation for urban scenes

    No full text

    Replica Update Strategy in Mobile Ad Hoc Networks

    No full text
    In mobile ad hoc networks, partitioning occurs frequently. Data replication techniques are used to improve data accessibility, but they require data consistency to be maintained when updates occur. In this paper, we propose a hybrid push-pull data update propagation scheme. The idea is to divide replica holders into SH (push) and LL (pull) categories. Updates are pushed to SH nodes whenever they occur, while LL nodes pull the updates from SH nodes at a frequency suited to their needs. The novelty of this method is that it minimizes communication cost while preserving a level of data consistency adapted to the needs of the mobile hosts.
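    A minimal sketch of the hybrid push-pull propagation described above is given below: updates are pushed to SH replica holders as they occur, while LL replica holders pull from an SH node at their own frequency. The classes and data are illustrative; the paper's grouping criteria and consistency bookkeeping are not modelled.

```python
# Minimal sketch of hybrid push/pull replica update propagation.
# SH nodes receive every update immediately (push); LL nodes fetch
# from an SH node whenever they choose to pull.
from dataclasses import dataclass, field

@dataclass
class Replica:
    name: str
    version: int = 0
    data: dict = field(default_factory=dict)

class Owner:
    def __init__(self, sh_nodes):
        self.sh_nodes = sh_nodes          # push group (SH category)
        self.version = 0
        self.data = {}

    def update(self, key, value):
        self.version += 1
        self.data[key] = value
        for node in self.sh_nodes:        # push to SH nodes immediately
            node.version, node.data = self.version, dict(self.data)

class PullNode(Replica):
    def pull_from(self, sh: Replica):
        """Called at this node's own frequency (LL category pulls on demand)."""
        if sh.version > self.version:
            self.version, self.data = sh.version, dict(sh.data)

# Example: the SH node receives the push; the LL node catches up when it pulls.
sh, ll = Replica("SH-1"), PullNode("LL-1")
owner = Owner([sh])
owner.update("temperature", 21.5)
print(ll.version)            # 0: not yet consistent
ll.pull_from(sh)
print(ll.version, ll.data)   # 1 {'temperature': 21.5}
```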