
    Analysis of Students' Mathematical-Biological Attitudes and Quantitative Literacy Skills in Quantitative-Literacy-Based Water Pollution Learning Using the MBVI

    This study, titled "Analysis of Students' Mathematical-Biological Attitudes and Quantitative Literacy Skills in Quantitative-Literacy-Based Water Pollution Learning Using the MBVI", aimed to describe students' mathematical-biological attitudes after quantitative-literacy-based instruction on water pollution. The research followed a quantitative, quasi-experimental approach with a non-equivalent control group design. The sample consisted of 207 tenth-grade senior high school students selected purposively. The Math-Biology Values Instrument (MBVI) was administered as a questionnaire to measure students' mathematical-biological attitudes, and a quantitative literacy test was used to measure their quantitative literacy skills. After the quantitative-literacy-based water pollution lessons, the experimental class showed higher mathematical-biological attitudes than the control class. A correlation test showed that students' mathematical-biological attitudes were significantly correlated (p < 0.05) with their quantitative literacy skills, and a regression test showed that quantitative literacy skills contributed positively to students' mathematical-biological attitudes
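
    As a rough illustration of the reported analysis (not the authors' actual scripts), the sketch below runs a correlation test and a simple linear regression with SciPy. The CSV file and column names are hypothetical, and whether the original study used Pearson or a non-parametric test is not specified here.

```python
# Minimal sketch of a correlation + simple linear regression analysis.
# The file name and column names ("mbvi_total", "ql_score") are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("mbvi_scores.csv")            # hypothetical data file
attitude = df["mbvi_total"].to_numpy()          # assumed MBVI questionnaire total
literacy = df["ql_score"].to_numpy()            # assumed quantitative literacy score

# Correlation between quantitative literacy and mathematical-biological attitude
r, p_value = stats.pearsonr(literacy, attitude)
print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")   # "significant" if p < 0.05

# Simple linear regression: attitude as a function of literacy
slope, intercept, r_val, p_reg, stderr = stats.linregress(literacy, attitude)
print(f"attitude ~ {intercept:.2f} + {slope:.2f} * literacy  (R^2 = {r_val**2:.3f})")
```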

    Multiview Registration via Graph Diffusion of Dual Quaternions

    Surface registration is a fundamental step in the reconstruction of three-dimensional objects. While there are several fast and reliable methods to align two surfaces, the tools available to align multiple surfaces are relatively limited. In this paper we propose a novel multiview registration algorithm that projects several pairwise alignments onto a common reference frame. The projection is performed by representing the motions as dual quaternions, an algebraic structure that is related to the group of 3D rigid transformations, and by performing a diffusion along the graph of adjacent (i.e., pairwise alignable) views. The approach allows for a completely generic topology over which the pairwise motions are diffused. An extensive set of experiments shows that the proposed approach is both orders of magnitude faster than the state of the art and more robust to extreme positional noise and outliers. The dramatic speedup of the approach allows it to be alternated with pairwise alignment, resulting in a smoother energy profile and reducing the risk of getting stuck in local minima
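
    For readers unfamiliar with the representation, the following sketch shows, under stated assumptions, how rigid motions can be encoded as dual quaternions, composed, blended, and diffused over a view graph. It is a conceptual illustration with a simple uniform-weight update, not the paper's exact algorithm.

```python
# Conceptual sketch: rigid motions as dual quaternions, blended and diffused
# over a view graph. Quaternions are (w, x, y, z); weights are uniform here.
import numpy as np

def qmul(a, b):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def dq_from_rt(q_rot, t):
    """Dual quaternion (real, dual) from a unit rotation quaternion and a translation."""
    q_t = np.array([0.0, *t])
    return q_rot, 0.5 * qmul(q_t, q_rot)

def dq_mul(a, b):
    """Composition of two rigid motions in dual-quaternion form (apply b, then a)."""
    ar, ad = a
    br, bd = b
    return qmul(ar, br), qmul(ar, bd) + qmul(ad, br)

def dq_blend(dqs, weights):
    """Weighted linear blend (DLB-style) of dual quaternions with antipodal correction."""
    ref = dqs[0][0]
    acc_r, acc_d = np.zeros(4), np.zeros(4)
    for (qr, qd), w in zip(dqs, weights):
        s = 1.0 if np.dot(qr, ref) >= 0 else -1.0   # flip antipodal representatives
        acc_r += w * s * qr
        acc_d += w * s * qd
    n = np.linalg.norm(acc_r)
    return acc_r / n, acc_d / n

def diffuse(absolute, pairwise, graph):
    """One diffusion sweep. absolute[i]: motion of view i into the common frame;
    pairwise[(j, i)]: motion of view i into view j's frame; graph[i]: neighbors of i."""
    updated = {}
    for i, neighbors in graph.items():
        candidates = [dq_mul(absolute[j], pairwise[(j, i)]) for j in neighbors]
        updated[i] = dq_blend(candidates, [1.0 / len(candidates)] * len(candidates))
    return updated
```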

    Certification of Gaussian Boson Sampling via graphs feature vectors and kernels

    Gaussian Boson Sampling (GBS) is a non-universal model of quantum computing inspired by the original formulation of the Boson Sampling (BS) problem. It currently represents a paradigmatic quantum platform for reaching the quantum-advantage regime in a specific computational model. Indeed, thanks to implementations on photonic processors, the latest GBS experiments have reached a level of complexity at which the quantum apparatus solves the task faster than the best currently available classical strategies. In addition, recent studies have identified possible applications beyond the inherent sampling task. In particular, a direct connection has been established between the photon counts of a genuine GBS device and the number of perfect matchings in a graph. In this work, we propose to exploit this connection to benchmark GBS experiments. We interpret the properties of the feature vectors of the graph encoded in the device as a signature of correct sampling from the true input state. Within this framework, two approaches are presented. The first method exploits the distributions of graph feature vectors and classification via neural networks. The second approach investigates the distributions of graph kernels. Our results provide a novel answer to the pressing need for tailored algorithms to benchmark large-scale Gaussian Boson Samplers
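
    To make the photon-count/perfect-matching link concrete, here is a minimal brute-force counter of perfect matchings for a small graph given as an adjacency matrix. Real GBS-sized graphs require hafnian-based methods, so this is purely illustrative of the graph quantity involved.

```python
# Brute-force count of perfect matchings in a small undirected graph.
def count_perfect_matchings(adj, vertices=None):
    """adj: 0/1 adjacency matrix (list of lists); returns the number of perfect matchings."""
    if vertices is None:
        vertices = list(range(len(adj)))
    if not vertices:
        return 1                      # all vertices matched
    v = vertices[0]
    total = 0
    for u in vertices[1:]:
        if adj[v][u]:                 # match v with neighbor u, recurse on the rest
            rest = [w for w in vertices if w not in (v, u)]
            total += count_perfect_matchings(adj, rest)
    return total

# Example: a 4-cycle has exactly two perfect matchings.
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(count_perfect_matchings(C4))    # -> 2
```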

    Towards precise completion of deformable shapes

    According to Aristotle, “the whole is greater than the sum of its parts”. This statement was adopted to explain human perception by the Gestalt psychology school of thought in the twentieth century. Here, we claim that when observing a part of an object which was previously acquired as a whole, one could deal with both partial correspondence and shape completion in a holistic manner. More specifically, given the geometry of a full, articulated object in a given pose, as well as a partial scan of the same object in a different pose, we address the new problem of matching the part to the whole while simultaneously reconstructing the new pose from its partial observation. Our approach is data-driven and takes the form of a Siamese autoencoder without the requirement of a consistent vertex labeling at inference time; as such, it can be used on unorganized point clouds as well as on triangle meshes. We demonstrate the practical effectiveness of our model in the applications of single-view deformable shape completion and dense shape correspondence, both on synthetic and real-world geometric data, where we outperform prior work by a large margin
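
    A minimal sketch of what a Siamese point-cloud autoencoder can look like is given below (PyTorch). The layer sizes, the shared PointNet-style encoder, and the MLP decoder are placeholders of mine, not the architecture published in the paper.

```python
# Sketch of a Siamese point-cloud autoencoder: one shared encoder embeds both the
# full shape and the partial scan; a decoder reconstructs a point set from the code.
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    def __init__(self, latent=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, latent, 1),
        )

    def forward(self, pts):                          # pts: (B, N, 3)
        feats = self.mlp(pts.transpose(1, 2))        # (B, latent, N)
        return feats.max(dim=2).values               # global max-pool -> (B, latent)

class PointDecoder(nn.Module):
    def __init__(self, latent=256, n_points=2048):
        super().__init__()
        self.n_points = n_points
        self.mlp = nn.Sequential(
            nn.Linear(latent, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, n_points * 3),
        )

    def forward(self, z):                            # z: (B, latent)
        return self.mlp(z).view(-1, self.n_points, 3)

class SiameseAE(nn.Module):
    def __init__(self, latent=256, n_points=2048):
        super().__init__()
        self.encoder = PointEncoder(latent)          # shared ("Siamese") weights
        self.decoder = PointDecoder(latent, n_points)

    def forward(self, full_pts, partial_pts):
        z_full = self.encoder(full_pts)
        z_part = self.encoder(partial_pts)
        # A training loss could pull z_full and z_part together and compare the
        # completed output against the full shape (e.g., with a Chamfer distance).
        return self.decoder(z_part), z_full, z_part
```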

    Robust Camera Calibration using Inaccurate Targets

    Accurate intrinsic camera calibration is essential to any computer vision task that involves image-based measurements. Given its crucial role with respect to precision, a large number of approaches have been proposed over the last decades. Despite this rich literature, steady advances in imaging hardware regularly push forward the need for even more accurate techniques. Some authors suggest generalizations of the camera model itself, others propose novel designs for calibration targets or different optimization schemes. In this paper we take a completely different route by directly addressing one of the most overlooked problems in practical calibration scenarios. Specifically, we drop the assumption that the target is known with sufficient precision and we adjust it iteratively as part of the whole process. This is in fact the situation with the typical target used in most of the calibration literature, which is usually printed on paper and attached to a flat surface. In the experimental section we show that even with such a cheaply crafted target it is possible to obtain a very accurate camera calibration that outperforms those obtained with well-known standard techniques
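
    The sketch below illustrates one possible alternation between standard intrinsic calibration and a refinement of the imperfect target geometry, using OpenCV and SciPy. The helper names, the planar z = 0 board assumption, and the plain least-squares refinement are mine and do not reproduce the paper's optimization.

```python
# Alternating scheme: calibrate with the current board model, then adjust the
# board's (x, y) point positions to reduce reprojection error, and repeat.
import numpy as np
import cv2
from scipy.optimize import least_squares

def refine_board(obj_pts, img_pts_per_view, K, dist, rvecs, tvecs):
    """Adjust the board's (x, y) coordinates, keeping z = 0 (planar-target assumption)."""
    n = obj_pts.shape[0]

    def residuals(xy):
        board = np.column_stack([xy.reshape(n, 2), np.zeros(n)]).astype(np.float32)
        errs = []
        for img, r, t in zip(img_pts_per_view, rvecs, tvecs):
            proj, _ = cv2.projectPoints(board, r, t, K, dist)
            errs.append((proj.reshape(-1, 2) - img.reshape(-1, 2)).ravel())
        return np.concatenate(errs)

    sol = least_squares(residuals, obj_pts[:, :2].ravel())
    return np.column_stack([sol.x.reshape(n, 2), np.zeros(n)]).astype(np.float32)

def calibrate_with_inaccurate_target(obj_pts, img_pts_per_view, image_size, iters=3):
    """obj_pts: (N, 3) float32 board model; img_pts_per_view: list of (N, 2) detections."""
    for _ in range(iters):
        obj_list = [obj_pts] * len(img_pts_per_view)
        _, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_list, img_pts_per_view, image_size, None, None)
        obj_pts = refine_board(obj_pts, img_pts_per_view, K, dist, rvecs, tvecs)
    return K, dist, obj_pts
```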

    MV-MS-FETE: Multi-view multi-scale feature extractor and transformer encoder for stenosis recognition in echocardiograms

    Background: aortic stenosis is a common heart valve disease that mainly affects older people in developed countries. Its early detection is crucial to prevent irreversible disease progression and, eventually, death. A typical screening technique to detect stenosis uses echocardiograms; however, variations introduced by other tissues, camera movements, and uneven lighting can hamper the visual inspection, leading to misdiagnosis. To address these issues, effective solutions involve employing deep learning algorithms to assist clinicians in detecting and classifying stenosis by developing models that can predict this pathology from single heart views. Although promising, the visual information conveyed by a single image may not be sufficient for an accurate diagnosis, especially when using an automatic system, suggesting that different solutions should be explored. Methodology: following this rationale, this paper proposes a novel deep learning architecture, composed of a multi-view, multi-scale feature extractor and a transformer encoder (MV-MS-FETE), to predict stenosis from parasternal long- and short-axis views. In particular, starting from these views, the designed model extracts relevant features at multiple scales in its feature-extractor component and takes advantage of a transformer encoder to perform the final classification. Results: experiments were performed on the recently released Tufts medical echocardiogram public dataset, which comprises 27,788 images split into training, validation, and test sets. Due to the recent release of this collection, tests were also conducted on several state-of-the-art models to create multi-view and single-view benchmarks. For all models, standard classification metrics were computed (e.g., precision, F1-score). The obtained results show that the proposed approach outperforms other multi-view methods in terms of accuracy and F1-score and has more stable performance throughout the training procedure. Furthermore, the experiments also highlight that multi-view methods generally perform better than their single-view counterparts. Conclusion: this paper introduces a novel multi-view and multi-scale model for aortic stenosis recognition, as well as three benchmarks to evaluate it, providing multi-view and single-view comparisons that highlight the model's effectiveness in aiding clinicians in performing diagnoses while also producing several baselines for the aortic stenosis recognition task
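
    A rough PyTorch sketch of the overall idea (per-view multi-scale CNN features turned into tokens and fused by a transformer encoder) is shown below. Channel sizes, depths, and pooling choices are placeholders rather than the released MV-MS-FETE model.

```python
# Sketch of a multi-view, multi-scale feature extractor + transformer encoder classifier.
import torch
import torch.nn as nn

class MultiScaleExtractor(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.proj = nn.ModuleList([nn.Linear(c, dim) for c in (32, 64, dim)])

    def forward(self, x):                                   # x: (B, 1, H, W) echo frame
        tokens = []
        for stage, proj in zip((self.stage1, self.stage2, self.stage3), self.proj):
            x = stage(x)
            tokens.append(proj(self.pool(x).flatten(1)))    # one token per scale
        return torch.stack(tokens, dim=1)                   # (B, 3, dim)

class MultiViewStenosisNet(nn.Module):
    def __init__(self, dim=128, n_classes=2):
        super().__init__()
        self.extractor = MultiScaleExtractor(dim)           # shared across the two views
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, long_axis, short_axis):
        tokens = torch.cat([self.extractor(long_axis),
                            self.extractor(short_axis)], dim=1)   # (B, 6, dim)
        fused = self.encoder(tokens).mean(dim=1)                  # pool fused tokens
        return self.head(fused)                                   # class logits
```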

    Sampling Relevant Points for Surface Registration

    Surface registration is a fundamental step in the reconstruction of three-dimensional objects. This is typically a two-step process in which an initial coarse motion estimation is followed by a refinement step, almost invariably some variant of Iterative Closest Point (ICP), which iteratively minimizes a distance function measured between pairs of selected neighboring points. The selection of relevant points on one surface to match against points on the other surface is an important issue in any efficient implementation of ICP, with strong implications both for the convergence speed and for the quality of the final alignment. This is due to the fact that a surface typically contains many low-curvature points that scarcely constrain the rigid transformation and an order of magnitude fewer descriptive points that are more relevant for finding the correct alignment. This results in a tendency of surfaces to "overfit" noise in low-curvature areas, sliding away from the correct alignment. In this paper we propose a novel relevant-point sampling approach for ICP based on the idea that points in areas of great change constrain the transformation more and thus should be sampled with higher frequency. Experimental evaluations compare the alignment accuracy obtained with the proposed approach against that obtained with the commonly adopted uniform subsampling and normal-space sampling strategies. © 2011 IEEE
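
    A hedged sketch of curvature-proportional sampling is given below; curvature is approximated here by the surface-variation measure from a local PCA, which may differ from the exact relevance score used in the paper.

```python
# Sample points for ICP with probability proportional to a local curvature proxy
# (surface variation = smallest eigenvalue / sum of eigenvalues of the local covariance).
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points, k=20):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    scores = np.empty(len(points))
    for i, nb in enumerate(idx):
        nbh = points[nb] - points[nb].mean(axis=0)
        eigvals = np.linalg.eigvalsh(nbh.T @ nbh)        # ascending eigenvalues
        scores[i] = max(eigvals[0], 0.0) / max(eigvals.sum(), 1e-12)
    return scores

def sample_relevant_points(points, n_samples, k=20, seed=None):
    rng = np.random.default_rng(seed)
    w = surface_variation(points, k) + 1e-12             # avoid all-zero weights on planes
    return rng.choice(len(points), size=n_samples, replace=False, p=w / w.sum())

# Usage: indices = sample_relevant_points(cloud, 2000); run ICP on cloud[indices].
```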

    A Non-Cooperative Game for 3D Object Recognition in Cluttered Scenes

    During the last few years a wide range of algorithms and devices have been made available to easily acquire range images. As a result, the increasing abundance of depth data boosts the need for reliable and unsupervised analysis techniques, spanning from part registration to automated segmentation. In this context, we focus on the recognition of known objects in cluttered and incomplete 3D scans. Fitting a model to a scene is a very important task in many scenarios such as industrial inspection, scene understanding and even gaming. For this reason, the problem has been extensively tackled in the literature. Nevertheless, while many descriptor-based approaches have been proposed, a number of hurdles still hinder the use of global techniques. In this paper we try to offer a different perspective on the topic. Specifically, we adopt an evolutionary selection algorithm in order to extend the scope of local descriptors to satisfy global pairwise constraints. In addition, the very same technique is used to move from an initial sparse correspondence to a dense matching. This leads to a novel pipeline for 3D object recognition, which is validated with an extensive set of experiments and comparisons with recent well-known feature-based approaches. © 2011 IEEE
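
    The generic game-theoretic matching scheme alluded to above can be sketched as follows: candidate correspondences are the strategies, the payoff rewards pairs of matches that preserve Euclidean distances (a rigidity constraint), and replicator dynamics concentrate the population on a mutually consistent group. The payoff definition and parameters here are illustrative, not the paper's exact formulation.

```python
# Evolutionary (replicator-dynamics) selection of mutually consistent correspondences.
import numpy as np

def rigidity_payoff(model_pts, scene_pts, matches, sigma=0.05):
    """matches: list of (model_index, scene_index) candidate correspondences."""
    n = len(matches)
    A = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            i, j = matches[a]
            k, l = matches[b]
            dm = np.linalg.norm(model_pts[i] - model_pts[k])
            ds = np.linalg.norm(scene_pts[j] - scene_pts[l])
            A[a, b] = np.exp(-abs(dm - ds) / sigma)   # high payoff if distances agree
    return A

def replicator_dynamics(A, iters=200, tol=1e-8):
    x = np.full(A.shape[0], 1.0 / A.shape[0])         # start from the barycenter
    for _ in range(iters):
        Ax = A @ x
        new_x = x * Ax / max(x @ Ax, 1e-12)           # standard replicator update
        if np.linalg.norm(new_x - x, 1) < tol:
            break
        x = new_x
    return x   # large entries mark the selected, mutually compatible matches
```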

    Robust Figure Extraction on Textured Background: A Game-Theoretic Approach

    Feature-based image matching relies on the assumption that the features contained in the model are distinctive enough. When both model and data present a sizeable amount of clutter, the signal-to-noise ratio falls and detection becomes more challenging. If such clutter exhibits a coherent structure, as is the case for textured backgrounds, matching becomes even harder. In fact, the large number of repeatable features extracted from the texture dims the strength of the relatively few interesting points of the object itself. In this paper we introduce a game-theoretic approach that makes it possible to distinguish foreground features from background ones. In addition, the same technique can be used to handle the object matching itself. The whole procedure is validated by applying it to a practical scenario and by comparing it with a standard point-pattern matching technique. © 2010 IEEE

    Can a Fully Unconstrained Imaging Model Be Applied Effectively to Central Cameras?

    Traditional camera models are often the result of a compromise between the ability to account for non-linearities in the image formation model and the need for a feasible number of degrees of freedom in the estimation process. These considerations have led to the definition of several ad hoc models that best adapt to different imaging devices, ranging from pinhole cameras with no radial distortion to the more complex catadioptric or polydioptric optics. In this paper we propose the use of an unconstrained model even in standard central-camera settings dominated by the pinhole model, and we introduce a novel calibration approach that can deal effectively with the huge number of free parameters associated with it, resulting in a higher-precision calibration than is possible with the standard pinhole model with radial-distortion correction. This effectively extends the use of general models to settings that have traditionally been ruled by parametric approaches out of practical considerations. The benefit of such an unconstrained model for quasi-pinhole central cameras is supported by an extensive experimental validation
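
    To clarify what "unconstrained model" means in practice, the toy sketch below stores one 3D viewing ray per pixel, fitted by PCA to the target points that pixel observed across several known target poses. The paper's actual calibration procedure is far more elaborate; this only illustrates the model itself.

```python
# Toy "general camera" model: one 3D viewing ray per pixel, fitted to observations.
import numpy as np

def fit_ray(points_3d):
    """Fit a 3D line (origin, unit direction) to the 3D points seen by one pixel."""
    pts = np.asarray(points_3d, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]                                  # principal direction of the points
    return centroid, direction / np.linalg.norm(direction)

def calibrate_general_model(observations):
    """observations: dict mapping (u, v) pixel -> list of 3D target points (one per pose)."""
    return {pix: fit_ray(pts) for pix, pts in observations.items() if len(pts) >= 2}

# For a quasi-pinhole camera all fitted rays should (nearly) meet at a single point;
# the unconstrained model simply does not impose that constraint.
```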