On the Use of Efficient Projection Kernels for Motion-Based Visual Saliency Estimation
In this paper, we investigate the potential of a family of efficient filters—the Gray-Code Kernels (GCKs)—for addressing visual saliency estimation with a focus on motion information. Our implementation relies on 3D kernels applied to overlapping blocks of frames and is able to gather meaningful spatio-temporal information with very light computation. We introduce an attention module that reasons on the use of pooling strategies, combined in an unsupervised way to derive a saliency map highlighting the presence of motion in the scene. A coarse segmentation map can also be obtained. In the experimental analysis, we evaluate our method on publicly available datasets and show that it is able to effectively and efficiently identify the portion of the image where the motion is occurring, providing tolerance to a variety of scene conditions and complexities.
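To make the overall idea concrete, the following is a minimal Python sketch, not the paper's implementation: a bank of separable 3D plus/minus-one kernels (outer products of 1D Walsh-Hadamard rows, the family to which the Gray-Code Kernels belong) is correlated with a block of frames and the absolute responses are pooled into a motion saliency map. The function names are illustrative, plain convolutions stand in for the incremental GCK update scheme, and the paper's attention/pooling module is not reproduced here.

```python
import numpy as np
from itertools import product
from scipy.ndimage import convolve

def walsh_hadamard_1d(order):
    # 1D Walsh-Hadamard basis of length 2**order; each row is a +/-1 kernel.
    H = np.array([[1.0]])
    for _ in range(order):
        H = np.block([[H, H], [H, -H]])
    return H

def motion_saliency(frames, order=2):
    """Naive sketch: project a block of frames onto separable 3D +/-1 kernels
    and pool absolute responses into a per-pixel saliency value.
    The real GCK scheme computes these projections incrementally with only a
    few operations per pixel; here plain convolutions are used for clarity."""
    H = walsh_hadamard_1d(order)                          # rows = 1D kernels
    block = np.stack(frames, axis=0).astype(np.float64)   # shape (T, H, W)
    saliency = np.zeros(block.shape[1:])
    # Keep only kernels with a non-constant temporal component (t > 0),
    # so the responses emphasise change over time rather than appearance.
    for t, y, x in product(range(1, H.shape[0]), range(H.shape[0]), range(H.shape[0])):
        k = np.einsum('i,j,k->ijk', H[t], H[y], H[x])      # separable 3D kernel
        resp = convolve(block, k, mode='nearest')
        saliency += np.abs(resp).mean(axis=0)              # pool over time and kernels
    saliency /= saliency.max() + 1e-12
    return saliency                                        # values in [0, 1]

# A coarse segmentation could then be, e.g., mask = motion_saliency(frames) > 0.5
```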
Smart rogaining for computer science orientation
In this paper, we address the problem of designing new formats of computer science orientation activities to be offered during high school students' internships in Computer Science Bachelor degree programs. In order to cover a wide range of computer science topics as well as to deal with soft skills and gender gap issues, we propose a teamwork format, called smart rogaining, that offers engaging introductory activities to prospective students at a series of checkpoints distributed along the different stages of a rogaine. The format is supported by a smart mobile and web application. Our proposal aims to stimulate the interest of participants in different areas of computer science and to improve the digital and soft skills of participants and, as a side effect, of staff members (instructors and university students). In the paper, we introduce the proposed format and discuss our experience in the editions organized at the University of Genoa before the COVID-19 pandemic (the 2019 and 2020 editions).
Personalized therapy for mycophenolate: consensus report by the International Association of Therapeutic Drug Monitoring and Clinical Toxicology
When mycophenolic acid (MPA) was originally marketed for immunosuppressive therapy, fixed doses were recommended by the manufacturer. Awareness of the potential for more personalized dosing has led to the development of methods to estimate the MPA area under the curve based on the measurement of drug concentrations in only a few samples. This approach is feasible in the clinical routine and has proven successful in terms of correlation with outcome. However, the search for superior correlates has continued, and numerous studies in search of biomarkers that could better predict the ideal dosage for the individual patient have been published. As it was considered timely to provide an updated and comprehensive presentation of the consensus on the status of personalized treatment with MPA, this report was prepared following an initiative from members of the International Association of Therapeutic Drug Monitoring and Clinical Toxicology (IATDMCT). Topics included are the criteria for analytics, methods to estimate exposure including pharmacometrics, the potential influence of pharmacogenetics, the development of biomarkers, and the practical aspects of implementing target concentration intervention. For selected topics with sufficient evidence, such as the application of limited sampling strategies for the MPA area under the curve, graded recommendations on target ranges are presented. To provide a comprehensive review, this report also includes updates on the status of potential biomarkers, including those which may be promising but have a low level of evidence. Given that there are very few new immunosuppressive drugs under development for the transplant field, it is likely that MPA will continue to be prescribed on a large scale in the upcoming years. Discontinuation of therapy due to adverse effects is relatively common, increasing the risk of late rejections, which may contribute to graft loss. Therefore, the continued search for innovative methods to better personalize MPA dosage is warranted.
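As a purely illustrative sketch of what a limited-sampling estimator looks like in practice, the snippet below contrasts a full-profile trapezoidal AUC with a generic linear limited-sampling model built from a few early concentrations. The function names are hypothetical, the coefficients are deliberately not supplied (they must come from a validated, population-specific published model), and nothing here is a clinical recommendation.

```python
import numpy as np

def auc_full_profile(times_h, conc_mg_per_l):
    """Reference AUC from a densely sampled concentration-time profile,
    computed with the simple trapezoidal rule."""
    return float(np.trapz(conc_mg_per_l, times_h))

def auc_limited_sampling(concentrations, coefficients, intercept):
    """Generic limited-sampling estimator: AUC ~ intercept + sum(a_i * C_i),
    where C_i are concentrations at a few fixed sampling times.
    The coefficients and intercept are placeholders: they must be taken from
    a validated published model for the relevant population and assay."""
    return intercept + float(np.dot(coefficients, concentrations))

# Usage sketch (hypothetical): estimate AUC_0-12h from samples taken pre-dose,
# 0.5 h and 2 h post-dose, then compare with auc_full_profile() computed on a
# research-grade dense profile to assess agreement.
```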
Semi-supervised learning of sparse representations to recognize people spatial orientation
In this paper we consider the problem of classifying people's spatial orientation with respect to the camera viewpoint from 2D images. Structured multi-class feature selection allows us to control the amount of redundancy in our input data, while semi-supervised learning helps us cope with the intrinsic ambiguity of the output labels. We model the multi-class classification problem with an all-pairs strategy based on the use of a coding matrix. A thorough experimental evaluation on the TUD Multiview Pedestrian benchmark dataset demonstrates the superiority of our approach with respect to the state of the art.
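A minimal sketch of the all-pairs coding-matrix strategy mentioned above follows; the structured feature selection and semi-supervised components of the method are not shown, and all names are illustrative. One binary classifier is trained per pair of orientation classes, and a test sample is decoded against the class code words.

```python
import numpy as np
from itertools import combinations

def all_pairs_coding_matrix(n_classes):
    """Coding matrix M (n_classes x n_pairs): the column for pair (i, j) is
    +1 for class i, -1 for class j, and 0 for every other class."""
    pairs = list(combinations(range(n_classes), 2))
    M = np.zeros((n_classes, len(pairs)), dtype=int)
    for col, (i, j) in enumerate(pairs):
        M[i, col], M[j, col] = 1, -1
    return M, pairs

def decode(binary_scores, M):
    """Pick the class whose code word best agrees with the signs of the
    per-pair classifier scores (simple Hamming-style / voting decoding)."""
    signs = np.sign(binary_scores)        # one score per pairwise classifier
    agreement = M @ signs                 # agreements minus disagreements
    return int(np.argmax(agreement))

# Usage sketch: train one binary classifier per column (pair of orientations),
# collect its score for a test image into binary_scores, then call decode().
```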
Exploring the Use of Efficient Projection Kernels for Motion Saliency Estimation
In this paper we investigate the potential of a family of efficient filters – the Gray-Code Kernels – for addressing visual saliency estimation guided by motion. Our implementation relies on 3D kernels applied to overlapping blocks of frames and is able to gather meaningful spatio-temporal information with very light computation. We introduce an attention module that reasons on the use of pooling strategies, combined in an unsupervised way to derive a saliency map highlighting the presence of motion in the scene. In the experiments we show that our method is able to effectively and efficiently identify the portion of the image where the motion is occurring, providing tolerance to a variety of scene conditions.
Learning common behaviors from large sets of unlabeled temporal series
This paper is about extracting knowledge from large sets of videos, with particular reference to the video-surveillance application domain. We consider an unsupervised framework and address the specific problem of modeling common behaviors from long-term collections of instantaneous observations. Such data describe dynamic events and may be represented as time series in an appropriate feature space. Starting from a set of data representative of the common events in a given scenario, the pipeline we propose includes a data abstraction level, which allows us to process different data in a homogeneous way, and a behavior modeling level based on spectral clustering. At the end of the pipeline we obtain a model of the behaviors that are most frequent in the observed scene, each represented by a prototypical behavior, which we call a cluster candidate. We report a detailed experimental evaluation on both benchmark datasets and a complex set of data collected in-house. The experiments show that our method compares very favorably with other approaches from the recent literature. In particular, the results we obtain prove that our method is able to capture meaningful information and discard noisy information from very heterogeneous datasets with different levels of prior information available.
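The snippet below gives a minimal, self-contained sketch of the behavior-modeling stage under simplifying assumptions: each observation is a 1-D time series, the data abstraction step is reduced to resampling to a common length, and scikit-learn's spectral clustering stands in for the clustering described in the paper. Names and parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_behaviors(series, n_clusters=5, sigma=1.0):
    """Minimal sketch: resample each 1-D time series to a common length
    (a stand-in for the data-abstraction level), build a Gaussian affinity
    from pairwise Euclidean distances, and run spectral clustering."""
    L = min(len(s) for s in series)
    X = np.stack([np.interp(np.linspace(0, len(s) - 1, L),
                            np.arange(len(s)), np.asarray(s, float))
                  for s in series])
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
    A = np.exp(-(d ** 2) / (2 * sigma ** 2))                     # affinity matrix
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity='precomputed').fit_predict(A)
    # A "cluster candidate" could then be, e.g., the medoid series of each cluster.
    return labels
```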
- …