
    Deep Neural Networks and Data for Automated Driving

    This open access book brings together the latest developments from industry and research on automated driving and artificial intelligence. Environment perception for highly automated driving relies heavily on deep neural networks, which face many challenges: How much data do we need for training and testing? How can synthetic data be used to save labeling costs for training? How do we increase robustness and decrease memory usage? And, for inevitably poor conditions: How do we know that a network is uncertain about its decisions? Can we understand a bit more about what actually happens inside neural networks? This leads to a very practical problem, particularly for DNNs employed in automated driving: What are useful validation techniques, and what about safety? This book unites the views of both academia and industry, where computer vision and machine learning meet environment perception for highly automated driving. Naturally, aspects of data, robustness, uncertainty quantification, and, last but not least, safety are at its core. The book is unique: its first part provides an extended survey of all the relevant aspects, while the second part contains the detailed technical elaboration of the various questions mentioned above.

    Volume X, No. 3

    Atkinson, Norman. “Philosophy for Children Comes to Africa.” 13-14. Cresswell, Roger. “Spreading Thoughts.” 29-34. Halloran, Dorothy. “Thinking Skills Program in Lithuania.” 15. Jones, Beau Fly. “Report from North America: A Brief Overview of Cognitive Design Strategies.” 35-36. Kennedy, David. “Why Philosophy for Children Now?” 9-6. Lipman, Matthew. “Proceedings of the 1973 Conference on Pre-College Philosophy.” 37-41. Matthews, Gareth. “Thinking in Stories: ‘The Cat Who Thought She Was a Dog and the Dog Who Thought He Was a Cat,’ by Isaac Bashevis Singer.” 1. Morehouse, Mort. “Philosophy for Children: Curriculum and Practice.” 7-12. Wieder, Charles G. “Children Around the World” and “Drawings.” 44-43. Woolcock, Peter G. “Skills-Grouping as a Teaching Approach to the Philosophy for Children Program.” 23-28.

    Poisson-Binomial counting for learning prediction sparsity

    The loss function is an integral component of any successful deep neural network training; it guides the optimization process by reducing all aspects of a model into a single number that must best capture the overall objective of the learning. Recently, the maximum-likelihood parameter estimation principle has grown to become the default framework for selecting loss functions, hence resulting in the prevalence of the cross-entropy for classification and the mean-squared error for regression applications (Goodfellow et al., 2016). Loss functions can however be tailored further to convey prior knowledge about the task or the dataset at hand to the training process (e.g., class imbalances (Huang et al., 2016a; Cui et al., 2019), perceptual consistency (Reed et al., 2014), and attribute awareness (Jiang et al., 2019)). Overall, by designing loss functions that account for known priors, a more targeted supervision can be achieved with often improved performance. In this work, we focus on the ubiquitous prior of prediction sparsity, which underlies many applications that involve probability estimation. More precisely, while the iterative nature of gradient descent learning often requires models to be able to continuously reach any probability estimate between 0 and 1 during training, the optimal solution to the optimization problem (w.r.t. the ground truth) is often sparse, with clear-cut probabilities (i.e., converging towards either 1 or 0). For instance, in object detection, the decision that must be made by the models to either keep or discard estimated bounding-boxes for final predictions (e.g., non-maximum suppression) is binary. Similarly, in music onset detection, the optimal predictions are sparse: it is known that only a few points in time should be assigned a high likelihood, while no probability mass should be allocated to any other timestep.
In these applications, incorporating this important prior directly in the training process through the design of the loss function offers more tailored supervision that better captures the underlying objective. To that effect, this work introduces a novel loss function that relies on instance counting to achieve prediction sparsity. More precisely, as shown in the theoretical part of this work, modeling occurrence counts as a Poisson-binomial distribution results in a differentiable training objective that has the unique intrinsic ability to converge probability estimates towards sparsity. In this setting, sparsity is thus not attained through an explicit sparsity-inducing operation, but is rather implicitly learned by the model as a byproduct of learning to count instances. We demonstrate that this cost function can be leveraged as a standalone loss function (e.g., for the weakly-supervised learning of temporal localization) as well as a sparsity regularizer in conjunction with other, more targeted loss functions to enforce sparsity constraints in an end-to-end fashion. By design, the proposed approach finds use in the many applications where the optimal predictions are known to be sparse. We thus validate the loss function on a wide array of tasks including weakly-supervised drum detection, piano onset detection, single-molecule localization microscopy, and robust event detection in videos or in wearable-sensor time series. Overall, the experiments conducted in this work not only highlight the effectiveness and the relevance of Poisson-binomial counting as a means of supervision, but also demonstrate that integrating prediction sparsity directly in the learning process can have a significant impact on generalization capability, noise robustness, and detection accuracy.
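    The core computation described above — turning per-timestep probabilities into a differentiable distribution over occurrence counts — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the standard O(n²) dynamic-programming recursion for the Poisson-binomial PMF, and the function names are hypothetical.

    ```python
    import math

    def poisson_binomial_pmf(probs):
        """PMF of K = sum of independent Bernoulli(p_i) trials, computed with
        the classic dynamic-programming recursion: each trial either fails
        (count unchanged) or succeeds (count shifts up by one)."""
        pmf = [1.0]  # with zero trials, P(K = 0) = 1
        for p in probs:
            nxt = [0.0] * (len(pmf) + 1)
            for k, mass in enumerate(pmf):
                nxt[k] += mass * (1.0 - p)   # trial fails
                nxt[k + 1] += mass * p       # trial succeeds
            pmf = nxt
        return pmf

    def counting_loss(probs, target_count):
        """Negative log-likelihood of the ground-truth count under the
        Poisson-binomial distribution induced by the per-step predictions."""
        pmf = poisson_binomial_pmf(probs)
        return -math.log(max(pmf[target_count], 1e-12))
    ```

    The sparsity-inducing behavior can be seen directly: for a target count of 1 over three timesteps, the sparse prediction [0.9, 0.1, 0.1] assigns likelihood 0.747 to the correct count, whereas the diffuse [1/3, 1/3, 1/3] assigns only 4/9 ≈ 0.444, so the counting loss favors clear-cut 0/1 probabilities without any explicit sparsity penalty.
    
    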

    Cyber Security

    This open access book constitutes the refereed proceedings of the 18th China Annual Conference on Cyber Security, CNCERT 2022, held in Beijing, China, in August 2022. The 17 papers presented were carefully reviewed and selected from 64 submissions. The papers are organized according to the following topical sections: data security; anomaly detection; cryptocurrency; information security; vulnerabilities; mobile internet; threat intelligence; text recognition.

    Introduction: Ways of Machine Seeing

    How do machines, and, in particular, computational technologies, change the way we see the world? This special issue brings together researchers from a wide range of disciplines to explore the entanglement of machines and their ways of seeing from new critical perspectives. This 'editorial' is for a special issue of AI & Society, which includes contributions from: María Jesús Schultz Abarca, Peter Bell, Tobias Blanke, Benjamin Bratton, Claudio Celis Bueno, Kate Crawford, Iain Emsley, Abelardo Gil-Fournier, Daniel Chávez Heras, Vladan Joler, Nicolas Malevé, Lev Manovich, Nicholas Mirzoeff, Perle Møhl, Bruno Moreschi, Fabian Offert, Trevor Paglen, Jussi Parikka, Luciana Parisi, Matteo Pasquinelli, Gabriel Pereira, Carloalberto Treccani, Rebecca Uliasz, and Manuel van der Veen.

    BEING PROFILED: COGITAS ERGO SUM

    Profiling the European citizen: why today's democracy needs to look harder at the negative potential of new technology than at its positive potential

    Multimedia Forensics

    This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users, it has also become an integral part of the most innovative applications in the digital information ecosystem serving various sectors of society, from entertainment to journalism to politics. Undoubtedly, advances in deep learning and computational imaging have contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge to establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensic capabilities relating to media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to provide practitioners, researchers, photo and video enthusiasts, and students with a holistic view of the field.
