Role based behavior analysis
Master's thesis, Information Security, Universidade de Lisboa, Faculdade de Ciências, 2009.

Today, the success of a corporation hinges on its agility and its ability to adapt to fast-changing conditions. Proactive workers and an agile IT/IS infrastructure that can support them are requirements for this success. Unfortunately, this is not always the case. Users' network requirements may not be fully understood, which slows down relocations and reorganizations. Moreover, without a grasp of the real requirements, the IT/IS infrastructure may be used inefficiently, with waste in some areas and deficiencies in others. Finally, enabling proactivity does not mean full, unrestricted access, since that may leave the systems vulnerable to outsider and insider threats. The purpose of the work described in this thesis is to develop a system that can characterize user network behavior. We propose a modular system architecture to extract information from tagged network flows. The process begins by creating user profiles from the users' network-flow information. Then, similar profiles are automatically grouped into clusters, creating role profiles. Finally, the individual profiles are compared against the roles, and the ones that differ significantly are flagged as anomalies for further inspection. Building on this architecture, we propose a model to describe user and role network behavior, as well as visualization methods to quickly inspect all the information contained in the model. The system and model were evaluated using a real dataset from a large telecommunications operator.

The results confirm that the roles accurately capture similar behavior. The anomalies found were also the expected ones, given the underlying population. With the knowledge that the system can extract from the raw data, users' network needs can be better fulfilled and anomalous users flagged for inspection, giving an edge in agility to any company that uses it.
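The profile-to-role comparison the abstract describes can be illustrated with a toy sketch. This is not the thesis's model: it skips the automatic clustering step, assumes role labels and a two-feature profile (both assumptions for illustration), and flags users whose Euclidean distance to their role centroid exceeds the mean distance plus a multiple of the standard deviation.

```python
# Toy sketch of role-based anomaly flagging (illustrative only).
# Assumes role labels are already assigned; the thesis derives roles
# by clustering similar user profiles automatically.
import math
from collections import defaultdict

def centroid(profiles):
    """Component-wise mean of equal-length feature vectors."""
    n = len(profiles)
    return [sum(p[i] for p in profiles) / n for i in range(len(profiles[0]))]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def flag_anomalies(users, threshold=2.0):
    """users: {name: (role, feature_vector)}.
    Returns names whose distance to their role centroid exceeds
    mean + threshold * stdev of all users' distances."""
    by_role = defaultdict(list)
    for role, vec in users.values():
        by_role[role].append(vec)
    roles = {r: centroid(vs) for r, vs in by_role.items()}
    dists = {u: distance(vec, roles[role]) for u, (role, vec) in users.items()}
    mean = sum(dists.values()) / len(dists)
    var = sum((d - mean) ** 2 for d in dists.values()) / len(dists)
    cut = mean + threshold * math.sqrt(var)
    return sorted(u for u, d in dists.items() if d > cut)
```

With profiles such as bytes-per-day and distinct-destinations counts, a user whose traffic sits far from every peer in the same role would be returned for manual inspection.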
A general guide to applying machine learning to computer architecture
The resurgence of machine learning since the late 1990s has been enabled by significant advances in computing performance and by the growth of big data. The ability of these algorithms to detect complex patterns in data, which is extremely difficult to achieve manually, helps to produce effective predictive models. Whilst computer architects have been accelerating the performance of machine learning algorithms with GPUs and custom hardware, there have been few implementations leveraging these algorithms to improve computer system performance. The work that has been conducted, however, has produced considerably promising results.

The purpose of this paper is to serve as a foundational base and guide for future computer architecture research seeking to make use of machine learning models to improve system efficiency. We describe a method that highlights when, why, and how to utilize machine learning models to improve system performance, and we provide a relevant example showcasing the effectiveness of applying machine learning in computer architecture. We describe a process of data generation at every execution quantum, together with parameter engineering. This is followed by a survey of a set of popular machine learning models. We discuss their strengths and weaknesses and provide an evaluation of implementations for the purpose of creating a workload performance predictor for the different core types of an x86 processor. The predictions can then be exploited by a scheduler for heterogeneous processors to improve system throughput. The algorithms of focus are stochastic-gradient-descent-based linear regression, decision trees, random forests, artificial neural networks, and k-nearest neighbors.

This work has been supported by the European Research Council (ERC) Advanced Grant RoMoL (Grant Agreement 321253) and by the Spanish Ministry of Science and Innovation (contract TIN 2015-65316P).
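One of the surveyed model families, k-nearest neighbors, is simple enough to sketch here. This is an illustrative stand-in, not the paper's implementation: it assumes per-quantum counter values as the feature vector and predicts a workload's performance on a target core as the mean performance of the k most similar training samples.

```python
# Illustrative k-NN performance predictor (not the paper's code).
# Features might be per-quantum hardware counters; the target might
# be IPC on a given core type -- both are assumptions here.
import math

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, observed_performance) pairs.
    Returns the mean performance of the k training points closest
    to `query` in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(train, key=lambda pair: dist(pair[0], query))[:k]
    return sum(perf for _, perf in nearest) / k
```

A heterogeneous scheduler could call such a predictor once per core type and migrate the workload to the core with the best predicted throughput, which is the use case the paper targets.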
PynPoint: a modular pipeline architecture for processing and analysis of high-contrast imaging data
The direct detection and characterization of planetary and substellar companions at small angular separations is a rapidly advancing field. Dedicated high-contrast imaging instruments deliver unprecedented sensitivity, enabling detailed insights into the atmospheres of young low-mass companions. In addition, improvements in data reduction and PSF subtraction algorithms are equally relevant for maximizing the scientific yield, both from new and archival data sets. We aim to develop a generic and modular data-reduction pipeline for processing and analysis of high-contrast imaging data obtained with pupil-stabilized observations. The package should be scalable and robust for future implementations and, in particular, well suited for the 3-5 micron wavelength range, where typically (tens of) thousands of frames have to be processed and an accurate subtraction of the thermal background emission is critical. PynPoint is written in Python 2.7 and applies various image processing techniques, as well as statistical tools for analyzing the data, building on open-source Python packages. The current version of PynPoint has evolved from an earlier version that was developed as a PSF subtraction tool based on PCA. The architecture of PynPoint has been redesigned, with the core functionalities decoupled from the pipeline modules. Modules have been implemented for dedicated processing and analysis steps, including background subtraction, frame registration, PSF subtraction, photometric and astrometric measurements, and estimation of detection limits. The pipeline package enables end-to-end data reduction of pupil-stabilized data and supports classical dithering and coronagraphic data sets. As an example, we processed archival VLT/NACO L' and M' data of beta Pic b and reassessed the planet's brightness and position with an MCMC analysis, and we provide a derivation of the photometric error budget.

Comment: 16 pages, 9 figures, accepted for publication in A&A. PynPoint is available at https://github.com/PynPoint/PynPoin
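The PCA-based PSF subtraction at the heart of the pipeline can be sketched in miniature. This is a rough stdlib-only illustration, not PynPoint's implementation: it treats each frame as a flattened pixel vector, finds a single principal component by power iteration, and subtracts each frame's projection onto it; the real pipeline fits many components with optimized linear algebra and derotates and stacks the residuals afterwards.

```python
# Minimal sketch of PCA-based PSF subtraction with one principal
# component (illustrative; PynPoint itself uses a full PCA basis).
import math
import random

def psf_subtract(frames, iters=200):
    """frames: list of flattened images (equal-length float lists).
    Removes the mean frame, then subtracts each frame's projection
    onto the stack's first principal component. Returns residuals."""
    n, m = len(frames), len(frames[0])
    mean = [sum(f[i] for f in frames) / n for i in range(m)]
    x = [[f[i] - mean[i] for i in range(m)] for f in frames]
    # Power iteration for the leading right-singular vector of X.
    random.seed(0)
    v = [random.random() for _ in range(m)]
    for _ in range(iters):
        xv = [sum(row[i] * v[i] for i in range(m)) for row in x]   # X v
        w = [sum(xv[j] * x[j][i] for j in range(n)) for i in range(m)]  # X^T (X v)
        norm = math.sqrt(sum(c * c for c in w)) or 1.0
        v = [c / norm for c in w]
    # Remove each frame's component along the dominant PSF mode.
    out = []
    for row in x:
        coef = sum(row[i] * v[i] for i in range(m))
        out.append([row[i] - coef * v[i] for i in range(m)])
    return out
```

On pupil-stabilized data the quasi-static stellar PSF dominates the leading components, so subtracting them suppresses the star while a rotating companion signal largely survives to be recovered in the derotated stack.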