
    Artificial Intelligence in Public Health Dentistry

    Educational needs must drive the development of the appropriate technology; new tools should not be viewed as toys for enthusiasts, and the human element must never be dismissed. Scientific research will continue to offer exciting technologies and effective treatments. For the profession and the patients it serves to benefit fully from modern science, new knowledge and technologies must be incorporated into the mainstream of dental education. The technologies of modern science have astonished us and intrigued our imagination. Correct diagnosis is the key to a successful clinical practice, and in this regard adequately trained neural networks can be a boon to diagnosticians, especially for conditions with a multifactorial etiology.
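    As a rough illustration of the kind of "adequately trained neural network" the abstract alludes to, the sketch below fits a small feed-forward classifier to hypothetical multifactorial risk scores. The feature names, data, and threshold are invented for illustration and are not from the study.

        # Hypothetical sketch: a small feed-forward network mapping invented
        # risk-factor scores (e.g., plaque index, sugar intake, fluoride exposure,
        # smoking) to a binary diagnostic label. Data are synthetic.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        X = rng.random((200, 4))  # 4 hypothetical risk-factor scores in [0, 1]
        y = (X @ np.array([0.4, 0.3, 0.2, 0.1])
             + 0.1 * rng.standard_normal(200) > 0.5).astype(int)

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
        clf.fit(X, y)

        new_patient = np.array([[0.7, 0.5, 0.3, 0.8]])
        print("Predicted probability of disease:", clf.predict_proba(new_patient)[0, 1])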

    Spatial Big Data Analytics of Influenza Epidemic in Vellore, India.

    The study objective is to develop a big spatial data model to predict the epidemiological impact of influenza in Vellore, India. Large repositories of geospatial and health data provide vital statistics on surveillance and epidemiological metrics, and valuable insight into the spatiotemporal determinants of disease and health. Integrating these big data sources and analytics to assess risk factors and geospatial vulnerability can help develop effective prevention and control strategies for influenza epidemics and optimize the allocation of limited public health resources. We used the spatial epidemiology data of the H1N1 epidemic collected at the National Informatics Center during 2009-2010 in Vellore. We developed an ecological niche model based on geographically weighted regression for predicting influenza epidemics in Vellore, India during 2013-2014. Data on rainfall, temperature, wind speed, humidity and population were included in the geographically weighted regression analysis. We inferred positive correlations of H1N1 influenza prevalence with rainfall and wind speed, and negative correlations with temperature and humidity. We then evaluated the geographically weighted regression model's predictions of the spatial distribution of the influenza epidemic during 2013-2014.
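    To make the method concrete, the sketch below implements the core of geographically weighted regression from first principles: at each location, observations are weighted by a Gaussian kernel of distance and a local weighted least-squares fit is computed. The coordinates, predictors, and bandwidth are synthetic stand-ins, not the Vellore data.

        # Minimal GWR sketch with synthetic data (not the study's dataset).
        import numpy as np

        rng = np.random.default_rng(1)
        n = 50
        coords = rng.uniform(0, 10, size=(n, 2))            # station locations (x, y)
        X = np.column_stack([np.ones(n),                     # intercept
                             rng.random((n, 5))])            # 5 standardized predictors
        y = X @ np.array([1.0, 0.8, -0.6, 0.5, -0.4, 0.3]) + 0.1 * rng.standard_normal(n)

        def gwr_coefficients(coords, X, y, bandwidth=2.0):
            """Fit a weighted least-squares model centered at every location."""
            betas = np.empty((len(y), X.shape[1]))
            for i, c in enumerate(coords):
                d = np.linalg.norm(coords - c, axis=1)       # distances to location i
                w = np.exp(-(d / bandwidth) ** 2)            # Gaussian kernel weights
                W = np.diag(w)
                betas[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
            return betas

        local_betas = gwr_coefficients(coords, X, y)
        print("Coefficient ranges across locations:")
        print(local_betas.min(axis=0))
        print(local_betas.max(axis=0))

    Spatial variation in the fitted coefficients is what distinguishes GWR from a single global regression; the sign of each local coefficient indicates whether a predictor (e.g., rainfall) is positively or negatively associated with prevalence in that neighborhood.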

    A Business Model to Detect Disease Outbreaks

    Introduction: Every year, disease outbreaks such as influenza-like illnesses (ILI) and other contagious illnesses impose substantial costs on public and non-government agencies. Most of these expenses arise from not being prepared to handle such outbreaks, and appropriate preparation reduces them. A system able to recognize these outbreaks can earn income in two ways: first, by selling predictions to government agencies so they can equip and prepare in order to reduce the imposed costs; and second, by selling predictions to pharmaceutical companies to guide them in producing the required drugs when a disease spreads. These predictions can also identify probable markets for those companies. Methods: Both earning methods are considered in the model, and costs and incomes are discussed according to basic business models, especially in the health field. To execute this model, the internet is used both to collect information from doctors and to deliver the prediction service. To ensure the collaboration of doctors in the data collection process, the amount paid to each doctor is proportional to the rate at which they submit patients' information. Customers, in turn, can access outbreak prediction information about a specific illness after a one-time payment or a monthly subscription to the system. All money transferred in this system would move through online credit systems. Results: This business model offers three main values: recognizing disease outbreaks at the right time, identifying contributing factors, and estimating the spreading rate of the disease. Customers are categorized by the value provided and include pharmaceutical companies and drug importers, the government, insurance companies, universities and research centers. Considering these various markets, the model has an ROI of 0.5, meaning the investment is recovered in six months. Conclusion: According to the results, the business model developed in this study has fair value and is feasible and suitable for the web. The model develops a medical information network and proper marketing, earns good profits, and its most critical resource is the outbreak detection algorithm, which must be properly constructed and used.
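    The payback claim is simple arithmetic; the sketch below shows the calculation with purely hypothetical investment and income figures (the study does not report these numbers), taking the stated six-month recovery at face value.

        # Illustrative payback-period calculation; all figures are hypothetical.
        initial_investment = 120_000      # assumed setup cost of the prediction system
        monthly_net_income = 20_000       # assumed income from subscriptions and data sales

        payback_months = initial_investment / monthly_net_income
        print(f"Payback period: {payback_months:.0f} months")   # -> 6 months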

    A Bayesian Outbreak Detection Method for Influenza-Like Illness


    Bayesian prediction of an epidemic curve

    An epidemic curve is a graph in which the number of new cases of an outbreak disease is plotted against time. Epidemic curves are ordinarily constructed after the disease outbreak is over. However, a good estimate of the epidemic curve early in an outbreak would be invaluable to health care officials. Currently, techniques for predicting the severity of an outbreak are very limited. As far as predicting the number of future cases, epidemiologists ordinarily make an educated guess as to how many people might become affected. We develop a model for estimating an epidemic curve early in an outbreak, and we show results of experiments testing its accuracy.
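    As a minimal sketch of the general idea (not the authors' specific model), the code below forms a Bayesian estimate of an epidemic curve from a few early daily counts, assuming early-phase exponential growth, a Poisson observation model, and a flat grid prior on the growth rate. The case counts are invented.

        # Bayesian early estimate of an epidemic curve under assumed exponential growth.
        import numpy as np

        observed = np.array([2, 3, 5, 7, 11])          # hypothetical daily counts, days 0-4
        days_obs = np.arange(len(observed))
        growth_rates = np.linspace(0.01, 1.0, 200)     # grid over the daily growth rate r
        c0 = observed[0]                               # anchor the curve at the first count

        # Poisson log-likelihood (up to a constant) for each candidate growth rate.
        log_post = np.zeros_like(growth_rates)
        for j, r in enumerate(growth_rates):
            mu = c0 * np.exp(r * days_obs)
            log_post[j] = np.sum(observed * np.log(mu) - mu)

        post = np.exp(log_post - log_post.max())
        post /= post.sum()

        # Posterior-mean prediction of the curve over the next 10 days.
        future_days = np.arange(15)
        predicted = np.array([c0 * np.exp(r * future_days) for r in growth_rates])
        curve_estimate = (post[:, None] * predicted).sum(axis=0)
        print(np.round(curve_estimate, 1))

    The point of the Bayesian treatment is that the prediction comes with a full posterior over growth rates, so uncertainty about the curve's eventual severity can be reported alongside the point estimate.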

    When Gossip is Good: Distributed Probabilistic Inference for Detection of Slow Network Intrusions

    Intrusion attempts due to self-propagating code are becoming an increasingly urgent problem, in part due to the homogeneous makeup of the internet. Recent advances in anomaly-based intrusion detection systems (IDSs) have made use of the quickly spreading nature of these attacks to identify them with high sensitivity and at low false positive (FP) rates. However, slowly propagating attacks are much more difficult to detect because they are cloaked under the veil of normal network traffic, yet they can be just as dangerous due to their exponential spread pattern. We extend the idea of using collaborative IDSs to corroborate the likelihood of attack by imbuing end hosts with probabilistic graphical models and using random messaging to gossip state among peer detectors. We show that such a system is able to boost a weak anomaly detector D to detect an order-of-magnitude slower worm, at false positive rates of less than a few per week, than would be possible using D alone at the end host or at a network aggregation point. We show that this general architecture is scalable in the sense that it achieves a fixed absolute false positive rate as the network size grows, spreads communication bandwidth uniformly throughout the network, and makes use of the increased computational power of a distributed system. We argue that using probabilistic models provides more robust detections than previous collaborative counting schemes and allows the system to account for heterogeneous detectors in a principled fashion.

    Intrusion Detection

    Worms pose an increasingly serious threat to network security. With known worms estimated to reach peak speeds of 23K connections per second, and theoretical analyses citing higher speeds, the entire Internet risks infection within tens of minutes. As the methods to detect worms become increasingly sophisticated, worm designers react by making worms harder to detect and stop. Worms released over the past year have tended to the extremes: becoming either much faster to allow rapid spread, or much slower to prevent detection. The latter approach places an increasing burden on detection methods to effectively pick out and isolate worm traffic from the baseline created by normal traffic seen at a host. While the slower rate does offer some respite to the network operator(s) (if detected, the worms can be contained with relatively little collateral damage), detection is extremely challenging because slow worms can hide under the veil of normal traffic. Although locally a worm may be propagating very slowly, if it can manage to reproduce more than once before being detected on a local host, it will still grow at an exponential rate. Yet another challenge in dealing with worms is that individual entities can only see a partial picture of the larger network-wide behavior of the worm(s). IDSs deployed in select networks might not see any worm traffic for a long time and perhaps see it only when it is too late. Collaboration is seen as a way to remedy this; systems that allow multiple IDSs to share information have been shown to provide greater "coverage" in detection.

    In this paper, we describe an approach to host-based IDS using distributed probabilistic inference. The starting point in our work is a set of weak host-based IDSs, referred to as local detectors (LDs), distributed throughout the network. We allow the hosts to collaborate and combine their weak information in a novel way to mitigate the effect of the high false positive rate.
    LDs raise alarms at a relatively high frequency whenever they detect even a remotely plausible anomaly. Alarms spreading in the network are aggregated by global detectors (GDs) to determine whether the network, as a whole, is in an anomalous state, e.g., under attack. A similar system of distributed Bayesian-network-based intrusion detection has been proposed in related work.

    Our main contribution in this paper is a probabilistic framework that aggregates (local) beliefs to perform network-wide inference. Our primary findings are:
    • We can detect an order-of-magnitude slower worm than could be detected by using LDs alone at a FP rate of one per week [2].
    • Our framework shows good scalability properties in the sense that we achieve a fixed false positive rate for the system, independent of the network size.
    • Our probabilistic model outperforms previous collaborative counting schemes and allows the system to account for heterogeneous detectors in a principled fashion.

    While the methods we describe are quite general and applicable in a wide variety of network settings, our empirical results operate over a subset of the Intel enterprise network. In the following sections, we describe the architecture of our system, discussing the advantages and disadvantages of the many design points, and we present empirical results that demonstrate several of the advantages of the system we propose.

    [1] For instance, a system may employ a range of detectors, or some detectors may be more trusted than others.
    [2] By contrast, the Intel network operations center typically investigates 2-3 false positives each day.

    Architectural Model

    In answer to the challenges posed in the previous section, we propose a system composed of three primary subcomponents. The LDs live at the end hosts and are designed to be weak but general classifiers which collect information and make "noisy" conclusions about anomalies at the host level. This design serves several purposes:
    1. Analysis of network traffic at the host level compares the weak signal to a much smaller background noise level, and so can boost the signal-to-noise ratio by orders of magnitude compared to an IDS that operates within the network.
    2. Host-based detectors can make use of a richer set of data, possibly using application data from the host as input into the local classifier.
    3. The system adds computational power to the detection problem by massively distributing computations across the end hosts.

    An important design decision of this system is where to place the GDs in the network, and this decision goes hand in hand with the design of the ISS. There are at least two possibilities, including centralized placement. The ISS uses a network protocol to communicate state between the LDs and the GDs. For the purposes of this paper we assume that each LD communicates its state by beaconing to a random set of GDs at regularly spaced epochs. There are many important and interesting research questions about what an ideal ISS should look like. For example, messages could be aggregated from host to host to allow exponential spreading of information. However, it is beyond the scope of this paper to deal with this issue in depth. Here we assume that no message aggregation is taking place; each LD relays its own state to M hosts, chosen at random each epoch. In the following sections, we examine in detail the LDs and GDs used by our system.
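    The sketch below is a simplified stand-in for the beaconing and aggregation just described, not the paper's exact graphical model: each LD gossips its alarm bit to M randomly chosen GDs per epoch, and each GD combines the reports it receives with a naive-Bayes log-likelihood ratio. The detector counts, TPR/FPR values, and alarm rate are assumptions for illustration.

        # Hedged sketch of gossip-based belief aggregation across local detectors.
        import math
        import random

        N_LDS, M_TARGETS, N_GDS = 100, 3, 5
        LD_TPR, LD_FPR = 0.30, 0.05      # assumed per-epoch detection / false-alarm rates

        def gossip_epoch(ld_alarms):
            """Each LD beacons its current alarm bit to M randomly chosen GDs."""
            inboxes = [[] for _ in range(N_GDS)]
            for alarm in ld_alarms:
                for gd in random.sample(range(N_GDS), M_TARGETS):
                    inboxes[gd].append(alarm)
            return inboxes

        def gd_log_likelihood_ratio(alarms):
            """Accumulate evidence for 'network under attack' versus 'normal'."""
            llr = 0.0
            for a in alarms:
                if a:
                    llr += math.log(LD_TPR / LD_FPR)
                else:
                    llr += math.log((1 - LD_TPR) / (1 - LD_FPR))
            return llr

        # One hypothetical epoch: 10% of hosts raise (possibly spurious) alarms.
        ld_alarms = [random.random() < 0.10 for _ in range(N_LDS)]
        for gd_id, inbox in enumerate(gossip_epoch(ld_alarms)):
            print(f"GD {gd_id}: {sum(inbox)} alarms, LLR = {gd_log_likelihood_ratio(inbox):.1f}")

    A GD would flag the network as anomalous when its accumulated log-likelihood ratio crosses a threshold chosen for the desired system-wide false positive rate; because the evidence is probabilistic rather than a raw count, detectors with different reliabilities can contribute with different weights.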
    The Local Detectors

    For the purposes of this paper, we define an LD as a binary classifier that sits on the local host and uses information about the local state or local traffic patterns to classify the state of the host as normal or abnormal. We assume that the LDs are weak in the sense that they may have a high false-positive rate, but general, in that they are likely to fire for a broad range of anomalous behavior. In the context of intrusion-detection systems, because of the high volume of traffic in modern networks, what may appear to be a relatively small FP rate could by itself result in an unacceptable level of interruptions, so such a detector would be classified as weak.

    The LD implementation we use in this paper is a heuristic-based detector that analyzes outgoing traffic and counts the number of new outgoing connections to unique destination addresses and outgoing ports; alerts are raised when this number crosses a configured threshold.

    [Figure: background traffic connection rate distribution.]
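    The sketch below illustrates this heuristic local detector: it tracks connections to previously unseen (destination address, port) pairs within a sliding window and raises an alarm when a configured threshold is crossed. The window length and threshold are illustrative assumptions, not values reported in the paper.

        # Sliding-window "new destination" counter, as a minimal LD sketch.
        from collections import deque

        class LocalDetector:
            def __init__(self, window_seconds=60, threshold=20):
                self.window_seconds = window_seconds
                self.threshold = threshold
                self.seen = set()        # (dst_addr, dst_port) pairs seen so far
                self.recent = deque()    # timestamps of recent "new destination" events

            def observe(self, timestamp, dst_addr, dst_port):
                """Feed one outgoing connection; return True if an alarm is raised."""
                key = (dst_addr, dst_port)
                if key not in self.seen:
                    self.seen.add(key)
                    self.recent.append(timestamp)
                # Drop events that fell out of the sliding window.
                while self.recent and timestamp - self.recent[0] > self.window_seconds:
                    self.recent.popleft()
                return len(self.recent) > self.threshold

        ld = LocalDetector()
        alarm = ld.observe(12.0, "10.0.0.7", 445)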