7 research outputs found

    Flexible Transaction Dependencies in Database Systems

    Full text link
    Numerous extended transaction models have been proposed in the literature to overcome the limitations of the traditional transaction model for advanced applications characterized by their long durations, cooperation between activities, and access to multiple databases (such as CAD/CAM and office automation). However, most of these extended models have been proposed with specific applications in mind and almost always fail to support applications with slightly different requirements.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/44828/1/10619_2004_Article_270346.pd

    Dynamic Management of Distributed Machine Learning Problems

    Get PDF
    Machine Learning (ML) and Artificial Intelligence (AI) are two closely related terms. Artificial Intelligence is a discipline that seeks to create machines with the ability to mimic human cognitive skills, such as learning, reasoning, perception, and decision making. Machine Learning is one of the AI techniques that allows machines to learn from data without being explicitly programmed. The exponential growth of data in recent decades has been one of the main driving factors behind the advancement of AI and Machine Learning. Companies and organizations collect data in increasingly large volumes, including financial transaction information, medical records, IoT sensor data, and more. This data is crucial for driving innovation and progress, but it can be too complex and difficult to analyze manually. This is where Machine Learning comes in, allowing machines to learn and automate the analysis of large datasets. This approach reduces the time and effort required to perform complex analyses, and it provides valuable insights that can be used to improve business operations, increase efficiency, and make more informed decisions. As data continues to grow in size and complexity, new approaches and systems are needed to handle it efficiently. One way this is being done is through the development of more advanced Machine Learning techniques, such as deep neural networks and reinforcement learning algorithms, which can more effectively handle larger and more complex datasets. In addition, the use of technologies such as cloud computing and distributed data processing can help reduce the consumption of computational resources and make data analysis more scalable. The proposed solution therefore addresses some of the challenges that have emerged with the increase in data volume: a distributed machine learning system that runs on a Hadoop cluster and takes advantage of its replication, balancing, and block-distribution capabilities. It allows models to be trained in a distributed manner following the principle of data locality, and it can change parts of the model through an optimization module, enabling the model to evolve over time as new data arrives.
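    The abstract describes the system only at this level; the following is a minimal sketch of the data-locality idea, assuming a map/merge style of training in which each data block is fitted where it is stored and only the small fitted parameters travel over the network to be combined. The tiny linear model, the averaging merge, and all names below are illustrative assumptions, not the thesis' implementation.

        # Minimal data-locality sketch (assumed, not the actual system):
        # fit a per-block model where the block lives, then merge the results.
        from statistics import mean

        def train_on_block(block: list[tuple[float, float]]) -> float:
            """Fit a tiny model (a slope through the origin) on one local data block."""
            num = sum(x * y for x, y in block)
            den = sum(x * x for x, _ in block)
            return num / den if den else 0.0

        def merge(partial_models: list[float]) -> float:
            """Combine the per-block models into a global one (here: plain averaging)."""
            return mean(partial_models)

        # On a real cluster each block would sit on a different node; here they are lists.
        blocks = [
            [(1.0, 2.1), (2.0, 3.9)],
            [(3.0, 6.2), (4.0, 8.1)],
        ]
        global_slope = merge([train_on_block(b) for b in blocks])
        print(round(global_slope, 2))  # roughly 2.0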

    Correct Web Service Transactions in the Presence of Malicious and Misbehaving Transactions

    Get PDF
    Concurrent database transactions within a web service environment can cause a variety of problems without the proper concurrency control mechanisms in place. A few of these problems involve data integrity issues, deadlock, and efficiency issues. Today’s industry-standard solutions take a reactive approach rather than proactively preventing these problems from happening. We deliver a solution, based on prediction-based scheduling, to ensure consistency while keeping execution time the same as or faster than current industry solutions. The first part of this solution involves prototyping and formally proving a prediction-based scheduler. The prediction-based scheduler leverages a prediction-based metric that promotes transactions with a high performance metric. This performance metric is based on the transaction’s likelihood to commit and its efficiency within the system. We can then predict the outcome of the transaction based on the metric and apply customized lock behaviors to address consistency issues in current web service environments. We have formally proven that the solution increases consistency among web service transactions without a performance degradation. The simulation was developed using a multi-threaded approach to simulate concurrent transactions. Our empirical results show that the solution performs similarly to industry solutions with the added benefit of ensured consistency. This work has been published in IEEE Transactions on Services Computing. The second part of the solution involves building the prediction-based metric mentioned previously. In the initial solution we assumed that the categorization of transactions is provided in advance. To incorporate the ability to dynamically adjust transaction reputations, we extended the four-category solution to a dynamic reputation score. The attributes used in the reputation score are system abort ranking, user abort ranking, efficiency ranking, and commit ranking. With these four attributes we were able to establish a dynamic dominance structure that allows a transaction to promote or demote itself based on its performance within the system. This work has been submitted to ACM Transactions on Database Systems and is awaiting review. Both phases provide a complete solution for prediction-based transaction scheduling that provides dynamic categorization regardless of the transactional environment. Future work on this system would involve extending the prediction-based solution to a multi-level secure database with an added dimension. Our goal is to increase the concurrency of multi-level secure transactions without creating a covert channel. The added dimension provides a security classification in addition to the attributes for dynamic reputation, allowing transactions to establish dominance. Our reputation score would provide a cover story for timing differences between transactions of different security levels, allowing for a more robust scheduling algorithm. This would allow high-security transactions to gain priority over low-security transactions without creating a covert timing channel.
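    The abstract names the four attributes of the dynamic reputation score but not how they are combined; the sketch below shows one way such a score could drive scheduling, assuming the rankings are normalized to [0, 1] and combined with a weighted sum. The weights, the class names, and the selection rule are illustrative assumptions rather than the published metric.

        # Illustrative reputation-based scheduling sketch; weights and scales are assumptions.
        from dataclasses import dataclass

        @dataclass
        class TxnStats:
            # The four attributes named in the abstract, each assumed to be in [0, 1],
            # where higher means better behaviour.
            system_abort_rank: float
            user_abort_rank: float
            efficiency_rank: float
            commit_rank: float

        def reputation(s: TxnStats, weights=(0.25, 0.25, 0.25, 0.25)) -> float:
            """Weighted sum of the four rankings; equal weights are only a placeholder."""
            parts = (s.system_abort_rank, s.user_abort_rank, s.efficiency_rank, s.commit_rank)
            return sum(w * p for w, p in zip(weights, parts))

        def pick_next(waiting: dict[str, TxnStats]) -> str:
            """Grant the next contested lock to the highest-reputation transaction,
            so well-behaved transactions effectively promote themselves over time."""
            return max(waiting, key=lambda tid: reputation(waiting[tid]))

        # Example: a transaction that usually commits and runs efficiently wins the lock.
        queue = {"T1": TxnStats(0.9, 0.8, 0.7, 0.9), "T2": TxnStats(0.4, 0.5, 0.3, 0.2)}
        assert pick_next(queue) == "T1"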

    An Adaptive Policy for Improved Timeliness in Secure Database Systems

    No full text
    Database systems for real-time applications must satisfy timing constraints associated with transactions, in addition to maintaining data consistency. In addition to real-time requirements, security is usually required in many applications. Multilevel security requirements introduce a new dimension to transaction processing in real-time database systems. In this paper, we argue that, because of the complexities involved, trade-offs need to be made between security and timeliness. We first describe a secure two-phase locking protocol. The protocol is then modified to support an adaptive method of trading off security for timeliness, depending on the current state of the system. The performance of the Adaptive 2PL protocol is evaluated for a spectrum of security-factor values ranging from fully secure (1.0) right up to fully real-time (0.0).
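    As a rough illustration of the adaptive trade-off, the sketch below resolves a lock conflict between transactions at different security levels: with a security factor of 1.0 a lower-level requester is never delayed by a higher-level holder (the fully secure behaviour), while lower factors increasingly tolerate that delay for the sake of timeliness. The probabilistic decision rule and the abort-the-holder action are assumptions made for the sketch, not the paper's exact protocol.

        import random

        def resolve_conflict(holder_level: int, requester_level: int,
                             security_factor: float) -> str:
            """Decide what happens when `requester` asks for a lock held by `holder`.
            Levels are integers, higher = more secret; security_factor is in [0, 1]."""
            if holder_level <= requester_level:
                # No downward information flow is possible: ordinary 2PL blocking is safe.
                return "block"
            # Sensitive case: a high holder could delay a low requester and leak timing.
            if random.random() < security_factor:
                return "abort_holder"  # secure choice: low is never delayed by high
            return "block"             # real-time choice: accept the covert-channel risk

        # At a security factor of 1.0 the secure behaviour is always chosen.
        assert resolve_conflict(holder_level=2, requester_level=1, security_factor=1.0) == "abort_holder"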

    Integrating Security and Real-Time Requirements Using Covert Channel Capacity

    No full text
    Database systems for real-time applications must satisfy timing constraints associated with transactions in addition to maintaining data consistency. In addition to real-time requirements, security is usually required in many applications. Multilevel security requirements introduce a new dimension to transaction processing in real-time database systems. In this paper, we argue that, due to the conflicting goals of each requirement, trade-offs need to be made between security and timeliness. We first define mutual information, a measure of the degree to which security is being satisfied by a system. A secure two-phase locking protocol is then described, and a scheme is proposed to allow partial violations of security for improved timeliness. Analytical expressions for the mutual information of the resultant covert channel are derived, and a feedback control scheme is proposed that does not allow the mutual information to exceed a specified upper bound. Results obtained through simulation experiments, showing the efficacy of the scheme, are also discussed.
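    The abstract does not reproduce the analytical expressions, but the underlying quantity is the standard mutual information I(X;Y) = H(Y) - H(Y|X) of the covert channel. The sketch below computes it for a binary symmetric channel with a uniform input and shows one assumed shape of a feedback rule that tightens security whenever the estimate exceeds the allowed bound; the concrete channel model and the step size are illustrative, not the expressions derived in the paper.

        import math

        def h2(p: float) -> float:
            """Binary entropy in bits."""
            if p <= 0.0 or p >= 1.0:
                return 0.0
            return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

        def bsc_mutual_information(crossover: float) -> float:
            """Mutual information (bits per use) of a binary symmetric channel with a
            uniform input -- a stand-in for the paper's channel-specific expressions."""
            return 1.0 - h2(crossover)

        def adjust_security_factor(sf: float, measured_mi: float, bound: float,
                                   step: float = 0.05) -> float:
            """Illustrative feedback rule: tighten security when the estimated
            covert-channel capacity exceeds the bound, relax it otherwise."""
            if measured_mi > bound:
                return min(1.0, sf + step)
            return max(0.0, sf - step)

        # Example: a noisy channel (crossover 0.4) leaks far less than a clean one.
        print(bsc_mutual_information(0.4))   # about 0.029 bits per use
        print(bsc_mutual_information(0.01))  # about 0.919 bits per use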