Over the last century, the new and exciting field of Artificial Intelligence has emerged. Alongside its promises, fears about the uncertain future development of this area began to grow. Will it become sentient? Will it be smarter than us? Will it come to "understand" our uselessness? Will it decide that we are no longer essential, and consequently choose to end our species?
These and other questions have emerged, nourished by apprehension over the possible dangers we might have to face.
A number of solutions have been proposed to constrain these yet-to-exist beings, but none has gained universal acceptance, leaving this primordial fear intact.
In this paper a novel point of view is considered: instead of limiting and preventing the worst possible outcomes, an analysis of the human psyche is performed. This work of introspection grants us the power of imitation: once we understand how our psyche is divided, or "stratified", and how we manage, socially speaking, to live together and coexist, we can attempt to replicate this abstract structure inside the "mind" of a hypothetical new artificial being. Using, and possibly improving, this set of abstract mental tools and intrinsic barriers in the creation of such a being should, hypothetically, remove the chance, and the fear, of future repercussions at their original root.