Logic

The Loop Logic

I recall asking the math teacher in ninth grade, "How do you know that 1 is 1?"

"That's an axiom," he said. "An axiom cannot be proven, and being an axiom, it does not need proof by definition - 1 is 1 because there is an axiomatic agreement with the assertion."

"That's not a satisfactory answer," I said after some thought. I wanted to understand what is beyond axioms. He kicked me out of class for the rest of the year, apparently hoping that would be a satisfactory answer.

Since Euclid, mathematics has been built on axiomatic ground. When we start with something defined and not the process of definition, then we need grounding principles, root assumptions upon which we can elaborate. When we attempt to define an object's "true nature" independent of our involvement in the process of its definition, when we try to understand objective reality unaffected and unbiased by our interaction with it, then we necessarily have to start with an axiom that in essence asserts, "That's how it is. Period."

The very structure that makes an axiom possible is full of inconsistencies. Let it suffice to say here that the structure of an axiom is a loop, but its purpose is to avoid a loop by rendering a linear evolution. The dictionary definition of the word "Axiom" is "self-evident truth," which is a self-referential statement, referring back to itself, a loop. An axiom is supposed to be the basic unfaltering foundation on which something can be built, a total beginning facing one direction of evolution, but the problem with that assumption is that another axiom is needed to state that our original axiom is an axiom. That is, the statement that only one straight line passes through two points is not obviously a basic truth, and so, to be an absolute beginning of a process, it has to be stated that this is an axiom. So, even if the statement seems true, that does not mean that it is self-evident by default. Likewise, a statement purporting to describe the "true nature" of a phenomenon is actually an intuitive perception needing reinforcement that it is indeed an objective truth (and therefore impossible to prove). Because it is unprovable, and because the certainty of the intuition that created it seems to exempt it from proof, such an axiom is stated in a void. The information provided by the axiom relates to phenomena, but having been stated purely from intuition or observation, it came to light in a logical void. A structure in a void is symmetric; because it is in a void, it has no specific direction or preference. Yet an axiom only has meaning if it has one direction of evolution, being the fundament of further logical conclusions. So the mere idea of a self-evident truth is full of contradictions.

Later, philosophers did question whether we could know the "true nature" of anything objectively, and based upon their observations, they assumed that we could not. They reached the conclusion that a logical structure was needed to replace the intuitive axiomatic statements that described the "true nature" of phenomena, and they attempted to create such a logical structure by defining a final set of axioms wherein each axiom in the set was defined by the others while each axiom on its own was meaningless. The hope was that this set would become the logical fundament of geometry and mathematics, consisting of purely logical statements containing no information whatsoever about phenomena. These attempts failed because no logical structure consisting of empty symbols (p, q, r, etc.) could be built without referring to informative ideas, such as bigger, equal, and so on (in short, the relations connecting the logical objects did not derive from the logical objects within the set). Also, such a structure could not be both totally known and consistent. These and other such bugs ended the endeavors to build an axiomatic system that could be the true base of an objective reality.

Axioms were utilized as basic assumptions of certain models and theories. They were the hidden bedrock of the search for THE TRUTH, for simplicity in the sense of the term as something single, or the search for the One, which becomes the many. However, explaining Creation with axioms would be endeavoring to achieve an objective explanation of the start of the universe independently of the one doing the explanation. If so employed, axioms would be self-defeating because one could always question what was there before the basic axiom, or how the basic axiom came about. That's probably why there is no axiomatic theory of Creation.

Today science accepts that there might be several right theories or models that could describe Creation, but the basic aspiration to reach totality through finding the common denominator, the ONE TRUTH, points toward the same paradigm, the same framework of linear thinking. That means, if I have many right descriptions, then these are necessarily secondary descriptions and there must be a meta-theory that includes them all. This way of thinking regards the many different right models as members in a class, and the search goes on to define the class of all classes, the theory of everything that describes it all and which starts with something single.

SHET's teaching is not axiomatic. An axiomatic theory requires consistency of its structure. Before the 19th century, consistency was not an issue because axioms delineated indubitable truths, so they could not be inconsistent. It was assumed that two contradictory statements could not be simultaneously true, so if the axioms were true, they could not have led to contradictory conclusions. But then new geometries were created that were different from and incompatible with the Euclidean axioms. For instance, these new geometries were not confined to spaces that we are familiar with, as Euclidean geometry is. Dealing with unfamiliar spaces required tools other than intuition and self-evidence (the ingredients of axioms). This led to the realization that the axioms of geometry, as well as the axioms of any discipline, could not and should not be regarded as true, but as postulated assumptions from which theorems could be derived. That's how the truth-value implied by the self-evidence generated by observation of phenomena was turned into abstract logical reasoning not dealing with phenomena, but with the validity of mathematical inference in terms of expressions contained in the postulates. "If 'a,' then 'a'" is such an expression, where 'a' could be anything and yet it does not have to symbolize anything. Such a discipline is not concerned with whether its conclusions are true or false, but whether its conclusions are in fact the necessary logical consequences of its basic postulates.
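
To make the shift from truth to validity concrete, here is a minimal Python sketch of my own (the helper names implies and is_valid are mine, not part of any formal system mentioned here): it checks whether a formula such as "if 'a,' then 'a'" holds under every assignment of truth values, without 'a' symbolizing anything at all.

from itertools import product

def implies(p, q):
    # material implication: "if p then q" fails only when p is true and q is false
    return (not p) or q

def is_valid(formula, num_vars):
    # valid = true under every assignment of truth values to its variables,
    # regardless of what (if anything) those variables symbolize
    return all(formula(*values) for values in product([True, False], repeat=num_vars))

print(is_valid(lambda a: implies(a, a), 1))                             # True: "if 'a,' then 'a'"
print(is_valid(lambda a, b: implies(implies(a, b) and a, b), 2))        # True: modus ponens as a pure form
print(is_valid(lambda a, b: implies(implies(a, b), implies(b, a)), 2))  # False: not a necessary consequence

The check says nothing about whether 'a' is true of the world; it only asks whether the conclusion follows necessarily from the form of the expression, which is exactly the shift described above.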

The abstraction of mathematics in the 19th century caused a problem the Greeks did not have: how could it be ascertained that an axiomatic system based on certain postulates would not lead to contradictory theorems? In other words, what would assure that the system was internally consistent? When analysis of the truth-value of postulates becomes irrelevant, then other tools are needed for establishing consistency, for if the system is inconsistent, then it loses its validity. The solution found for this problem was to translate the abstract terms into a model, which must be true. A model in this case is something that conforms with intuition and self-evidence or is provable. So now we are back at square one, dealing with the truth-value of axioms. The problem with this is that the truth of these models is itself only self-evident, so the abstract set of postulates that constitutes a system is only consistent if the models into which it is being translated are consistent. This approach upholds the tradition reflected in the following assertion: if it is true, then it is consistent.

"True" means factual, observable. True is a relation of equivalence with something perceived - which is not a very reliable logical tool of consistency.[1] Although success in proving consistency was achieved by transforming the problem of proof into another domain (for instance, by showing the consistency of Euclidean axioms through an algebraic model), this is not considered an absolute proof because it does not prove that the algebra is consistent. Furthermore, observing any number of true facts in agreement with the axioms is no guarantee that there will be no future fact that might contradict those found earlier, and consequently, ruin the consistency of the system. So any axiomatic system relying on the truth-value of its models for consistency is incomplete because most models are infinite, and consequently, carry insufficient truth-value.

So here we have two serious problems:

1) If the model is infinite, then no matter how many of its elements agree with the system, it is impossible to check them all (see the sketch below).

2) Using a model to prove consistency is only a relative proof, because it assumes the consistency of some other system.
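
A tiny numerical illustration of the first problem, using Euler's well-known polynomial n^2 + n + 41 (the sketch and its helper is_prime are mine, not part of the text's argument): any finite run of confirming instances over an infinite domain proves nothing about the cases not yet checked.

def is_prime(n):
    # trial division, sufficient for small numbers
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# n^2 + n + 41 yields a prime for every n from 0 to 39 ...
print(all(is_prime(n * n + n + 41) for n in range(40)))   # True: 40 confirming instances

# ... and fails at the very next value, since 40^2 + 40 + 41 = 41 * 41
print(is_prime(40 * 40 + 40 + 41))                        # False

Forty true facts in agreement with the rule were no guarantee against the forty-first; an infinite model can never be exhausted this way.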

The purpose of axiomatizing a branch of mathematics, or any system, was to achieve the dream of laying down a simple set of propositions from which all the true theorems of that field could be derived. According to this line of reasoning, the set of propositions should be finite, the theorems derived should be consistent with the propositions, and the consistency of such a system should be proved within the system. It was hoped that an absolute proof of consistency would be attained if both the propositions (sentences stating something) and the logical reasoning (if-then, or, etc., which is the dynamics of inference) could be translated into empty, meaningless signs. By "empty" and "meaningless" they meant total transparency; in other words, these signs would only contain their explicit symbolization. If nothing was left implicit, these theoreticians reasoned, then the coveted absolute proof of consistency could be established.

Why was it so important that the abstract expressions should be solely explicit and well defined? How is this demand connected to the proof of consistency? Simply stated, the implicit, hidden relations are hidden because they are indefinite. Indefinite means not defined or indefinable. If logical inference leads to a theorem that is not well defined, then it might be indefinite or undecidable, which means that it could be "a" or "not a" - one of which would stand in contradiction to the system from which it was derived. That is why they sought to express every step along the way to the result and the results themselves as totally definite.

Translating sentences into signs was no problem. To translate the logical reasoning, however, it had to be broken down into well defined relations, such as "identical," "not-identical", etc., which then could be translated into signs on a one-to-one basis. Of course, the purpose of representing the dynamics of logical inference by signs was to exclude the possibility of anything implicit and to only offer explicit expressions. In the language I am using to explain the Holophanic loop, replacing the dynamics with "empty" signs would be trying to turn "Structure" into "Significance". Structure is the dynamics of the process of definition, whereas significance is the defined entity, the stills. This cannot work, because for something to be well defined, it needs to have structure, which must be implicit. To illustrate my claim, imagine that I say, "There is a cup of coffee on the table." The sentence implicitly includes my perception that there was a cup of coffee on the table. If I say, "I perceived that there was a cup of coffee on the table" (trying to include what was implicit before), then this implicitly includes that I perceived my perception that there was a cup of coffee on the table, and so on. My perception then is the process of definition, the dynamic structure that brought about the awareness of there being a cup of coffee on the table.
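
The cup-of-coffee regress can be mimicked mechanically. In this toy Python sketch of my own (the function make_explicit is a made-up name), each pass wraps the statement in an explicit report of perceiving it, and every wrapping leaves a fresh act of perceiving implicit:

def make_explicit(statement):
    # report the perceiving, trying to pull the implicit structure into the open
    return "I perceived that " + statement

statement = "there is a cup of coffee on the table"
for _ in range(3):
    statement = make_explicit(statement)
    print(statement)

# Each pass prints a longer sentence, yet the act of perceiving the new sentence
# remains implicit, so the regress never closes: the process of definition (structure)
# is never exhausted by the defined sentences (significance).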

Stating that SHET's teaching is not axiomatic means that he proposes a logical system consisting of propositions (significance) and logical inference (structure) wherein these are co-dependent, which means that structure only gains meaning when there is a realization (significance) and there can only be significance if it has structure. The initial framework of SHET's logic is the dynamics of paradoxes. Such a system gains consistency if and only if it is based on the structure of a "Paradox" (inconsistency). This new paradigm is elucidated in my book, Holophany, The Loop of Creation.

In Holophany, consistency is demanded from the non-linear dynamics that stabilizes defined entities, whereas conventional logic assesses consistency only between the defined propositions and the inference. Of course, the latter is based on consistency being truth, and on truth being a matter of well-defined (logical) objects, so that the relation between these objects renders statements either true or false.

The 19th century aspiration to consistency was proved untenable by Gödel (1906-1978) in 1931. He demonstrated that no consistent set of axioms rich enough to contain number theory can be complete: some of its statements remain undecidable, and such a system cannot prove its own consistency from within. The loop logic is essentially different from Gödel's theorem. Including the indefinite yields a language of becoming and dynamics, which are implicitly part of the logical structure and cannot be avoided - so the 19th century dream of consistency a priori had to fail.

This dynamics, the structure, the process of definition is a complex loop of relations that eventually define each other. These are the parameters by which any defined entity gains existence. It could be thought - wrongly - that this complex loop of mutually defining relations amounts to saying that there is no basic word, no self-evident truth, on which all other words are built, because any word can be defined by other words, and these other words can be defined by still other words, and so on until we cover the whole gamut of definitions of all words. So the word we intended to define in the first place is defining those words that define the word we intended to define. This kind of description is not the loop that I am discussing. The portrayal of one word being defined by other words that are defined by other words delineates significance, not the process of definition. Although words define each other, they are all defined, whereas the process of definition deals with defining the indefinite.
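
To see why that picture stays within significance, consider a toy dictionary in Python (my own sketch with made-up entries): every word is defined by other words that are themselves already defined, and following the definitions only circles among finished entries.

# A made-up miniature dictionary: every word is defined only by other words in it.
definitions = {
    "large": ["big"],
    "big":   ["great", "large"],
    "great": ["big"],
}

# Follow the definitions starting from any word; we never reach a "basic word"
# that is not itself defined, we only circle among already-defined entries.
word, path = "large", []
for _ in range(6):
    path.append(word)
    word = definitions[word][0]
print(" -> ".join(path))   # large -> big -> great -> big -> great -> big

Everything in this circle is already defined; nothing in it defines the indefinite, which is why it is not the loop being described here.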

In the Holophanic loop every process is generated by other processes, every process is defined by other processes. The totality of this inter-defining activity produces discrete processes that aim at defining the indefinite. Stated differently, this is the process of measurement that utilizes discrete means to measure with, whereas the measured is continuous.

To best illustrate what I mean, think about a living organism: it is alive, dynamic, and it can survive within a certain range of environs. It consists of numerous feedback systems, such as the metabolic feedback between the thyroid and the pituitary gland, or the feedback between oxygen availability and red blood cell production. The function of all these systems is to regulate something. To regulate means to provide or allow not too much, not too little, but enough. What "enough" means for each system depends on the other systems with which that system is interconnected. For instance, during hypoxia (too little oxygen), the body creates more erythrocytes (red blood cells), which carry the hemoglobin that carries oxygen to vital organs. This process occurs to utilize the available oxygen so that life can be sustained. Too many red blood cells will impede the blood flow, and excess oxygen will turn into free radicals (which are damaging in excess). So, how does the body regulate its oxygen intake? When the heart registers too much oxygen, the myocytes (heart cells) release chemicals that constrict the blood vessels. When the brain registers too little carbon dioxide (the result of breathing in more oxygen than the body needs), it slows down the automatic breathing urge. Each feedback system is connected to all the others, and all of them together regulate each other.
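
As a hedged illustration of such mutual regulation, here is a toy numerical sketch of my own with invented parameters (it is not a physiological model): two coupled negative-feedback loops settle each other around a shared working range.

# Toy model, not physiology: oxygen level and red-cell count regulate each other.
# Low oxygen boosts red-cell production; red cells raise oxygen delivery;
# surplus oxygen damps the breathing drive. All numbers are made up.
O_SET = 1.0                      # nominal oxygen level the loops settle around
oxygen, red_cells = 0.4, 0.6     # start the "organism" off balance
dt = 0.1

for _ in range(200):
    breathing_drive = 1.0 - 0.5 * (oxygen - O_SET)      # surplus oxygen slows breathing
    production = 1.0 + 1.5 * (O_SET - oxygen)           # hypoxia boosts red-cell production
    d_oxygen = breathing_drive * red_cells - oxygen     # delivery minus consumption
    d_red_cells = production - red_cells                # production minus turnover
    oxygen += dt * d_oxygen
    red_cells += dt * d_red_cells

print(round(oxygen, 3), round(red_cells, 3))   # both settle near 1.0, the shared working range

Neither variable is held at a pre-given value; each is kept "enough" only relative to the other, which is the point of the paragraph above.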

The keyword here is regulate. Each system consists of other systems, which consist of other systems, in a dynamic loop - each system regulating itself and being regulated by the others. Furthermore, the normal range of values for a given system might change due to aging or other circumstances that change the whole organism. For example, the need for oxygen is different during aerobic stress (panic, physical exertion, etc.) than during blissful sleep. So the live organism is not so much a conglomeration of valid values as the act of regulation (complex non-linear loop dynamics) that maintains the dynamic balance needed to perpetuate the act of regulation. Even DNA can be viewed as a dynamic sub-system: the DNA is not identical in all cells, parts of it can be corrupted, free radicals can steal an electron from a DNA sequence and disrupt it, thereby changing its function, and it can change in minor or major ways through interaction. That's probably why the clone of a red cat could be a black cat.

So what is being regulated? This is the wrong question, of course. The living organism is structure. It is the process that gains expression as living cells. There is no basic material that becomes a cell. For example, if you put carbon and water and oxygen in a cauldron, you still won't have a live cell. Asking what is being regulated is like asking what the basic axioms are, the indubitable fundamental truths, the four pillars of the universe. The loop logic is not about that. It is not about correct initial conditions that bring about desired results. It is not a language wherein each constituent part is well defined by the others, but a dynamic structure that eventually stabilizes. In the case of the living organism, structure stabilizes into the right space of action of a given biological sub-system. This sub-system interacts with other sub-systems of the organism, and this non-linear interaction re-stabilizes all the sub-systems, a dynamic that is the regularization of the whole organism. There is nothing uninfluenced or non-interactive within the live organism. The live organism is its function, which is the regulation of its sub-systems so that it can continue to interact with itself and the rest of the world - or in other words, survive.

The loop logic does not deal with defined entities as something given within a system, but defines all its elements, including the process of definition with which it defines. As SHET put it, "The structure is a loop in that the structure, say, the initial set of conditions, is a kind of tension that produces a flow, a dynamic (the stabilization), which alters the tension to produce the dynamic, which alters the structure to produce the dynamic..." So regularization in such a non-linear system means that everything is being regulated in such a fashion that it becomes something that can relate. This process of becoming is a kind of language by which we define relations that are defined by other relations within the same framework. Such a framework could be a mathematical mapping, the Hebrew language (as mapping), or others. This means that the description of the process of Creation is not unique; it could be achieved via different routes.
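
One hedged numerical reading of the quoted loop (a sketch of my own, not SHET's formalism, with arbitrary coefficients): a "tension" produces a "flow," the flow feeds back and alters the tension, and the pair only counts as defined once their mutual updating stops changing them.

# Toy mutual definition: neither value is given in advance; each is fixed only
# through the loop with the other, and "stabilized" means the loop no longer
# changes either of them.
tension, flow = 1.0, 0.0
for step in range(1000):
    new_flow = 0.5 * tension              # the tension produces a flow
    new_tension = 1.0 - 0.5 * new_flow    # the flow alters the tension
    if abs(new_flow - flow) < 1e-12 and abs(new_tension - tension) < 1e-12:
        break
    tension, flow = new_tension, new_flow

print(step, round(tension, 6), round(flow, 6))   # stabilizes at tension = 0.8, flow = 0.4

The stabilized values are not truths that were there to begin with; they exist only as the settling of the mutual updating itself.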

The new paradigm taught by SHET is that the basic definitions are not axioms, universal truths, but a dynamic that creates. The dynamic preserves the general symmetry wherein you can start anywhere you want, and then there is simply a beginning. Because every point on the loop reflects the whole loop, it matters not where we start. From every point on the loop we can re-create the whole loop. Precisely because the beginning is not something self-evident and not unique from the point of view of the whole, the symmetry has been preserved; but from the point of view of the chosen profile, it has the possibility of evolution. That implies that there is no meaning in anything beyond the meaning we give to it: there is no truth "out there" that guides us, but a focus of consciousness that provides meaning by creating meaningful constellations of relations, which then can be developed into more and more complex forms of expression.

Creation myths that start with (single) oneness, a particle of matter in space from which time starts rolling, receive the coup de grace from SHET's non-linear way of thinking. With the death of this old paradigm, the walls of the jail that confine our thought processes within parameters that urge us to seek THE TRUTH also collapse. That opens the doors to new ways of thinking, which bring about new behavior patterns of true tolerance.

[1] In English, the word "Truth" is a cognate of the Old Norse word "tryggth," which means faith, hardly a tool to prove consistency.

 
