Beyond Good and Evil

Dr. Ronnie J. Hastings


I Believe!

I must count myself in that school of thought which has asserted that everyone has to believe in many things, but the “trick” is to believe in things that are true.  Yet, it seems obvious to me that one can believe in anything.  And, since not just anything can be true, it must be equally obvious that mere belief is no reliable means to finding out the truth.  Curiously, the ability to believe seems basic to the human mind.  In my opinion, the pervasiveness of belief among the species Homo sapiens indicates that belief was, at the origin of our species, necessary for survival, just like our propensity to be religious, or to be ethical, or to be evil.  The evolution of these last three propensities, based upon both physical and cultural anthropology, was a major vehicle in the development of the ideas, themes, and conclusions of 1) my series on the origin of Christianity (Sorting Out the Apostle Paul, [April, 2012]; Sorting Out Constantine I the Great and His Momma, [Feb., 2015]; Sorting Out Jesus, [July, 2015]; At Last, a Probable Jesus, [August, 2015]; Jesus — A Keeper, [Sept., 2015]) and of 2) the first of my series on Perception Theory (Perception is Everything, [Jan., 2016]; Perception Theory (Perception is Everything) — Three Applications, [Feb., 2016]; Perception Theory: Adventures in Ontology — Rock, Dog, Freedom, & God, [March, 2016]).  The discussion of human belief seems a good addition to 2) above, given the very broad applicability of the theory.

For every human mind there seems a hierarchy of importance of beliefs.  Whether or not one believes their sports team is going to win an upcoming contest seems pretty trivial compared to whether or not one believes their partner in life truly loves them; whether or not one believes they can accomplish a challenging task seems pretty trivial compared to whether or not one believes in God.  Moreover, human belief seems intimately entwined with human faith and trust.  Belief in an expected event, in the words of someone else, in the truth of ideas and/or assertions of all sorts, in anticipated future states of the world, and in the truth of past events all involve faith that the object of the belief is worthy of one’s trust.  In other words, I have faith that the resources leading me to believe in X, whatever X may be, are worthy of my trust to the extent I tell myself that X must be true; X is true to me because I have faith in the trustworthiness of believing in X.  Admittedly, this epistemological dissection of belief sounds esoteric, convoluted, and nuanced.  We do not normally think about either the hierarchy or the underlying philosophical assumptions of belief; we just believe, because we come into the world “wired” in our brain to do just that.  What I propose to do is to make thinking about belief less esoteric, convoluted, and nuanced — to make serious consideration of what it is we do when we believe more normal in day-to-day thinking.

In the context of expounding upon freedom of the press in the United States, Alexis de Tocqueville in Democracy in America (The Folio Society, London, 2002) said that a majority of US citizens reflecting upon freedom of the press “…will always stop in one of these two states:  they will believe without knowing why, or not know precisely what one must believe.” (p 179)  It seems to me any area of reflection, not just freedom of the press, could have this quote applied to it, given how muddled together “thinking” and “believing” have seemingly always been in common rational mentation.  So basic is our habit of believing without intellectual meditation and discrimination that being caught in the dilemma between the two states quoted above seems all-too-often inevitable.  The hierarchy of importance among beliefs, as well as consideration of the roles faith and trust play in belief, becomes lost in an intellectually lazy resignation to the dilemma, in my opinion.

I think we can know why we believe.  I think we can know precisely what we must believe.  Note I did not use “I believe” to start the first two sentences of this paragraph; instead, I used “I think.”  So many thinking people tend to use “I believe” in sentences the same as or similar to these and thereby fall into a trap of circular reasoning; they mean “I think,” but utter “I believe.”  I think Perception Theory can help to sort out any nuances associated with belief and point the way to how believing in things that are true is no trick at all, but, rather, a sensible mode of using our mind.  And the first two sentences of this paragraph contain strong clues as to how to relieve “I believe…” and even “I think…” statements from ambiguity.  We simply give them reliability with the beginning words “I know…,” instead of “I believe…” or “I think…”  Herein I hope to lay out the epistemological process by which statements become reliable and thereafter merit the beginning words “I know…”  At the same time I hope to show that in the name of truth, “I believe” and “I think” should not necessarily be thrown away, but, rather, used with reticence, care, and candor.

 

I submit that the statement “I believe the sun will appear to rise in the east tomorrow morning.” is fundamentally different from the statement “I believe in the existence of God.”  Neither can be demonstrated on the spot, as, presumably, the speaker can neither deliver an image of a future event nor produce anything remotely resembling a deity alongside the speaker.  According to Perception Theory, any belief statement, certainly including these two, is non-veridical (At Last, a Probable Jesus, [August, 2015]; Perception is Everything, [Jan., 2016]), as a belief is a descriptive statement of some result of the mind, imagination, and other epiphenomenal processes operating within the brain.  As shown in Perception Theory: Adventures in Ontology — Rock, Dog, Freedom, & God, [March, 2016], such statements can resonate strongly, weakly, or not at all with the real or veridical world from which comes all empirical input into the brain through the senses.  The sun rising tomorrow resonates strongly or weakly with the veridical real world (depending upon how skeptical and/or cynical the speaker is), based upon previously experienced (directly or indirectly) sunrises; in terms of Perception Theory: Adventures in Ontology — Rock, Dog, Freedom, & God, [March, 2016], it is resonant non-veridically based.  God existing is, conversely, looped non-veridically based, as defined in Perception Theory: Adventures in Ontology — Rock, Dog, Freedom, & God, [March, 2016].  The second statement is purely epiphenomenal, while the first hearkens to a real empirical world; the second is a naked product of the mind, while the first links an epiphenomenal product to a presumed reality (phenomena) outside the brain.  Belief is in both cases epiphenomenal; the first is based upon empirical, veridical, phenomenal past perceptions; the second is based upon imaginative, non-veridical, epiphenomenal intra-brain biochemical activity.  In other words, sunrises are non-veridical images based upon empirical data, while God is non-veridical imagery based upon other non-veridical imagery.

At the risk of being redundant, it bears repeating that why we have the ability to believe in the two manners illustrated by the two belief statements of the previous paragraph is easily understood.  When our brains evolved the complexity making self-consciousness possible, assuring our survival as a small group of especially big-brained members of the genus Homo, applying our new ability to imagine ourselves in situations other than the present was not practically possible at all times; we still had to react instinctively in threatening situations, without pausing to think about the situation, else we might not survive the situation.  With, say, leopards attacking our little hunter-gatherer group during the night, to question or think about alternatives to proactively defending ourselves potentially would have made the situation more dangerous, not safer; in other words, often whoever hesitated by thinking about the situation got eaten.  Those who came up with or listened to a plan of defense without argument or disagreement tended to assure the success of the plan, as the group agreed to act quickly to avoid future nights of terror; often, acting unquestioningly led directly to successfully solving the leopard problem.  To justify ourselves individually joining the plan, we used our newly complex, self-conscious minds to suspend judgement and believe that the originators of the plan of defense, whether we ourselves, the leaders of the group, the shaman of the group, or just some unspecified member of the group, had some seemingly good idea to deal with the leopard problem; without rationalization of any sort, we believed the plan would work.  Without hesitation, we often believed out of such desperation; we had no choice but to believe in some plan, to believe in something, else we might die.  Hence, those who developed the ability to unthinkingly believe tended to be those who survived in the long run.

I submit that as human beings began civilizations and culture over the last several thousand years, the need for “knee-jerk,” unthinking belief has overall diminished.  Outside of modern totalitarian political, sectarian, or secular regimes, our brains can safely be used to question, scrutinize, vet, and adjudicate ideas, plans, positions, conclusions, etc. as never before.  As knowledge continues to increase, we can without desperation hesitate and “think it over;” immediate belief is no longer necessary in most situations.  Belief continues to be an option we all use at one time or another, but on important issues we no longer have to suspend judgement and “just believe.”  Don’t get me wrong — spouting beliefs “right and left” on issues of little or no importance, such as what I believe will be the outcome of upcoming sporting events or of the next pull on a slot machine in Las Vegas, can be fun.  What I am saying is that we do not have to agonize over what we believe, as long as the consequences of that belief portend little or nothing at all.  What this means is that we must train ourselves to start serious, important, and substantive declarations with “I think” rather than “I believe,” as I did above, which indicates some rational thought has gone into formulating those declarations.  Moreover, it follows that “I know” is even better than “I think” in that the rational thought going into “I know” statements is so substantive and evidence-based, the statement is reliable and feels close to the “truth.”  It also means we can suspend belief indefinitely, if we choose, or we never need think belief is necessary.

Admittedly, belief does have use in motivational rhetoric, which may not be so trivial in many different individual minds.  Often consensus of agreement for group action relies upon conjuring in individual minds the belief that the action is in the group’s collective best interest.  Halftime speeches in the locker room by coaches to their teams are one example that comes to mind; such locker rooms rely upon words and signs exhorting belief; evidence and reflection need not be evoked.  This common use of belief hearkens back to our evolutionary need to believe, as discussed above, but today summoning emotionally charged adrenaline in a group is more a matter of avoiding losing a game or avoiding falling short of a group goal than it is avoiding being eaten by leopards.  The outcome of the game or striving for the goal determines whether the belief was fun and justified, or disappointing and misleading.  Neither outcome might seem trivial to many, but neither outcome would establish the conjured belief as “true” or “false.”  Locker-room belief shown justified or unjustified by subsequent events is merely coincidence.

We can now list some characteristics about human belief:

1)  Belief is a non-veridical activity, existing in our minds as either a) resonant non-veridically based  or b) looped non-veridically based.

2)  Belief involves a denial, suspension, or avoidance of judgment, bypassing all forms of adjudication involved in rational scrutiny; it is lazy mentation.

3)  Belief has decreased in importance as culture and knowledge have increased in importance.

4)  Belief is bereft of epistemological value; just because one believes X is true does not necessarily make X true; just because one believes X is false does not necessarily make X false.

5)  Belief is an epiphenomenal, evolutionary vestige of the human mind; it has value today only as an amusing tool in trivial matters or as a rhetorical tool in matters many consider not so trivial.

6)  Beginning with “I think” rather than “I believe” is stronger, and can indicate a closer proximity to the truth, but “I think” does not evoke the confidence and reliability of “I know;” “I think” leaves room for reasonable doubt.

7)  Statements and issues of portent can consistently be begun with “I know” rather than “I believe” or “I think.”  Just how this is possible follows:

 

Knowing why we believe, we now turn to what we should believe.  Clearly, merely believing in non-trivial matters carries little weight, and is hardly worthy of consideration in epistemological discussions.  Important ideas, plans, and systems of thought do not need belief — they need rational adjudication; we no longer need say “…we need to believe in or think upon what is true;” rather, we need to say “…I know X is true beyond reasonable doubt, independent of what I may believe or think.”  So, we actually now turn to what is worthy of our thought, trusting that in the future we will say, instead of “what we should believe” or “what we should think,” “what we know is true.”

Let’s say I want to unequivocally state my conviction that my wife loves me.  To say “I believe my wife loves me.” belies the fact I have lived with the same woman for 48 years and counting, as of this writing.  To say “I believe” in this case sounds like we have just fallen in love (I fell in love with her when we were sophomores in high school together.).  It sounds as if there has not been time to accumulate evidence she loves me transcendent to what I believe.  The truth of the matter is beyond belief, given the 48 years.

If I say “I think my wife loves me.” it can sound as if I may have some doubt and/or there is some evidence that I should doubt, which are/is definitely not the case.  Clearly, in my view, to say “I believe” or “I think” my wife loves me does not do the truth of the matter justice; neither is strongly reliable enough to accurately describe the case from my perspective.

So, it is the case “I know my wife loves me.”  How do I know that?  Evidence, evidence, evidence.  And I’m not talking about saying to each other every day “I love you,” which we do, by the way.  I am talking evidence transcendent of words.  For 48 years we have never been apart more than a few days, and at night we sleep in the same bed.  For 48 years she daily does so many little things for me above and beyond what she “has” to do.  She is consistently attendant, patient, gentle, caring, and comforting; she is true to her marriage vows daily.  I’ve joked for many years that either she loves me, or she is collecting data for writing a novel about living decades with an impossible man.  Truly, love is blind.

This example illustrates the 3-step process that has come to work for me for arriving at personally satisfying truth.  I’ve even personalized the steps, naming Step 1 for my younger son Chad when he was an elementary school student; Step 2 is named for my younger granddaughter Madison, Chad’s daughter, when she was in the 3rd grade; Step 3 is named for my older granddaughter Gabriella, my older son Dan’s daughter, when she was about 3 or 4 years old.  Hence, I call the process the Chad/Madison/Gabriella Method.  The Chad/Madison/Gabriella Method, or CMGM, bypasses “I believe” and “I think” to arrive at “I know.”  Transcendent of belief or speculation, CMGM allows me to arrive at the truth; I can confidently achieve reliability, conclusions I can count on; I can and have arrived at decisions, conclusions, and positions upon which I can stake not only my reputation but, if necessary, my life.

Yet, CMGM does not provide absolute truth, the corner into which so many thinkers paint themselves.  The results of CMGM are highly probable truths, worthy of ultimate risks, as indicated above, but never can my mortal mind declare 100% certainty.  There is always a finite probability that the 3-step CMGM process will yield results later shown to be false by unknown and/or forthcoming evidence.  CMGM rests upon the philosophical premise of the universal fallibility of human knowledge.

How do we arrive, then, at what we know is true, realizing it really has nothing to do with our careless believing or casual thinking?  What are the “nuts and bolts” of the 3-step process CMGM?

Step 1:  When my son Chad was in elementary school, he discovered he had certain teachers to whom he could direct the question “How do you know?” when information was presented to him; for some outstanding teachers he could ask that question without the teacher becoming upset or angry.  He also discovered you could not ask that of certain family members, Sunday School teachers, or other acquaintances without upsetting them.  It is a courageous question, one conjuring in me, his father, great pride.  “C,” Step 1, of the method is a universal skepticism declaring literally everything is questionable, including this very sentence.  From the simple to the profound, whenever any declaration is stated, ask “How do you know?”

If no evidence is given when answering the question in Step 1, it is the same as if the question were not answered at all.  Answers like “Just because…,” “I just believe…,” “I just think….,” “They say that….,” or similar vacuous retorts are no answers at all.  Or, it is possible that some evidence might be cited.  If that evidence is presented as if it should be accepted beyond doubt and question because of the authority or reputation of the source of the evidence, that outcome is taken to Step 2 just as no answer at all is taken to Step 2.  Therefore, after Step 1, one has either 1) no answer or a vacuous answer or 2) cited evidence for the answer.

Step 2:  When my younger granddaughter was in the 3rd grade and I was the subject of a family conversation, she, Madison, said “Papa Doc is big on knowledge.”  (Instead of being called “Granddad,” “Grandfather,” or “Grandpa,” my granddaughters call me “Papa Doc.”)  In other words, gather your own evidence in response to the results of Step 1; “get your ducks in a row” or “get your shit together” or “get your facts straight.”  If you received nothing in response to executing Step 1, then decide if you want to accumulate evidence for or against the original declaration.  If you don’t, dismiss or disregard the reliability of those who made the original declaration; “reset” for the next declaration.  If you decide to accumulate evidence, it is just as if you had received evidence cited in support of the original declaration.  Evidence given in Step 1 needs a search for other relevant evidence and, if you decide to respond to no evidence given in Step 1, the same search is needed.  The ability and quality of getting your “ducks/shit/facts” in a row/together/straight is directly proportional to your education (formal or not) and to the amount of personal experience you have.  “M,” Step 2, of the method is identifying reliable information as evidence for or against the declaration in Step 1; it requires not so much courage as it does effort.  Intellectually lazy persons seldom venture as far as Step 2; it requires work, time, and personal research skills whose quantity, price, and outcome are often unknown, so some courage in the form of confidence is needed to accomplish Step 2.  It is the personal challenge of every successful scholar on any level from pre-K through days on Medicare.  On some questions, such as “Should women be given the same rights as men?” or “Who were the United States’ founding fathers?” it takes but moments for me to identify the reliable information, given my long experience reading US history.  On other questions, such as “How did Christianity originate?” or “Why did the American and French Revolutions proceed on such different paths when both were based upon similar ideals?”, it has taken me years of off-and-on reading to identify the reliable information allowing me, in my own estimation, to proceed to Step 3.

Step 3:  Way before she started school, my older granddaughter Gabriella, listening carefully to family plans casually mentioned for the next day, voluntarily said, “Actually,…..” such-and-such is going to happen.  And she was correct, despite her extreme inexperience.  “G,” Step 3, is boldly and confidently stating the results indicated by the evidence from Step 2 applied to the original declaration in Step 1.  If the original declaration in C, Step 1, is “X,” and if the evidence from M in Step 2 is “a,b,c,d,…..,” then Step 3 is “Actually, it is not X, but, rather, Y, because of a,b,c,d,…..”  Step 3 takes both confidence and courage.  In Step 3 you are “running it up a flag pole to see who salutes it;” you are taking a chance that, of those who listen, no one or only a few will agree; the chance that all will agree is almost infinitesimal.  Step 3 exposes you to both justified and unjustified criticism.  Step 3 “thickens your skin” and, if critical feedback to your Step 3 is justified and makes sense to you, that feedback can be used to tweak, modify, or redefine Y.  Justified critical feedback possibly can change Y so that the new version is closer to the truth than the old.

Hence, the way to reliable knowledge I’m suggesting, the way to truth, is essentially an internal, personal, mental adjudication; your head is your own judge, jury, prosecution, and defense.  CMGM is suggested as a possible “instruction list” for this adjudication; CMGM works for me, but others might well find another “formula” that works better for them.  CMGM, Steps 1, 2, & 3, conjure(s) X and usually change(s) X to Y, based upon a,b,c,d,…..  Y is usually closer to the truth than X, but it is possible X “passes muster” (Step 2) relatively unchanged into Step 3.  It is not unlike how reliable knowledge is accumulated mentally in all areas of science, math, and engineering.  The advantage these three areas have over CMGM is that Y MUST be successfully tested by nature, by the real world, including the “real world” of logic in our heads, and independent investigators/testers also dealing with Y must corroborate it with the same independently derived results; some Y’s from CMGM might not be as easily tested, such as “Men and women can never completely understand each other.” or “A different set of universal physical laws were required to create the present set of universal physical laws.” or “At least one other universe exists along with our own.”
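To make the “instruction list” character of CMGM concrete, here is a minimal, purely illustrative sketch in Python.  It is not the author's code; every name (Declaration, step_c, step_m, step_g) and the sample evidence strings are assumptions invented only to make the three steps explicit.

```python
# A hypothetical sketch only: modeling the CMGM adjudication as three functions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Declaration:
    claim: str                                        # the original declaration "X"
    cited_evidence: List[str] = field(default_factory=list)


def step_c(declaration: Declaration) -> List[str]:
    """Step 1 ("Chad"): ask "How do you know?" and discard vacuous answers."""
    vacuous = {"just because", "i just believe", "i just think", "they say that"}
    return [e for e in declaration.cited_evidence if e.strip().lower() not in vacuous]


def step_m(surviving_evidence: List[str], own_research: List[str]) -> List[str]:
    """Step 2 ("Madison"): gather your own evidence and merge it with what survived Step 1."""
    return surviving_evidence + own_research


def step_g(claim: str, evidence: List[str]) -> str:
    """Step 3 ("Gabriella"): boldly state the conclusion Y indicated by the evidence."""
    if not evidence:
        return f"I can only believe or think, not know: {claim}"
    return (f"Actually, I know (beyond reasonable doubt, never absolutely): {claim}, because "
            + "; ".join(evidence))


# Usage, loosely following the 48-years example above (evidence strings are invented):
x = Declaration("my wife loves me", cited_evidence=["I just believe"])
evidence = step_m(step_c(x), ["48 years together", "daily acts above and beyond obligation"])
print(step_g(x.claim, evidence))
```

The only point of the sketch is that the output of Step 3 is a revisable Y grounded in accumulated evidence, never an unconditioned “I believe.”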

 

If I want to make a truth statement, I need to begin it with “I know.”  I need to have “I know” statements backed up with evidence accumulated by personal adjudication produced by mental steps similar to CMGM.  If reliable knowledge and/or truth are not germane to my statements, then I can use “I believe” or “I think,” depending on how important these statements are to me; “I believe” and “I think” have little or no epistemological content.

How do I know X is true?  Chad-as-a-child makes me ask that very question.  I can say “I believe X is true,” as a knee-jerk, off-the-top-of-my-head statement, just to add to the conversational mix; I feel no need to justify it.  Challenged to justify X, Madison-as-a-child reminds me I’ve got to do some scholarly work.  With some brief, cursory thought I might say “I think X is true,” maybe with a piece of evidence ‘a,’ but neither I nor my fellow conversationalists would think such a statement has much epistemological clout worthy of truth seekers.  With Madison’s work and Gabriella’s courage and confidence I sooner or later can say “I know Y is true, to the best of my ability;”  Gabriella-as-a-child tests my intellectual acumen; I must at some time bravely state Y publicly, regardless of the consequences.  In all probability X has morphed into Y thanks to the accumulated evidence ‘a,b,c,d,…..’  Y has “epistemological meat” on its “bones.”  Y has brought me closer to the truth; it is a stepping stone with which to draw even closer.

Yes, I do believe all the time in lots of things.  But I think about certain things in whose reliability I’m more confident.  However, I can know a few things in whose reliability and truth I have as much intellectual and emotional confidence as I can muster.  For me, it is better to know than to just believe or to just think.  I am drawn to what you know, not necessarily to what you believe or what you think.

RJH

 

Perception Theory (Perception is Everything) — Three Applications

In the presentation of a theory of human existence, Perception is Everything [Jan., 2016], it was suggested the theory could be applied to almost every aspect of human experience.  The model paints the picture of the objective/subjective duality of human existence as the interactive dual flow (or flux) of real-world, empirical, and veridical data bombarding our senses and of imaginative, conceptual, and non-veridical data generated by our mind, all encased within the organ we call the brain.  The two sides of the duality need not be at odds, and both sides are necessary; the objective and the subjective are in a symbiotic relationship that has evolved out of this necessity; what and who we are simultaneously exist because of this symbiosis that dwells in the head of every human individual.  No two humans are alike because no two symbioses in two brains are alike.

This post briefly demonstrates how the perception model of Perception is Everything [Jan., 2016] can be used to contribute insights into I. Development of Self-Consciousness in a Human Infant, II. Education, and III. The Origin of Politics.

 

I. Development of Self-Consciousness in a Human Infant – That the human mind has the ability to develop a concept of “self,” as opposed to “others,” is commonly seen as fundamentally human.  It might not be unique to our species, however, as we cannot perceive as do individuals of other species.  Often pet owners are convinced their dog or cat behaves as if it is aware of its own individuality.  But that might be just too much anthropomorphism cast toward Rover or Garfield by the loving owners.  So fundamental is our self-consciousness, most views would assert its development must commence just after birth, and my perception theory is no exception.

The human baby is born with its “nature” genetically dealt by the parents and altered by the “nurture” of the quality of its gestation within the mother’s womb (or within the “test tube” early on or within the artificial womb of the future).  The world display screen in the head of the baby (Perception is Everything [Jan., 2016]) has to be primitive at birth, limited to whatever could bombard it veridically and non-veridically while in the womb (Can a baby sense empirical data?  Can a baby dream?  Are reflex movements of the fetus, which the mother can feel before birth, recorded in the memory of the fetus?).  Regardless of any answers to these questions, perception theory would describe the first moments after the cutting of the umbilical cord as the beginning of a “piece of star-stuff contemplating star-stuff all around it” (Perception is Everything [Jan., 2016]).  The event causing the baby to take its first breath begins the lifelong empirical veridical flux entering one “side” of the baby’s world display screen, triggering on the other “side” of the screen an imaginative non-veridical flux.  The dual flux has begun; the baby is “alive” as an individual, independent of the symbiosis with its mother’s body; its life as a distinct person has begun.

The unique “long childhood” of Homo sapiens (due to the size-of-the-birth-canal/size-of-the-baby’s-skull-after-9-months’-gestation consideration), the longest “childhood” of any species before the offspring can “make it on its own” — a childhood necessarily elongated, else we would not be here as a species today — assures the world display screen is so primitive that the first few days, weeks, and months of each of us are never remembered as our memory develops on the non-veridical side of the screen.  It takes a while for memory generated from the empirical veridical flux to be able to create a counter flow of imaginative non-veridical flux back to the screen.  Perception is Everything [Jan., 2016] indicates the dual flow is necessary for the screen to become “busy” enough to be noticed by the “mind’s eye,” that within us that “observes” the screen.  No doubt all of us first had our screens filled by perceptions of faces of caretakers (usually dominated by our mother’s face) and sensations of sound, touch, smell, and taste as our bodies adapted to the cycles of eating, eliminating, and sleeping.  During waking hours in which we were doing none of these, we began to focus on the inputs of our senses.  These inputs we inevitably process non-veridically into an awareness of the inputs themselves; just as inevitably we at some point become aware of a “perceiver,” an observer of these inputs; we have an idea that “something” is perceiving, that that “something” relates to our caretaker(s) (whose face(s) we always feel good seeing), and that that “something” is us.  In each individual, the development of a subjective “I” is normally “there” in the head in a few months (the exact time interval probably differing for each individual); a distinction between “me” and “not-me” begins.  This distinction is self-consciousness in-the-making, or “proto-self-consciousness.”

That distinction between “me” and “not-me” is vital and fundamental for each piece of star-stuff beginning to contemplate his or her “fellow” star-stuff — contemplation that is constantly painting an increasingly complex world display screen inside his or her head.  Early on, anything that “disappears” when eyes are closed is “not-me;” anything that is hungry, that likes things in a hole below the eyes to quench that hunger, that experiences discomfort periodically way below the eyes, and that feels tactile sensations from different locales in the immediate vicinity (through the skin covering all the body as well as the “hole below,” the mouth) is “me.”  Eventually, “me” is refined further to include those strange appendages that can be moved at will (early volition) and put into the hunger hole below the eyes, two of which are easy to put in (hands and fingers) and two of which are harder to put in (feet and toes).  That face that seems to exist to make “me” feel better and even happy turns out to be part of “not-me,” and it becomes apparent that much of “not-me” does not necessarily make “me” feel better, but is interesting nonetheless.  Reality is being sorted out in the young brain into that which is sorted and that which sorts, the latter of which is the “mind’s eye,” self-consciousness.

In time, “me” can move at will, and that which can move thus is the “housing” and boundary limiting “me.”  As soon as the faces “me” can recognize are perceived to represent other “me’s,” the distinction between “me” and “you” begins, soon followed by “me,” “you,” and “them.”  Some “you’s” and “them’s” don’t look like other “you’s” and “them’s,” such as household pets.  Still other “you’s” and “them’s” don’t move on their own like “me” (soon to be “I”) does, such as dolls and stuffed animals.  “You’s” and “them’s” separate into two categories — “alive” and “not-alive.”  As quantity becomes a more developed concept, it soon becomes apparent that outside “me” there are more “not-alives” than “alives;” “not-alives” soon are called “things,” and “alives” take on unique identities as “me” learns to recognize and later speak names.  Things are also non-veridically given names, and the genetic ability to quickly learn language “kicks in,” as well as the genetic ability to count and learn math.  In a few months’ time, existence for “me” has become both complex and fixating to its mind/brain, and is growing at an increasing rate (accelerated growth).  The name non-veridically given to “me” is the subjective “I” or the objective “myself” — both of which are understood to be self-consciousness.

This clearly is an approach similar to a psychology of infants, which might deal eventually with the development of the ego and the id.  This approach using perception theory allows a seamless tracing of the development of the human mind back before birth, employing a more objective approach to talking about subjectivity than possessed by some other psychological approaches; it is an approach based upon evolutionary psychology.  In addition, it is clear that the emergence of self-consciousness according to perception theory demands a singular definition of the “self” or of “I” or of “myself,” in order to avoid the problems of schizophrenia and its multiple personalities.  Perhaps the widespread phenomenon of children making up “imaginary friends” is an evolved coping mechanism in the individual child’s imagination in order to avoid schizophrenia; an imaginary friend is not the same as the self-consciousness producing such a “friend.”  Just like the individual brain, self-consciousness is singularly unique, in ontological resonance with the brain.

 

II.  Education – Perception theory is compatible with the idea of what education should be.  Education is not a business turning students into future consumers; education is not a sports team turning students into participants; education is not training to turn students into operators of everything from computer keyboards to spaceship control panels.  Instead, education is but the development of students’ minds (1. Education Reform — Wrong Models! [May, 2013], 2. Education Reform — The Right Model [May, 2013], 3. Education Reform — How We Get the Teachers We Need [May, 2013], & Top Ten List for Teachers of HS Students Preparing for College or University (Not a Ranking) — A List for Their Students, Too! [Dec., 2014]).  The word “but” here is somewhat misleading, as it indicates that education might be simple.  However, education is so complex that as yet we have no science of education (#1 on the “List” in Top Ten List for Teachers of HS Students Preparing for College or University (Not a Ranking) — A List for Their Students, Too! [Dec., 2014]).  Perception theory indicates why education is so complex as to defy definition and “sorting out.”  Defining education is like the brain trying to define its own development, or like a piece of star-stuff trying to self-analyze and contemplate itself instead of the universe outside itself.  At this writing, I am inclined to say that a more definitive sorting out of what education is and how it is accomplished inside individual brains is not impossible in the way that an individual seeing his/her own brain activity is impossible, or in the way that another person seeing my subjective world display screen in my head is impossible (the “subjective trap”) (Perception is Everything [Jan., 2016]).

Following this optimistic inclination, education is seen as developing in individual brain/minds a continuous and stable dual flow of veridical flux and non-veridical flux upon the individual’s world display screen (Perception is Everything [Jan., 2016]).  A “balance” of this dual flow in Perception is Everything [Jan., 2016] is seen as a desired “mid-point” of a spectrum of sanity, the two ends of which denote extreme cases of veridical insanity and non-veridical insanity.  Therefore, the goal of education is to make the probability of becoming unbalanced away from this mid-point in either direction as small as possible; in other words, education ideally attempts to make the non-veridical’s concentration and focus upon the veridical in the student’s mind as strong as possible.  The non-veridical vigor of “figuring out” the veridical from “out there” outside the brain is matched by the vigor of the empirical bombardment of that same veridical daily data.  Making this focus a life-long habit, making this focus a comfortable, “natural,” and “fun” thing for the non-veridical mind to do for all time, is another way to state this goal of education.  Defining education in this manner seems compatible and resonant with the way our mind/brain seems to be constructed (with the necessary duality of the objective and the subjective); our mind/brains seem evolved to be comfortable with being at the mid-point without struggling to get or stay there; self-educated individuals are those fortunate enough to have discovered this comfort mostly on their own; graduates of educational institutions who become life-long scholars have been guided by teachers and other “educators” to develop this “comfort zone” in their heads.  Education, in this sense, is seen as behaving compatibly with the structure of the brain/mind that has assured our survival over our evolution as a species.  In order to successfully, comfortably, and delightfully spend our individual spans of time in accordance with the evolution of our mind/brains, we must live a mental life of balance of the two fluxes; education, properly defined and thought upon in individual mind/brains, assures this balance, and therefore assures lives of success, comfort, and delight.  He/she who is so educated uses his/her head “in step” with the evolution of that head.

We evolved not to be religious, political, or artistic; we evolved to be in awe of the universe, not to be in awe of the gods, our leaders, or our creations.  We evolved not to be godly, patriotic, or impressive; we evolved to survive so that our progeny can also survive.  Religion, politics, and the arts are products of our cultural evolution invented by our non-veridical minds to cope with surviving in our historical past.  In my opinion these aspects of human culture do not assure the balance of the two fluxes that maximizes the probability of our survival.  Only focusing upon the universe of which we are a part will maximize that probability — thinking scientifically and “speaking” mathematically, in other words.  Education, therefore, is properly defined as developing the scientifically focused mind/brain; that is, developing skills of observation, pattern recognition, mathematical expression, skepticism, imagination, and rational thinking.  But it is not an education in a vacuum without the ethical aspects of religion, the social lessons of political science and history, and the imaginative exercises of the arts.  In this manner religious studies, social studies, and the fine arts (not to mention vocational education) all can be seen as ancillary, participatory, and helpful in keeping the balance of the two fluxes, as they all strengthen the mind/brain to observe, recognize, think, and imagine (i.e. they exercise and maintain the “health” of the non-veridical).  I personally think non-scientific studies can make scientific studies even more effective in the mind/brain than scientific studies without them; non-scientific studies are excellent exercises in developing imagination, expression, senses of humor, and insight, attributes as important in doing science as in doing non-science.  The “well-rounded” scholar appreciates the role both the objective and the subjective play in the benefit of culture better than the “specialist” scholar, though both types of scholars should understand that the focus of all study, scientific or not, should be upon the veridical, the universe “out there.”  Not everyone can develop their talents, interests, and skills in the areas of science, math, engineering, and technology, but those who do not can focus their talents, interests, and skills toward developing some aspect of humanity-in-the-universe — toward exploring the limitless ramifications of star-stuff in self-contemplation.

Therefore, education, Pre-K through graduate school, needs a new vertical coordination or alignment of all curricula.  ALL curricula should be taught in a self-critical manner, as science courses are taught (or should be taught if they are not).  An excellent example of what this means was the list of philosophy courses I took in undergraduate school and graduate school.  Virtually all the philosophy courses I took or audited were taught in a presentation of X, of good things about X, and of bad things about X sequence.  In other words, all courses, regardless of level, should be taught as being fallible, not dogmatic, and subject to criticism.  A concept of reliable knowledge, not absolute truth, should be developed in every individual mind/brain so that reliability is proportional to verification when tested against the “real world,” the origin of the veridical flux upon our world display screen; what “checks out” according to a consensus of widely-accepted facts and theories is seen as more reliable than something that is supported by no such consensus.  Hence, the philosophy of education should be the universal fallibility of human knowledge; even the statement of universal fallibility should be considered fallible.  Material of all curricula should be presented as for consideration, not as authoritative; schools are not to be practitioners of dogma or propagators of propaganda.  No change should occur in the incentive to learn the material if it is all considered questionable, as material continues often to be learned in order to pass each and every course through traditional educational assessment (tests, exams, quizzes, etc.).  And one does not get diplomas (and all the rights and privileges that come with them) unless one passes his/her courses.  Certainly the best incentive to learn material, with no consideration of its fallibility other than it’s all fallible, is the reward of knowing for its own sake; for some students, the fortunate ones, the more one knows, the more one wants to know; just the knowing is its own reward.  Would that a higher percentage of present and future students felt that way about what they were learning in the classroom!

The “mantra” of education in presenting all-fallible curricula is embodied in the statement of the students and for the students.  Institutions of learning exist to develop the minds of students; socialization and extracurricular development of students are secondary or even tertiary compared to the academic development of students, as important as these secondary and tertiary effects obviously are.  As soon as students are in the upper years of secondary schooling the phrase by the students should be added to the other two prepositional phrases; in other words, by the time students graduate from secondary schools, they should have first-hand experience with self-teaching and tutoring, and with self-administration through student government and leadership in other student organizations.  Teachers, administrators, coaches, sponsors, and other school personnel who do not do what they do for the sake of students’ minds are in the wrong personal line of work.

Educational goals of schools should be the facilitation of individual student discovery of likes, dislikes, strengths, weaknesses, tastes, and tendencies.  Whatever diploma a student clutches should be understood as completing a successful regimen of realistic self-analysis; to graduate at some level should mean each student knows him/herself in a level-appropriate sense; at each level each student should be simultaneously comfortable with and motivated by a realistic view of who and what he/she is.  Education should strive to have student bodies free of “big-heads,” bullies, “wall-flowers,” and “wimps.”  Part of the non-academic, social responsibility of schools should be help for students who, at any level, struggle, for whatever reason, in reaching a realistic, comfortable, and inspiring self-assessment of themselves.  Schools are not only places where you learn stuff about reality outside the self, they are places where you learn about yourself.  Students who know a lot “outside and inside” themselves are students demonstrating that the two fluxes upon the world display screen in their heads are in some sense balanced.  (1. Education Reform — Wrong Models! [May, 2013], 2. Education Reform — The Right Model [May, 2013], 3. Education Reform — How We Get the Teachers We Need [May, 2013], & Top Ten List for Teachers of HS Students Preparing for College or University (Not a Ranking) — A List for Their Students, Too! [Dec., 2014])

Consequently, the only time education should be seen as guaranteeing equality is at the beginning, at the “start-line” the first day in grade K.  Education is in the “business” of individual development, not group development; there is no common “social” mind or consciousness — there is only agreement among individual brain/minds.  Phrases like “no child left behind” have resulted in overall mediocrity, rather than overall improvement.  Obviously, no group of graduates at any level can be at the same level of academic achievement, as each brain has gained knowledge in its own, unique way; some graduates emerge more knowledgeable, more talented, and more skilled than others; diverse educational results emerge from the diversity of our brain/minds; education must be a spectrum of results because of the spectrum of our existence, our ontology, of countless brain/minds.  Education, therefore, should be seen as the guardian of perpetual equal opportunity from day 1 to death, not the champion of equal results anywhere along the way.

[Incidentally, one of the consequences of “re-centering” or “re-focusing” the philosophy, the goals, and the practices of education because of perception theory may be a surprising one.  One aspect of a scientific curriculum compared to, say, an average “humanities” curriculum is that in science, original sources are normally not used, unless it is a history and philosophy of science course (Is history/philosophy of science a humanities course?).  I am ending a 40-year career of teaching physics, mostly the first-year course of algebra-based physics for high school juniors and seniors, and, therefore, ending a 40-year career introducing students to the understanding and application of Isaac Newton’s three laws of motion and Newtonian gravitational theory.  Never once did I ever read to my physics students, nor did I ever assign to my physics students to read, a single passage from Philosophiae Naturalis Principia Mathematica, Newton’s introduction to the world of these theories.  Imagine studying Hamlet but never reading Shakespeare’s original version or some close revised version of the original!

The reason for this comparison above is easy to see (but not easy for me to put in few words):  science polices its own content; if nature does not verify some idea or theory, that idea or theory is thrown out and replaced by something different that does a better job of explaining how nature works.  At any moment in historical time, the positions throughout science are expected to be the best we collectively know at that moment.  Interpretations and alternative views outside the present “best-we-know” consensus are the right and privilege of anyone who thinks about science, but until those interpretations and views start making better explanations of nature than the consensus, they are ignored (and, speaking as a scientist, laughed at).

Though many of the humanities are somewhat more “scientific” than in the past — for instance, history being more and more seen as a forensic science striving to recreate the most reasonable scenes of history — they are by definition focused on the non-veridical rather than the veridical.  They are justified in education, again, because they aid and “sharpen” the non-veridical to deal with the veridical with more insight than we have done in the past.  The problems we face in the future are better handled not only with knowledge and application of science, math, engineering, and technology but also with knowledge of what we think about, of what we imagine, of the good and bad decisions we have made collectively and individually in the past, and of the myriad of ways we can express ourselves, especially express ourselves about the veridical “real” world.  Since the original sources of these “humanities” studies are seen as being as applicable today as they were when written, since they, unlike Newton, were not describing reality, but only telling often imaginative, indemonstrable, and unverifiable stories about human behavior to which humans today can still relate, the original authors’ versions are usually preferred over modern “re-hashes” of the original story-telling.  The interest in the humanities lies in relating to the non-veridical side of the human brain/mind, while the interest in the sciences lies in the world reflecting the same thing being said about it; Newton’s laws of motion are “cool” not because of the personality and times of Isaac, but because they appear to most people today “true;” Hamlet’s soliloquies are “cool” not because they help us understand the world around us, but because they help us understand and deal with our non-veridical selves, which makes their creator, Shakespeare, also “cool;” the laws of motion, not Newton, are today relevant, but Shakespeare’s play is relevant today because in its original form it leads still to a myriad of possibly useful interpretations.  What leads to veridical “truth” is independent of its human source; what leads to non-veridical “stories” is irrevocably labeled by its originator.

To finally state my bracketed point on altered education, as promised above the opening bracket: science, math, and engineering curricula should be expanded to include important historical details of scientific ideas, so that the expulsion of the bad ideas of the past as well as the presentation of the good ideas of the present are included.  Including the reasons the expunged ideas are not part of the curriculum today would be the “self-critical” part of science courses.  Science teachers would be reluctant to add anything to the curriculum because of lack of time, true enough, but the clever science teacher can find the extra time needed by being more anecdotal in their lessons, which would require them to be more knowledgeable of the history and philosophy of science.  Hence, all the curricula in education suggested by perception theory would be similar — cast in the universal presentation of X, of good things about X, and of bad things about X mold.]

 

III.  The Origin of Politics (The “Toxic Twin”) – Perception is Everything [Jan., 2016] makes dealing with human politics straightforward: politics not only originated, in all likelihood, just as religion and its attendant theology originated, it has developed along lines so similar to those of theology that politics could be considered the “toxic twin” of theology, in that it can turn as toxic (dangerous) to humanity as theology can. (Citizens! (I) Call For the Destruction of the Political Professional Class [Nov., 2012], Citizens! (II) The Redistribution of Wealth [Jan., 2013], Citizens! (III) Call for Election Reform [Jan., 2013], The United States of America — A Christian Nation? [June, 2012], An Expose of American Conservatism — Part 1 [Dec., 2012], An Expose of American Conservatism — Part 2 [Dec., 2012], An Expose of American Conservatism — Part 3 [Dec., 2012], Sorting Out Jesus [July, 2015], At Last, a Probable Jesus [August, 2015], & Jesus — A Keeper [Sept., 2015]) In order for us to survive in our hunter-gatherer past, leaders and organizers were apparently needed as much as shamans, or proto-priests; someone or a group of someones (leader, chief, council, elders, etc.) had to decide the best next thing for the collective group to do (usually regarding the procuring of food for the group’s next eating session or regarding threats to the group from predators, storms, or enemy groups over the next hill, etc.); just as someone was approached to answer the then unanswerable questions, like where do the storms come from and why did so-and-so have to die, leaders of the group were looked to for solving the group’s practical and social problems.  In other words, politics evolved out of necessity, just like religion.  Our non-veridical capabilities produced politics to meet real needs, just as they produced religion to meet real needs.

But, just as theology can go toxic, so can politics and politics’ attendant economic theory.  Voltaire’s statement that those who can make you believe in absurdities can make you commit atrocities applies to political and economic ideology just as it does to gods and god stories.  Anything based purely upon non-veridical imagination is subject to application of Voltaire’s statement.  However, I think politics has an “out” that theology does not.  Theology is epistemologically trapped, in that one god, several gods, or any god story cannot be shown to be truer (better in describing reality) than another god, other several gods, or another god story.  Politics is not so trapped, in my opinion, as it does not have to be “attached at the hip” to religion, as has been demonstrated in human history since the 18th century.  Politics can be shown to be “better” or “worse” than its previous version by comparing the political and social outcome of “before” with “after.”  No political solution solves all human problems, if for no other reasons than that such problems continually evolve in a matter of weeks or less and that no political installment can anticipate the problems it will encounter, even when it has solved the problems of the “before.”  Nonetheless, I think one can argue that the fledgling United States of America created by the outcome of the American Revolution and the birth of the U.S. Constitution was better than the colonial regime established in the 13 colonies by the reign of George III.  The same can be said about the independent nations that emerged peacefully from being commonwealths of the British Empire, like India, Canada, and Australia, though the USA, India, Canada, and Australia were and are never perfect and free from “birth pangs.”

What are the political attributes that are “better” than what was “before?”  Many of the references cited just above point out many of them, a list I would not claim to be complete or sufficient.  Overall, however, the history of Western and Eastern Civilization has painfully demonstrated, at the cost of the spilled blood of millions (Thirty Years’ War, Napoleonic Wars, World War I, World War II, etc.), that theocracies and monarchies are “right out.”  [Here I am applying the philosophy that history is not so much a parade of great individuals but, rather, is more aptly seen as a parade of great ideas — a parade of non-veridical products much better than other such products.]  Democracies only work for small populations, so a representative form of government, a republic, works for larger populations of the modern world.  Clearly, secular autocracies and dictatorships are also “right out.”  Class structure of privilege and groundless entitlement still rears its ugly head even in representative republican governments in the form of rule-by-the-few of power (oligarchies) and/or wealth (plutocracies).  To prevent oligarchies and plutocracies, elected representative government officials should be limited in how long they can serve so that they cannot become a political professional class (limited terms of office); in addition, politicians should be paid so that they cannot make a profit.

[Almost the exact same things can be said of government work staffs and other non-elected officials — the bureaucrats of “big government.”  Terms of service should be on a staggered schedule of limitations so that some “experience” is always present in both the elected and their staffs; bureaucrats should be paid in order that they cannot become a professional class of “bean-counters” at taxpayer expense; public service should be kept based upon timely representation, and civil service should be kept based upon a system of timely merit; politicians are elected by voters, and bureaucrats are selected by civil service testing — both groups subject to inevitable replacement.]

This, in turn, calls for severe restrictions on lobbying of elected officials of all types (making lobbying a crime?).  Preventing oligarchies and plutocracies of any “flavor” can only be effective if the overall political philosophy applied is a liberal one (“liberal” meaning the opportunity to achieve wealth, power, and influence while simultaneously working so that others around you (all over the globe) can achieve the same, all without unjust expense to someone else’s wealth, power, and influence).  The philosophy of such a liberal posture I call “liberalist,” meaning that freedom, equality, and brotherhood (the liberté, égalité, and fraternité of the French Revolution) are all three held constantly at equal strength.  When one or two of the three are reduced in favor of relatively boosting the other two or one, respectively, then things like the atrocities of the French Terror, the atrocities of fascism, the atrocities of communism, or the atrocities of unregulated capitalism result.

[The word “equality” in political philosophy as used above must be distinguished from the “equality” issue of education in II. above.  When the Declaration of Independence speaks of “all men are created equal,” that does not mean equal in knowledge, talents, and skills; rather it means a shared, universal entitlement to basic human rights, such as, in the Declaration’s words, “life, liberty, and the pursuit of happiness.”  We all have equal rights, not equal educational results; equal rights does not mean equal brain/minds — something the Terror tragically and horribly did not grasp; equal rights to education does not mean equal knowledge, talents, and skills for graduates — something too many “educators” tragically do not grasp.  Perception theory would suggest political equality is different from educational equality; the word “equality” must be understood in its context, if the appropriate adjective is not used with the noun “equality.”  The difference is crucial; political equality is crucial to the healthy social organization of the species, while educational equality (equal results, not equal opportunity) is tragic and harmful to the individual brain/minds of the species.  Awareness of this difference, or always making this semantic distinction, should avoid unnecessary confusion.]

Certain Western European countries, such as the Scandinavian countries, have shown the future of political systems toward which all nations should strive in accordance with liberal, liberalist views.  If anything is needed by the population at large, then a socialist program is called for to deal with all fairly — such as social security, free public education through university level, postal service, public transportation, universal single-payer health care, public safety, state security, and “fair-share” taxation of all who earn and/or own.  No one is allowed to achieve personal gain through regulated capitalism or through leadership in any of these socialist programs except upon merit, meaning his/her gain (in wealth, power, and/or influence) is not at the unjust loss of someone else, and is based solely upon the successful individual’s talents, skills, and knowledge; competition in capitalism and in program leadership is both necessary and in need of limitations. It is OK to “lose” in the game of capitalism, as long as one loses “fair and square;” every business success and every business failure must be laid at the feet of the entrepreneur.  The political system with its social programs is merely the crucible of both individual success and individual failure, a crucible that must be continually monitored and regulated so as to assure perpetual and equal opportunity for all.  Regulation of the political system crucible is achieved by electors of political leadership and program leadership — regulation keeping the programs, like capitalism, perpetually merit-based, fair, and just.  This is a system of “checks and balances” toward which every political system should strive.

The foregoing is not a description of some “pie-in-the-sky” Utopia; it is a description of what history has painfully taught us as “the way” of avoiding a theology-like toxicity for politics.  Politics is not doomed to be theology’s “toxic twin,” but it will be so doomed if the bloody lessons of its past are not heeded.  In my opinion, it really is not complicated: it is better to liberally trade, tolerate, and befriend than to conservatively exploit, distrust, and demonize.  Politically speaking, we need to securely develop a xenophilia to replace our prehistoric and insecure xenophobia.  This “xeno-development” is one of the great lessons taught by the modern world over the last 300 years, and this “xeno-development” is begged by perception theory.

RJH

 

Perception Is Everything

Recently a model of human perception has occurred to me. Perception is like that “screen” of appearance before us in our waking hours that is turned off when we are asleep. Yet it appears it does not really turn off during slumber, for we remember dreams we have had before we awoke. The moments just before we “nod off” or just as we awake seem to be times when perception is “half-way” turned on. The “fuzziness” of this “half-way switch” is clearly apparent in those mornings we awake and momentarily do not know exactly where we slept.

 

Say I am sitting in an enclosed room with a large card painted uniformly with a bright red color. Focusing upon only my visual sensation, suppressing the facts I am also sensing the tactile signals of sitting in a chair with my feet on the floor as well as peripherally seeing “in the corner of my eye” the walls and other features of the room, I am only visually observing the color “red,” all for simplicity. Light from the card enters my eyes and is photo-electrically and electro-chemically processed into visual signals down my optic nerve to the parts of my brain responsible for my vision. The result of this process is the perception of the color “red” on the “screen” of my perception. If I were to describe this perception to myself I would simply imagine the word “red” in my head (or the word “red” in some other language if my “normal” spoken language was not English); were I to describe this perception to someone else in the room, say, a friend standing behind me, I would say, “I am seeing the color red,” again in the appropriate language.

Yet, if my friend could somehow see into my head and observe my brain as I claimed to be seeing red, that person would not experience my sensation or perception of “red.” He/she would see, perhaps with the help of medical instrumentation, biochemical reactions and signals on and in my brain cells. Presumably when I perceive red at a different moment in time later on, the observer of my brain would see the same pattern of chemical reactions and bio-electrical signals.

 
On the “screen” of my perception, I do NOT see the biochemistry of my brain responsible for my perception of red; were I to observe inside the head of my friend in the room while he/she was also focusing on the red card, I would NOT see his/her “screen” of perception, but only the biochemical and bio-electrical activity of his/her brain. It is IMPOSSIBLE, within the same person, both to experience (to perceive) the subjective perception of red and to observe the biochemistry responsible for that same subjective perception. We can hook up electrodes from our own head to a monitor which we observe at the same time we look at red, but, in addition to perceiving the red, we would only be seeing just another representation of the biochemistry forming our perception, not the biochemistry itself. I call this impossibility “the subjective trap.”

 
And yet, my friend and I make sense of each of our very individual impossibilities, of each of our very personal subjective traps, by behaving as if the other perceives red subjectively exactly the same, and as if our biochemical patterns in our respective brains are exactly the same. We are ASSUMING these subjective and biochemical correlations are the same, but we could never show this is the case; we cannot prove our individual perceptions in our own head are the same perceptions in other heads; we cannot ever know that we perceive the same things that others around us perceive, even if focusing upon the exact same observation. The very weak justification of this assumption is that we call our parallel perceptions, in this scenario, “red.” But this is merely the learning of linguistic labels. What if I were raised in complete isolation and was told that the card was “green?” I would say “green” when describing the card while my friend, raised “normally” would say “red.” (Note I’m stipulating neither of us is color blind.) Such is the nature of the subjective trap.

 
[If one or both of us in the room were color-blind, comparison of visual perceptions in the context of our subjective traps would be meaningless — nothing to compare or assume. In this scenario, another sensation both of us could equally perceive, like touching the surface of a piece of carpet or rubbing the fur of a cute puppy in the room with us, would be substituted for seeing the color red.]

 
The subjective trap suggests the dichotomy of “objective” and “subjective.” What we perceive “objectively” and what we perceive “subjectively” do not seem to overlap (though they seem related and linked), leading to a separation of the two adjectives in our culture, a separation which has a checkered history. Using crude stereotypes, the sciences claim objectivity is good while subjectivity is suspect, while the liberal arts (humanities) claim subjectivity is good while objectivity is ignorable. Even schools, colleges, and universities are physically laid out with the science (including mathematics and engineering) buildings on one end of the campus and the liberal arts (including social studies and psychology) buildings on the other. This is the “set-up” for the “two cultures'” “war of words.” I remember, as an undergraduate physics major, debating with an undergraduate political science major, as we walked across campus, which had had the greater impact upon civilization, science or politics. We soon came to an impasse, an impasse that possibly could be blamed, in retrospect over the years, on the subjective trap. Ideas about the world outside us seemed at odds with ideas about our self-perception; where we see ourselves seemed very different from who we see ourselves to be; what we are is different from who we are.

Yet, despite being a physics major and coming down “hard” on the “science side” of the argument, I understood where the “subjective side” was coming from, as I was in the midst of attaining, in addition to my math minor, minors in philosophy and English; I was a physics major who really “dug” my course in existentialism. It was as if I “naturally” never accepted the “two cultures” divide; it was as if I somehow “knew” both the objective and the subjective had to co-exist to adequately describe human experience, to define the sequence of perception that defines a human’s lifespan. And, in this sense, if one’s lifespan can be seen as a spectrum of perception from birth to death of that individual, then, to that individual, perception IS everything.

How can the impossibility of the subjective trap be modeled? How can objectivity and subjectivity be seen as a symbiotic, rather than as an antagonistic, relationship within the human brain? Attempted answers to these questions constitute recent occurrences inside my brain.

 

Figure 1 is a schematic model of perception seen objectively – a schematic of the human brain and its interaction with sensory data, both from the world “outside” and from the mind “inside.” The center of the model is the “world display screen,” the result of a two-way flow of data, empirical (or “real world” or veridical) data from the left and subjective (or “imaginative” or non-veridical) data from the right. (Excellent analogies to the veridical/non-veridical definitions are the real image/virtual image definitions in optics; real images are those formed by actual rays of light and virtual images are those of appearance, only indirectly formed by light rays due to the way the human brain geometrically interprets signals from the optic nerves.) [For an extensive definition of veridical and non-veridical, see At Last, A Probable Jesus [August, 2015]] Entering the screen from the left is the result of empirical data processed by the body’s sense organs and nervous system, and entering the screen from the right is the result of imaginative concepts, subjective interpretations, and ideas processed by the brain. The “screen” or world display is perception emerging to the “mind’s eye” (shown on the right “inside the brain”) created by the interaction of this two-way flow.

 
Figure 1 is how others would view my brain functioning to produce my perception; Figure 1 is how I would view the brains of others functioning to produce their perceptions. This figure helps define the subjective trap in that I cannot see my own brain as it perceives; all I can “see” is my world display screen. Nor can I see the world display screens of others; I can only view the brains of others (short of opening up their heads) as some schematic model like Figure 1. In fact, Figure 1 is a schematic representation of what I would see if I were to peer inside the skull of someone else. (Obviously, it is grossly schematic, bearing no resemblance to brain, nervous system, and sense organ physiology. Perhaps those far more proficient in brain function than I, and surely such individuals in the future, can and will correlate those terms on the right side of Figure 1 with actual parts of the brain.)

 
Outside data is collectively labeled “INPUT” on the far left of Figure 1, bombarding all the body’s senses — sight, sound, smell and taste, heat, and touch. Data that stimulates the senses is labeled “PERCEPTIVE” and either triggers the autonomic nervous system to the muscles for immediate reaction (sticking your fingers into a flame), which requires no processing or thinking, or goes on to be processed as possible veridical data for the world display. However, note that some inputs for processing “bounce off” and never reach the world display; if we processed the entirety of our data input, our brains would “overload,” using up all brain function for storage and having none for consideration of the data “let in.” This overloading could be considered a model for so-called “idiot savants” who perceive and remember so much more than the “average” person (“perfect memories”), yet have subnormal abilities for rational thought and consideration. Just how some data is ignored and some is processed is not yet understood, but I would guess that it is a process that differs in every developing brain, resulting in no two brains, even those of twins, accepting and rejecting data EXACTLY alike. What is certain is that we have evolved “selective” data perception over hundreds of thousands of years that has assured our survival as a species.
The accepted, processed data that enter our world display in the center of Figure 1 as veridical data from the outside world makes up the “picture” we “see” on our “screen” at any given moment, a picture dominated by the visual images of the objects we have before us, near and far, but also supplemented by sound, smell, tactile information from our skin, etc. (This subjective “picture” is illustrated in Figure 2.) The “pixels” of our screen, if you please, enter the subjective world of our brain shown on the right of Figure 1 in four categories – memory loops, ideas, self-perception, and concepts – as shown by the double-headed, broad, and straight arrows penetrating the boundary of the world display with the four categories. The four categories “mix and grind” this newly-entered data with previous data in all four categories (shown by crossed and looped broad, double-headed arrows) to produce imagined and/or reasoned data results projected back upon the same world display as the moment’s “picture” – non-veridical data moving from the four categories back into the display (thus, the “double-headedness” of the arrows). Thus can we imagine things before us that are not really there at the moment; we can, for instance, imagine a Platonic “perfect circle” (non-veridical) not really there upon a page of circles actually “out there” drawn upon a geometry textbook’s page (veridical) at which we are staring. In fact, the Platonic “perfect circle” is an example of a “type” or “algorithmic” or symbolic representation for ALL circles created by our subjective imagination so we do not have to “keep up” with all the individual circles we have seen in our lifetime. Algorithms and symbols represent the avoidance of brain overload.
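
Were I to sketch this two-way flow as a crude toy program (purely illustrative and not part of the model itself; the names below are my own labels and claim nothing about actual brain physiology), it might look something like this:

```python
# A crude, purely illustrative sketch of the two-way flow onto the
# "world display screen" described above; names are illustrative only.

class WorldDisplay:
    def __init__(self):
        # The four subjective categories on the right of Figure 1.
        self.categories = {"memory loops": [], "ideas": [],
                           "self-perception": [], "concepts": []}
        self.screen = []  # the moment's "picture"

    def accept(self, datum):
        # Stand-in for the not-yet-understood selection that lets some
        # input "bounce off"; here we arbitrarily keep short items only.
        return len(str(datum)) < 40

    def veridical_flux(self, sensory_input):
        # Processed empirical data entering the screen from the left.
        accepted = [d for d in sensory_input if self.accept(d)]
        self.screen.extend(accepted)
        # Accepted data also enters the four categories (double-headed arrows).
        for items in self.categories.values():
            items.extend(accepted)
        return accepted

    def non_veridical_flux(self):
        # Imagined or reasoned results ("types", symbols) flowing back onto
        # the screen; here each category crudely forms a "type" of its
        # most recent datum.
        imagined = ["type: " + str(items[-1])
                    for items in self.categories.values() if items]
        self.screen.extend(imagined)
        return imagined


display = WorldDisplay()
display.veridical_flux(["red card", "chair pressure", "hum of the room"])
display.non_veridical_flux()
print(display.screen)  # the mixed veridical/non-veridical "picture" of the moment
```

The point of the sketch is only the shape of the flow: empirical data filtered onto the screen, echoed into the four categories, and then imagined “types” flowing back onto the same screen.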

 
From some considered input into our four categories of the brain come “commands” to the muscles and nervous system to create OUTPUT and FEEDBACK into the world outside us in addition to the autonomic nerve commands mentioned above, like the command to turn the page of the geometry text at which we are looking. Through reactive and reflexive actions, bodily communication (e.g. talking), and environmental manipulation (like using tools), resulting from these feedback outputs into the real world (shown at bottom left of Figure 1), we act and behave just as if there had been an autonomic reaction, only this time the action or behavior is the result of “thinking” or “consideration.” (The curved arrow labeled “Considered” leading to the muscles in Figure 1.)

 

Note how Figure 1 places epistemological and existential terms like CONSCIOUSNESS, Imagination, Knowing, Intention & Free Will, and Reason in place on the schematic, along with areas of the philosophy of epistemology, like Empiricism, Rationalism, and Existentialism (at the top of Figure 1). These placements are my own philosophical interpretations and are subject to change and placement alteration indicated by a consensus of professional and amateur philosophers, in conjunction with consensus from psychologists and brain physiologists, world-wide.
Figure 2 is a schematic of the “screen” of subjective perception that confronts us at every moment we see, hear, smell, taste, and/or touch. Figure 2 is again crudely schematic (like Figure 1), in this case devoid of the richness of the signals of our senses processed and displayed to our “mind’s eye.” Broad dashed arrows at the four corners of the figure represent the input to the screen from the four categories on the right of Figure 1 – memory loops, ideas, self-perception, and concepts. Solid illustrated objects on Figure 2 represent processed, veridical, and empirical results flowing to the screen from the left in Figure 1, and dashed illustrated objects on Figure 2 represent subjective, non-veridical, type, and algorithmic results flowing to the screen from the right in Figure 1. Thus Figure 2 defines the screen of our perception as a result of the simultaneous flow of both veridical and non-veridical making up every waking moment.

[Image: PerceptPic1]

Figure 1 — A Model of the Objectivity of Perception

 

(Mathematical equations cannot be printed in dashed format, so the solid equations and words, like History, FUTURE, Faith, and PRESENT, represent both veridical and non-veridical forms; note I was able to represent the veridical and non-veridical forms of single numbers, like “8” and certain symbols, like X, equals, and does not equal.) Thus, the solid lightning bolt, for example, represents an actual observed bolt in a thunderstorm and the dashed lightning bolt represents the “idea” of all lightning bolts observed in the past.

 

The “subjective trap” previously introduced above is defined and represented by the rule that nothing of Figure 1 can be seen on Figure 2, and vice-versa. In my “show-and-tell” presentation of this perception model encapsulated in both figures, I present the figures standing on end at right angles to each other, so that one figure’s area does not project upon the area of the other – two sheets slit half-height so that one sheet slides into the other. Again, a) Figure 2 represents my own individual subjective screen of perception no one else can see or experience; b) Figure 1 represents the only way I can describe someone else allegedly perceiving as I. I cannot prove a) and b) are true, nor can anyone else. I can only state with reasonable certainty that both someone else and I BEHAVE as if a) and b) are true. In other words, thanks to the common cultural experience of the same language, my non-color-blind friend and I in the room observing the red-painted card agree the card “is red.” To doubt our agreement that it is red would stretch both our limits of credulity into absurdity.

 
The model described above and schematically illustrated in Figures 1 and 2 can be seen as one way of describing the ontology of human beings, of describing human existence. Looking at Figure 1, anything to the left of the world display screen is the only way we know anything outside our brain exists and anything to the right of the world display screen is the only way we know we as “I’s” exist in a Cartesian sense; anything to the right is what we call our “mind,” and we assume we think with our mind; in the words of Descartes, “I think, therefore I am.” We see our mind as part of the universe being “bombarded” from the left, so we think of ourselves as part of the universe. Modern science has over the centuries given us some incredible ontological insights, such as all physical existence is made up of atoms and molecules and elementary particles; we can objectively or “scientifically” describe our existence, but we do so, as we describe anything else, with our subjective mind; we, as self-conscious beings, describe the veridical in the only way we possibly can – non-veridically. Thus, the model suggests the incredible statement made by scientists and philosophers of science lately. Recalling that atoms are created in the interior of stars (“cooked,” if you please, by nuclear fusion inside stars of various sizes and temperatures) that have long since “died” and spewed out their atoms in

[Image: PerceptPic2]

Figure 2 — A Model of the Subjectivity of Perception (The “Screen”)

 

contribution to the formation of our own solar system around 4.6 billion earth years ago, and recalling our bodies, including our brains, are made of molecules made from the atoms from dead and gone stars, the statement “We are ‘star-stuff’ in self-contemplation” makes, simultaneously, objective and subjective, or scientific and artistic, “spiritual sense.”

We can veridically “take in,” “observe,” “experience,” or “contemplate” anything from the vast universe outside our body as well as the vast universe inside our body outside our brain while at the same time we can imagine non-veridically limitless ways of “making sense” of all this veridical data by filing it, storing it, mixing it, and thinking about it, all within our brain. We are limitless minds making up part of a limitless universe.

 

As if that were not enough, each of us, as a veridical/non-veridical “package of perception,” is unique. Every human has a unique Figure 1 and a unique Figure 2. Our existence rests upon the common human genome of our species, the genetic “blueprint” that specifies the details of our biological existence. Yet, every individual’s genome is different from every other (even if only by 0.1%, a factor of 0.001), just considering that mutations even for identical twins make their two “blueprints” slightly different once the two organisms exist as separated zygotes in the womb. Moreover, how we behave, and, therefore, how we respond non-veridically to the veridical data we receive individually, even from the same environment shared by others, is shaped by the unique series of experiences each of us has had in our past. Hence, each person is a unique individual genome subjected to unique environmental experiences, an exact copy of which cannot possibly statistically exist.

 

The world display screen of an individual in any given moment has never been perceived before, nor will it ever be perceived again, as in the next moment the screen is modified by the dual flux: the veridical from the left and the non-veridical from the right in Figure 1. The life of an individual is a series of receiving this ever-changing dual flux and thinking or acting in the real world upon the basis of this dual flux; it is a series of two-way perceptions. The life of an individual is observed by another individual as a series of perceived behaviors assumed, but never proven, to be generated in the same way as those of the observer. All in the span of a human life is perception; to an individual human being, perception has to be everything.

 

This model suggests to me the absurdity of having objectivity and subjectivity irreconcilably separate; it suggests, rather, that they are inseparable; they go together like, in the words of the song, “horse and carriage” or “love and marriage.” The blending of objective data and imaginative concepts in our brain makes our perception, our conscious “everything,” or existence as a self-conscious being, if you please, possible. What we are is the veridical of our screen of perception; who we are is the non-veridical of the screen. In other words, the scientist is as potentially subjective as the poet, and the poet is as potentially objective as the scientist; they differ only in the emphases on the contents of their respective screens of perception. For the “two sides” of campuses of higher learning to be at “war” over the minds of mankind is absurd – as absurd as the impasse the political science major and I reached in conversation so many years ago.

 
If the above were all the model and its two figures did, its conjuring would have been well worth it, I think, but the above is just the tip of the iceberg of how the model can be applied to human experience. Knowing how prone we are to hyperbole when talking about our “brain children,” I nonetheless feel compelled to suggest this model of perception can be intriguingly applied to almost any concept or idea the human brain can produce – in the sense of alternatively defining the concept using “both worlds,” both the objective and the subjective, instead of using one much more than the other. In other words, we can define with this model almost anything more “humanly” than before; we can define and understand almost anything with “more” of ourselves than we’ve done in the past.

 

Take the concept of the human “soul” for example. It seems to me possible that cultures that use the concept of soul, whether in a sacred or secular sense, whether in the context of religion or psychology, are close to using the concept of the “mind’s eye” illustrated in Figure 1 of the model. The “mind’s eye” is the subjective “I,” the subjective observer of the screen, the “see-er,” the “smell-er,” the “taste-er,” the “hear-er,” the “touch-er,” the “feel-er” of perception; the soul is the active perceiver of subjective human experience. The soul defines self-consciousness; it is synonymous with the ego. This view is consistent with the soul being defined as the essence of being alive, of being that which “leaves” the body upon death. Objectively, we would say that death marks the ceasing of processing veridical data; subjectively, we would say that death marks the ceasing of producing non-veridical data and the closing of the “mind’s eye.”

 

Yet the soul is a product of the same physiology as the pre-conscious “body” of our evolutionary ancestors. In other words, the soul “stands upon the shoulders” of the id, our collection of instincts hewn over millions of years. So, in addition, we would objectively say that death also marks the ceasing of “following” our instincts physically and mentally; our unique, individual genome stops defining our biological limitations and potentialities. The elements of our body, including our brain, eventually blend to join the elements of our environment. Objectively, we would say death marks our ceasing to exist as a living being. The concept of the soul allows death to be seen as the “exiting” or “leaving” of that necessary to be called “alive.”

 
So, the concept of the soul could be discussed as the same as or similar to the concept of the ego, and issues such as when a developing human fetus (or proto-baby) develops or “receives” a soul/ego, an issue which in turn has everything to do with the issue of abortion, can be discussed without necessarily coming to impasses. (See my The ‘A’ Word – Don’t Get Angry, Calm Down, and Let Us Talk, [April, 2013] and my The ‘A’ Word Revisited (Because of Gov. Rick Perry of Texas), or A Word on Bad Eggs [July, 2013]) I said “could be,” not “will be” discussed without possibly coming to impasses. Impasses between the objective and subjective seem more the norm than the exception, unfortunately; the “two cultures war” appears ingrained. Why?

 
Earlier, I mentioned casually the answer the model provides to this “Why?”. The scientist/engineer and the artist/poet differ in their emphases of either the veridical flux to the world display screen or the non-veridical flux to the same world display screen of their individual brains. By “emphasis” I merely mean the individual’s assigning more importance to one flux direction or the other in his/her head. At this point, one is reminded of the “left-brain, right-brain” dichotomy dominating brain/mind modeling since the phenomenon of the bicameral mind became widely accepted. The perception model being presented here incorporates on the non-veridical side of the perception screen both analytical (left) brain activity and emotional (right) brain activity in flux to the screen from the right side of Figure 1. Just as my use of left/right in Figure 1 is not the same as the use of left/right in bicameral mind/brain modeling, this model of perception is not directly analogous to bicameral modeling. What the perception model suggests, in my opinion, is that the analytical/emotional chasm of the human brain is not as unbridgeable as the “left-brain-right-brain” view might suggest.

More specifically, the perception model suggests that the “normal” or “sane” person keeps the two fluxes to the world display screen in his/her head “in balance,” always one flux mitigating and blending with the other. It is possible that “insanity” is a domination of one flux over the other so great that the dominated flux is rendered relatively ineffective. If the veridical flux is completely dominant, the person’s mind is in perpetual overload with empirical data, impotent to sort or otherwise deal with the one-way bombardment on his/her world display screen; such a person would presumably be desperate to “turn off” the bombardment; such a person would be driven to insanity by sensation. If the non-veridical flux is completely dominant, the person’s mind is in a perpetual dream of self-induced fantasy, sensing with all senses that which is NOT “out there;” such a person would be driven to insanity by hallucination. In this view, the infamous “acid trips” of the 1960’s induced by hallucinogenic drugs such as LSD could be seen as self-induced temporary periods of time in which the non-veridical flux “got the upper hand” over the veridical flux.

This discussion of “flux balance” explains why dreams are depicted in Figure 1 as “hovering” just outside the world display screen. The perception model suggests dreams are the brain’s way of keeping the two fluxes in balance, keeping us as “sane” as possible. In fact, the need to keep the fluxes in balance, seen as the need to dream, may explain why we and other creatures with large brains apparently need to sleep. We need “time outs” from empirical data influx (not to mention “time outs” just to rest the body’s muscular system and other systems) to give dreaming the chance to balance out the empirical with the fanciful on the stage of the world display. Dreams are the mixtures of the veridical and non-veridical not needed to be stored or acted upon in order to prevent overload from the fluxes of the previous day (or night, if we are “night owls”); they play out without being perceived in our sleeping unconsciousness (except for the dreams we “remember” just before we awaken) like files in computer systems sentenced to the “trash bin” or “recycle bin” marked for deletion. Dreams can be seen as a sort of “reset” procedure that readies the world display screen for the upcoming day’s (or night’s) two-way flux flow that defines our being awake and conscious.

This model might possibly suggest new ways of defining a “scientific, analytical mind” (“left brain”) and comparing that with an “artistic, emotional mind” (“right brain”). Each could be seen as a slight imbalance (emphasis on “slight” to remain “sane”) of one flux over the other, or, better, as two possible cases of one flux mitigating the other slightly more. To think generally “scientifically,” therefore, would be when the non-veridical flux blends “head-on” upon the world display screen with the veridical flux and produces new non-veridical data that focuses primarily upon the world external to the brain; the goal of this type non-veridical focus is to create cause/effect explanations, to problem-solve, to recognize patterns, and to create non-veridically rational hypotheses, or, as I would say, “proto-theories,” or scientific theories in-the-making. Thus is knowledge about the world outside our brain increased. To think generally “artistically,” on the other hand, would be when the non-veridical flux takes on the veridical flux upon the world display screen as ancillary only, useful in focusing upon the “world” inside the brain; the goal of this type non-veridical focus is to create new ways of dealing with likes, dis-likes, and emotions, to evoke “feelings” from morbid to euphoric, and to modify and form tastes from fanciful thinking to dealing emotionally with the external world in irrational ways. Thus is knowledge about what we imagine and about what appears revealed to us inside our brain increased.

With these two new definitions, it is easy to see that we have evolved as a species capable of being simultaneously both scientific and artistic, both “left-brain” and “right-brain;” as I said earlier, the scientist is as potentially subjective as the poet, and the poet is as potentially objective as the scientist. We do ourselves a disservice when we believe we have to be one or the other; ontologically, we are both. Applying the rule of evolutionary psychology that any defining characteristic we possess as a species that we pass on to our progeny was probably necessary to our survival today and/or in our past (or, at minimum, was “neutral” in contributing to our survival), the fact we are necessarily a scientific/artistic creature was in all likelihood a major reason we evolved beyond our ancestral Homo erectus and “triumphed” over our evolutionary cousins like the Neanderthals. When we describe in our midst a “gifted scientist” or a “gifted artist” we are describing a person who, in their individual, unique existence purposely developed, probably by following their tastes (likes and dislikes), one of the two potentialities over the other. The possibility that an individual can be gifted in both ways is very clear. (My most memorable example of a “both-way” gifted person was when I, as a graduate student, looked in the orchestra pit at a production of Handel’s Messiah and saw in the first chair of the violin section one of my nuclear physics professors.) Successful people in certain vocations, in my opinion, do better because of strong development of both their “scientific” and “artistic” potentialities; those in business and in service positions need the ability to deal successfully and simultaneously with problem solving and with the emotions of colleagues and clientele. Finding one’s “niche” in life and in one’s culture is a matter of taste, depending on whether the individual feels more comfortable and satisfied “leaning” one way or another, or, being “well-rounded” in both ways.

Regardless of the results of individual tastes in individual circumstances, the “scientist” being at odds with the “artist” and vice-versa is always unnecessary and ludicrous; the results of one are no better or worse than those of another, as long as those results come from the individual’s volition (not imposed upon the individual by others).

 

From the 1960’s “acid rock, hard rock” song by Jefferson Airplane, Somebody to Love:

When the truth is found to be……lies!
And all the joy within you…..dies!
Don’t you want somebody to love?
Don’t you need somebody to love?
Wouldn’t you love somebody to love?
You better find somebody to love!

These lyrics, belted out by front woman Grace Slick, will serve as the introduction to two of the most interesting and most controversial applications of this perception theory. The first part about truth, joy, and lies I’ll designate as GS1, for “Grace Slick Point 1” and the second part about somebody to love I’ll designate as GS2.

Going in reverse order, GS2 to me deals with that fundamental phenomenon without which our cerebral species or any such species could not have existed – falling in love and becoming parents, or, biologically speaking, pair bonding. The universal human theme of erotic love is the basis of so much of culture’s story-telling, literature, poetry, and romantic subjects of all genres. Hardwired into our mammalian genome is the urge, upon the outset of puberty, to pair-bond with another of our species and engage, upon mutual consent, in sexual activity. If the pair is made of two different genders, such activity might fulfill the genome’s “real” intent of this often very complex and convoluted bonding – procreation of offspring; procreation keeps the genes “going;” it is easily seen as a scientific form of “immortality;” we live on in the form of our children, and in our children’s children, and so on. Even human altruism seems to emerge biologically from the urge to propagate the genes we share with our kin.

Falling in love, or pair bonding, is highly irrational, and, therefore, a very non-veridical phenomenon; love is blind. When one is in love, the shortcomings of the beloved are ignored, because their veridical signals are probably blocked non-veridically by the “smitten;” when one is in love, and when others bring up any shortcomings of the beloved, they are denied by the “smitten,” often in defiance of veridical evidence. If this were not so, if pair bonding were a rational enterprise, far fewer pair bonds would occur, perhaps threatening the perpetuation of the species into another generation. [This irrationality of procreation was no better defined than in an episode of the first Star Trek TV series back in the 1960’s, wherein the half human-half alien (Vulcan) Enterprise First Science Officer Spock (played by Leonard Nimoy) horrifically went apparently berserk and crazy in order to get himself back to his home planet so he could find a mate (to the point of hijacking the starship Enterprise). I think it was the only actual moment of Spock’s life on the series in which he was irrational (in which he behaved like us – fully human).]

GS1 is to me another way of introducing our religiosity, of asking why we are as a species religious. This question jump-started me on my “long and winding road,” as I called it – a personal Christian religious journey in five titles, written in the order they need to be read: 1) Sorting Out the Apostle Paul [April, 2012], 2) Sorting Out Constantine I the Great and His Momma [Feb., 2015], 3) Sorting Out Jesus [July, 2015], 4) At Last, a Probable Jesus [August, 2015], and 5) Jesus – A Keeper [Sept., 2015]. Universal religiosity (which I take as an interpretation of GS1) is here suggested as being like the universality of the urge to procreate, though not nearly as ancient as GS2. As modern humans emerged and became self-conscious, they had to socially bond into small bands of hunter-gatherers to survive and protect themselves and their children, and part of the glue holding these bands together was not only pair-bonding and its attendant primitive culture, but the development of un-evidenced beliefs – beliefs in gods and god stories – to answer the then unanswerable, like “What is lightning?” and “How will we survive the next attack from predators or the enemy over the next hill?” In other words, our non-veridical faculties in our brain dealt with the “great mysteries” of life and death by making up gods and god stories to provide assurance, unity, fear, and desperation sufficient to make survival of the group more probable. Often the gods took the shape of long-dead ancestors who “appeared” to individuals in dreams (At Last, a Probable Jesus [August, 2015]). Not that there are “religious genes” like there are “procreate genes,” but, rather, our ancestors survived partly because the genes they passed on to us tended to make them cooperative for the good of the group bound by a set of accepted beliefs – gods and god stories; that is, bound by “religion.”

The “lies” part of GS1 has to do with the epistemological toxicity of theology (the intellectual organization of the gods and god stories) – religious beliefs are faith-based, not evidence-based, a theme developed throughout the five parts of my “long and winding road.” On p. 149 of Jerry A. Coyne’s Faith vs. Fact, Why Science and Religion are Incompatible (ISBN 978-0-670-02653-1), the author characterizes this toxicity as a “metaphysical add-on….a supplement demanded not by evidence but by the emotional needs of the faithful.” Any one theology cannot be shown to be truer than any other theology; all theologies assume things unnecessary and un-evidenced; yet, all theologies declare themselves “true.” As my personal journey indicates, all theologies are exposed by this common epistemological toxicity, yet it is an exposé made possible only since the Enlightenment of Western Europe and the development of forensic history in the form of, in the case of Christianity, higher Biblical criticism. This exposé, in my experience, can keep your “joy” from dying because of “lies,” referring back to GS1.

Both GS1 and GS2 demonstrate the incredible influence of the non-veridical capabilities of the human brain. A beloved one can appear on the world display screen, can be perceived, as “the one” in the real world “out there,” and a god or the lesson of a god story can appear on the world display screen, can be perceived, as actually existing or as being actually manifest in the real world “out there.”

Putting GS1 in more direct terms of the perception model represented by Figures 1 and 2, non-veridical self-consciousness desires the comfort of understandable cause and effect as it develops from infancy into adulthood; in our brains we “need” answers — sometimes any answers will do; and the answers do not necessarily have to have veridical verification. Combining the social pressure of the group for conformity and cooperation, for the common survival and well-being of the group, with this individual need for answers, the “mind,” the non-veridical, epiphenomenal companion of our complex brain, creates a personified “cause” of the mysterious and a personified “answerer” to our nagging questions about life and death in general and in particular; we create a god or gods paralleling the created god or gods in the heads of those around us who came before us (if we are not the first of the group to so create). We experience non-veridically the god or gods of our own making through dreams, hallucinations, and other visions, all seen as revelations or visitations; these visions can be as “real” as the real objects “out there” that we sense veridically. (See At Last, a Probable Jesus [August, 2015] for examples of non-veridical visions, including some of my own.) Stories made up about the gods, often created to further explain the mysteries of our existence and of our experiences personally and collectively, combine with the god or gods to form theology. Not all of theology is toxic; but its propensity to become lethally dangerous to those who created it, when it is developed in large populations into what today are called the world’s “great religions,” and fueled by a clergy of some sort into a kind of “mass hysteria” (Crusades, jihads, ethnic “cleansings,” etc.), makes practicing theology analogous to playing with fire. As I pointed out in Jesus – A Keeper [Sept., 2015], epistemologically toxic theology is dangerously flawed. Just as we have veridically created the potential of destroying ourselves by learning how to make nuclear weapons of mass destruction, we have non-veridically created reasons for one group to try and kill off another group by learning how to make theologies of mass destruction; these theologies are based upon the “authority” of the gods we have non-veridically created and non-veridically “interpreted” or “listened to.” It is good to remember Voltaire’s words, or a paraphrase thereof: “Those who can make you believe absurdities can make you commit atrocities.”

Also remember, the condemnation of toxic theology is not the condemnation of the non-veridical; a balance of the veridical flux and the non-veridical flux was absolutely necessary in the past and absolutely necessary today for our survival as individuals, and, therefore, as a species. Toxic theology, like fantasy, is the non-veridical focused upon the non-veridical – the imagination spawning even more images without checking with the veridical from the “real world out there.” Without reference to the veridical, the non-veridical has little or no accountability toward being reliable and “true.” All forms of theology, including the toxic kind, and all forms of fantasy, therefore, have no accountability toward reality “out there” outside our brains. Harmony with the universe of which we are a part is possible only when the non-veridical focuses upon referencing the veridical, referencing the information coming through our senses from the world “out there.” This is the definition of “balance” of the two fluxes to our world display screens in our heads.

Comparing this balanced flux concept with the unbalanced one dominated by the non-veridical (remember the unbalanced flux dominated by the veridical is brain overload leading to some form of insanity), it is easy to see why biologist Richard Dawkins sees religiosity as a kind of mental disease spread like a mental virus through the social pressures of one’s sacred setting and through evangelism. Immersing one’s non-veridical efforts into theology is in my opinion this model’s way of defining Dawkins’ “religiosity.” In the sense that such immersion can often lead to toxic theology, it is easy to see the mind “sickened” by the non-veridical toxins. Whether Dawkins describes it as a mental disease, or I as an imbalance of flux dominated by the non-veridical, religiosity or toxic theology is bad for our species, and, if the ethical is defined as that which is good for our species, then toxic theology is unethical, or, even, evil.

To say that the gods and god stories, which certainly include the Judeo-Christian God and the Islamic Allah, are all imaginative, non-veridical products of the human mind/brain is not necessarily atheistic in meaning, although I can understand that many a reader would respond with “atheist!” Atheism, as developed originally in ancient Greece and further developed after the European Enlightenment in both Europe and America, can be seen as still another form of theology, though a godless one, potentially as toxic as any other toxic theology. Atheism pushing no god or gods can be as fundamentalist as any religion pushing a god or gods, complete with its dogma without evidence, creeds without justification, evangelism without consideration of the evangelized, and intolerance of those who disagree; atheism can be but another religion. Atheism in the United States has in my opinion been particularly guilty in this regard. Therefore, I prefer to call the conclusions about religion spawned by this perception model some form of agnosticism; non-veridical products of the brain’s imagination might be at their origin religious-like (lacking in veridical evidence or dream-like or revelatory or hallucinatory) but should never be seen as credible (called epistemologically “true”) and worthy of one’s faith, belief, and tastes until they are “weighed” against the veridical information coming into the world display screen; and when they can be seen by the individual as credible, then I would ask why call them “religious” at all, but, rather, call them “objective,” “scientific,” “moral,” “good,” or “common sense.” I suggest this because of the horrendous toxicity with which religion in general and religions in particular are historically shackled.

We do not have to yield to the death of GS1 (When the truth is found to be lies, and all the joy within you dies!); GS2 (Love is all you need, to quote the Beatles instead of Grace Slick) can prevent that, even if our irrational love is not returned. In other words, we do not need the gods and god stories; what we need is the Golden Rule (Jesus – A Keeper [Sept., 2015]). This is my non-veridical “take” on the incredible non-veridical capabilities encapsulated in GS1 and GS2.

Western culture has historically entangled theology and ethics (there is no better case in point than the Ten Commandments, about half of which have to do with God and the other half with our relationship to each other).  This entanglement makes the condemnation of theology suggested by this perception model of human ontology an uncomfortable consideration for many. Disentanglement would relieve this mental discomfort. Christianity is a good example of entangled theology and ethics, and I have suggested in Jesus – A Keeper [Sept., 2015] how to disentangle the two and avoid the “dark side” of Christian theology and theology in general.

Ethics, centered around the Golden Rule, or the Principle of Reciprocity, is clearly a product of non-veridical activity, but ethics, unlike theology and fantasy, is balanced with the veridical, in that our ethical behavior is measured through veridical feedback from others like us “out there.”  We became ethical beings similarly to our becoming religious beings – by responding to human needs.  Coyne’s book Faith vs. Fact, Why Science and Religion are Incompatible points out that in addition to our genetic tendency (our “nature”) to behave altruistically, recognize taboos, favor our kin, condemn forms of violence like murder and rape, favor the Golden Rule, and develop the idea of fairness, we have culturally developed (our “nurture”) moral values such as group loyalty, bravery, respect, recognition of property rights, and other moral sentiments we define as “recognizing right from wrong.”  Other values culturally developed and often not considered “moral” but considered at least “good” are friendship and senses of humor, both of which also seem present in other mammalian species, suggesting they are more genetic (nature) than cultural (nurture).  Other cultural values (mentioned, in fact, in the letters of the “Apostle” Paul) are faith, hope, and charity, but none of these three need have anything to do with the gods and god stories, as Paul would have us believe.  Still others are love of learning, generosity (individual charity), philanthropy (social charity), artistic expression of an ever-increasing number of forms, long childhoods filled with play, volunteerism, respect for others, loyalty, trust, research, individual work ethic, individual responsibility, and courtesy.  The reader can doubtless add to this list.  Behaving as suggested by these ideas and values (non-veridical products) produces veridical feedback from those around us that renders these ideas accountable and measurable (it is good to do X, or it is bad to do X).  What is good and what is bad is veridically verified, so that moral consensus in most of the groups of our species evolves into rules, laws, and sophisticated jurisprudence (e.g. the Code of Hammurabi and the latter half of the Ten Commandments).  The group becomes a society that is stable, self-protecting, self-propagating, and a responsible steward of the environment upon which the existence of the group depends; the group has used its nature to nurture a human ethical set of rules that answers the call of our genes and grows beyond this call through cultural evolution.  The irony of this scenario of the origin of ethics is that humans non-veridically mixed in gods and god stories (perhaps necessarily to get people to respond by fear and respect for authority for survival’s sake), and thereby risked infection of human ethics by toxic theology.  Today, there is no need of such mixing; in fact, the future of human culture may well hinge upon our ability to separate, once and for all, ethics from theology.

A final example of applying the perception model illustrated by Figures 1 and 2 for this writing is the definition of mathematics. Mathematics is clearly a non-veridical, imaginative product of the human brain/mind; this is why all the equations in Figure 2 need a “dashed” version in addition to the “solid,” as I was able to do for the single numbers like “8.” But why is math the language of science? Why is something so imaginative so empirically veridical? In other words, why does math describe how the world works, or, why does the world behave mathematically?

Math is the quintessential example of non-veridical ideas rigidly fixed by logic and consistent patterns; math cannot deviate from its own set of rules. What “fixes” the rules is its applicability to the veridical data bombarding the world display screen from the “real” world “out there.” If math did not have its utility in the real world (from counting livestock at the end of the day to predicting how the next generation of computers can be designed) it would be a silly game lodged within the memory loops of the brain only. But, the brain is part of the star-stuff contemplating all the other star-stuff, including itself; it makes cosmological “sense” that star-stuff can communicate with itself; the language of that communication is math. Mathematics is an evolutionary product of evolutionary complexity of the human brain; it is the ultimate non-veridical focus upon the veridical. Mathematics is the “poster child” of the balance of the two fluxes upon the world display screen of every human brain/mind. No wonder the philosopher Spinoza is said to have had a “religious, emotional” experience gazing at a mathematical equation on paper! No wonder we should teach little children numbers at least as early as (or earlier than) we teach them the alphabet of their native culture!

Further applications of the perception model suggest themselves. Understanding politics, economics, education, and early individual human development are but four.

I understand the philosophical problem that a theory which explains everything might very well explain nothing. But this perception model is an ontological theory, which necessarily must explain some form of existence, which, in turn, entails “everything.” I think the problem is avoided by imagining some aspect of human nature and culture the model cannot explain. For instance, my simplistic explanation of insanity as a flux imbalance may be woefully inadequate for those who study extreme forms of human psychosis. Artists who see their imaginations as more veridically driven than I have suggested might find the model in need of much “tuning,” if not abandonment. I have found the model personally useful in piecing together basic, separate parts of human experience into a much more coherent and logically unified puzzle. To find a harmony between the objective and the subjective of human existence is to me very objective (intellectually satisfying) and subjective (simultaneously comforting and exciting). The problem of explaining nothing is non-existent if other harmonies can be conjured by others. Part of my mental comfort comes from developing an appreciation for, rather than a frustration with, the “subjective trap,” the idea introduced at the beginning.

RJH
