Beyond Good and Evil

Dr. Ronnie J. Hastings


The “Problem” of Free Will

Perception Theory (Perception is Everything, [Jan., 2016]) describes human existence as a perpetual juxtaposition of empirical sense data from the veridical, “real,” objective world outside our brains with imagined data (concepts, ideas, and orders) from the non-veridical, epiphenomenal subjectivity inside our brains — all projected upon our world-view “screen” (perceived by the mind’s “eye”), upon which we simultaneously perceive what we “see” from the real world and what we “see” with our imagination.  (Again, see Perception is Everything, [Jan., 2016])  Clearly, the areas of philosophy emphasized by Perception Theory are ontology and epistemology.

Almost any extended discussion of human ontology and epistemology sooner or later gets around to the topic of “free will,” the problem of whether we have discretionary powers over what we think and do, or whether we are slaves to the laws of physics, chemistry, and biochemistry, such that any such discretionary powers are delusional.  Do we have free will or not?

It seems reasonable that Perception Theory can answer the question of free will and thereby “solve” the problem of free will.

In Perception is Everything, [Jan., 2016] the “subjective trap” is defined as the impossibility of an individual’s seeing both the perception of something like “red” on our world screen inside our heads and the biochemistry within the neurons of our brain we know to be responsible for causing the perception “red” on our screen.  This impossibility leads to our assuming without proof that our perception of anything is just like someone else’s perception of the same thing.  Were we to look inside the head of that someone else perceiving red, we would see only his/her biochemistry of red, not his/her perception of red.  Hence, because of the subjective trap, we ASSUME others’ perceptions are as our perceptions, but there is no way of justifying that assumption in a scientific, objective way; we justify the assumption only in a practical, utilitarian way — communication among all of us seems to be possible when we make this assumption.

Is free will assumed similarly, as are the perceptions of others?  If so, it would have to be an assumption within and about the individual mind, not an assumption about the perceptions of others.  Let’s say I am on a pleasant walk among a park’s many walkways and I come to a two-pronged fork in the path of equally appealing potential pathways, and, to all appearances, including my own, I CHOOSE one of the two paths and continue my walk.  Did I choose of my own free will?  A proponent of the objective, deterministic view might argue that all my previous experience, if known, would predict with certainty which path I would choose, and that only because I cannot command from my memory ALL my experiences (if I could, my brain would be flooded to insanity with stored empirical data), I delude myself into thinking I flippantly, “for-no-reason,” “just-because-I-feel-like-it,” or randomly chose which path to take; in other words, I do not have free will, but lack the capacity to realize I do not; my choosing is illusory.  A proponent of subjective free will might just as well argue that I have complete discretion between the two possible states of walking one path or the other.  Even if my past experiences incline me toward my left or right, with each new decision I am free to choose either way in disregard of my tendencies, without having to justify that decision to anyone, including myself.  “Choosing without thinking about it” is a hallmark of my exercising what everyone is assumed to have, a free will.  But, just as the objective argument admits the futility of knowing all the assumed factors that “determine” the illusion of free will, the subjective argument irresponsibly assumes a “freedom” of choice that ignores all the physical laws to which the complexity of the brain and its epiphenomenal mind are subject.  Note how both arguments employ non-demonstrable assumptions, implying free will is not demonstrable without such assumptions.

Perception Theory, an admitted blend of the objective and the subjective (Perception is Everything, [Jan., 2016]), suggests both arguments are useful in solving the problem of free will.  The patterns of empirical data that demand strong veridical resonance of the mind with the “outside” world compel science and medicine to conclude all causes and effects, including our apparent free will, are understandable in terms of particles, fields, and energy.  Yet these particles, fields, and energy are creations, or concepts, or imagined orders of the subjective mind.  (The epistemological “bottom line” of particles, fields, and energy existing outside our brains (mind) is that when we observe external to ourselves as objectively as possible [scientifically], we have to say the universe outside us behaves AS IF all the universe is made of particles, fields, and energy.)  We know how these particles, fields, and energy can demonstrate and explain physical phenomena throughout the universe, but we do not know how they can be used (yet) to demonstrate how empirical data and previously stored ideas can produce veridical and non-veridical projections upon our world screen of perception in our heads.  Similarly, particles, fields, and energy cannot (yet) demonstrate that free will is not “free” at all.  On the other hand, the “freedom” of the subjective argument cannot be truly free, as our perceptions ultimately are products of “star-stuff” just as much as our brain and body are, and star-stuff is bound by the universe’s demonstrable laws of physical science and life science.

What is suggested by Perception Theory, then, is that just as it is logically impossible for a person to simultaneously experience both the objective biochemistry producing “red” in her neurons and her non-veridical (subjective) perception of red, it is logically impossible for free will to be both completely deterministic and completely without empirical cause.  In other words, when I appear to exercise free will at the fork of paths I cannot assume my choice is determined, NOR can I assume I have exercised any kind of free will.

So what is free will, given the logical impossibilities and forced assumptions of both free will’s detractors and proponents?  What suggests itself to my mind as a trained physicist is that free will is just like light.  When you ask a physicist whether the nature of light is waves or particles, the answer is “both; it depends upon how light is measured or observed.”  Similarly, free will is neither determined nor undetermined.  “Free will” has to be a non-veridical concept, but not a scientific one trying to explain the veridical world outside our brain.  Rather, free will is a concept trying to explain human choice or volition, a behavior of possibilities, just like human love is a behavior of possibilities.  Gravity is a concept that can take on objectivity; free will, like any other human psychological concept, cannot, as DEFINITIVE SELF-STUDY CANNOT BE AS OBJECTIVE AS DEFINITIVE STUDY OF OUTSIDE THE SELF.  When we study the star-stuff that is us, we cannot escape ourselves, so we cannot ever see ourselves as if we were outside ourselves; we cannot see ourselves objectively like the subjects of physical science.  This is why physics is considered a “hard” science, while psychology is considered a “soft” science.  It is as if the study of our minds has a built-in, unavoidable uncertainty principle, like Heisenberg’s uncertainty principle of quantum mechanics.  Just as light can behave differently in different cases, the exercise of our free will can appear deterministic in some cases and wildly free in others.  Two different observers of my choice at the fork of paths could describe my exercise of “free will” differently.  One might say he predicted my choice and the other might say my choice looked completely random to her.  Neither could measure the “amount” of free will I exercised, and neither could I.  I could recall my choice later as one of conscious or unconscious deliberation, or as one of complete obliviousness to either path, or as one somewhere in between.
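For readers who want the physics being borrowed as an analogy, Heisenberg’s uncertainty principle is the statement that a particle’s position and momentum cannot both be pinned down at once; the product of their uncertainties has a floor set by Planck’s constant:

$$ \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2} $$

The analogy claimed here is only that: a reminder that some kinds of “measurement” come with built-in limits, not a claim that the mind literally obeys this inequality.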

All this uncertainty and lack of objective definition suggests that free will is a rationalization of convenience arrived at in the minds of humans over thousands of years to obtain the mental comfort of explaining particular human behavior in the act of choosing.  Free will is a psychological balm soothing the discomfort of trying to answer “Why did I do that?”, or “Why did he do that?”, or “Why did she do that?”  The real answer, down to the neuron, is, like education, too complicated to understand entirely.  The non-veridical concept of “free will” or “lack of free will” is assumed as a practical vehicle toward understanding human behavior.  Free will, like concepts of gods or god stories, is a practical and illogical explanation that conveniently and more easily explains behaviors without our having to take the trouble to study them objectively; free will makes dealing with human choices efficient.  Free will is an unconscious assumption of the human mind passed on generation to generation, directly or indirectly.

So, who is right when it comes to free will, the objective proponent or the subjective proponent?  Both.  Who is wrong when it comes to free will, the objective proponent or the subjective proponent?  Both.  The “problem” of free will is not a problem at all.

 

Yet, any impasse about free will implied by the foregoing discussion is not a “hard” impasse like the subjective trap in Perception is Everything, [Jan., 2016].  Progress can be made toward understanding free will by, first, dropping the “free” part and just talking about “will,” or human volition.  So my choice of paths in the example above becomes a discussion of my choice as a product of my personal volition in that moment.  Next, one’s volition, or will, can be seen as a well-developed psycho-physiological behavior practiced inside the individual from the early days of infancy, if not before in the womb (See “I.  Development of Self-Consciousness in a Human Infant” in Perception Theory (Perception is Everything) — Three Applications, [Feb., 2016]).

Part of human self-consciousness is the awareness that we can willfully do or think things just by employing an “I want to…” in our mind.  In my opinion, the “feeling,” perception, genetic tendency, or epiphenomenal “extra” of self-consciousness telling us we can will any action or thought of our own free will is one of many important evolved results of the “Cognitive Revolution” that occurred in our species, according to Harari (Sapiens and Homo Deus), between 70,000 and 12,000 years ago, before the Agricultural Revolution.  Clearly, our conviction we have a will that we control had, and probably still has, survival value — a trait “favored” by our physical and cultural evolution.  Perception Theory emphasizes that, as our self-consciousness developed, probably around and within the Cognitive Revolution, our imaginations developed the ability of perceiving ourselves independent of our present setting.  That is, we could imagine ourselves not only in the present, but also in the past or in the future.  Imagining ourselves in this way naturally includes imagining ourselves doing or thinking something in the present, past, or future.  The logical explanation of the cause of our doing or thinking something independent of setting is having the ability to command the thoughts and actions of our imagination; it is logical to us that we have a will “barking orders of our judgement or whimsy” within our imagination.  And it is logical to us because we’ve been exercising that will since we were infants, according to our imagination. (Perception Theory (Perception is Everything) — Three Applications, [Feb., 2016])  We can easily imagine all infants, including ourselves when we were one, for the first time reaching out with a hand to touch or grasp some object that is not part of their body; the baby “wanted to” or “willed” himself/herself to touch or grasp.

Not only can “will” be seen as a natural evolutionary development in our heads, it can be seen, thanks to modern science, as subject to the statistics and probabilities of the complicated.  In the wake of the revolutionary development of the Kinetic Theory of Matter, wherein all matter (including our bodies and our brains) is seen as composed of countless particles called atoms or clusters of atoms, molecules, statistical mechanics was developed in place of Newtonian mechanics, which had “no prayer” of describing countless masses moving and colliding with each other.  Statistical measurements, such as temperature, were defined to represent an average value of kinetic energy for all the masses, which tells you nothing about the value for a single particle.  Moreover, the scale of atoms and molecules is quantum mechanical, meaning the mechanics are quantum, not Newtonian.  Hence, interactions on an atomic scale, such as the biochemical events behind the firing of a neuron in the brain, are statistical and quantum, not biological in scale and behavior.  In other words, our brain-based non-veridical “mind” exists because of countless neurons (brain cells) quantum mechanically interacting in accordance with biochemistry; just as the “well-defined” big-scale images on our TV screens are produced by atomic-level, quantum solid-state circuitry understood in terms of electrons so tiny they can only be “seen” indirectly, our “well-defined” imagined images on the world perception screen in our heads are produced by atomic-level, quantum biochemistry within neurons understood in terms of the same electrons.  And all quantum phenomena are “fuzzy,” not fixed, subject to statistical fluctuations and unavoidably described in uncertain probabilities; the appearance of certainty on the scale of our bodies (big-scale) is the statistical mean of atomic “outputs” filtered by our averaging senses to a single result.  When we perceive “red,” the probability that we are perceiving data similar to previous perceptions of red is high, but, statistically, the perception can never, ever be exactly the same, because the same exact set of electrons, atoms, and molecules that produced the previous perception is not available to produce the next; our big-scale senses only deliver the average of countless atomic-level inputs from incoming light data, processed and averaged biochemically by our retinal cells and optic-nerve cells.  Imagine how “averaged” must be the non-veridical images on our world screen!  Our “feelings,” perceptions, and convictions are our big-scale utilitarian “averaging” of unimaginably numerous and unfathomably complicated quantum behaviors of the atomic-level particles making up our brain.  And each “averaging,” it stands to reason, can never be repeated in detail.  Equally reasonable is the assumption that the averaging only has to be accurate enough to “get us by,” to assure that we survive as a species.
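The statistical point can be made concrete with a toy simulation (a minimal sketch of my own, not part of Perception Theory; the particle count and the exponential energy distribution are simply convenient assumptions).  The big-scale average is stable and repeatable, while the underlying microstate, the exact list of individual energies, is never the same twice:

```python
import random

# Toy model: N "particles," each with a random kinetic energy drawn from an
# exponential distribution (a stand-in for a thermal distribution).
N = 100_000

def sample_energies(n):
    return [random.expovariate(1.0) for _ in range(n)]

run1 = sample_energies(N)   # one "instant" of the system
run2 = sample_energies(N)   # another "instant"

print(f"average energy, run 1: {sum(run1) / N:.4f}")   # ~1.0
print(f"average energy, run 2: {sum(run2) / N:.4f}")   # ~1.0, nearly the same
print(f"first particle, run 1: {run1[0]:.4f}")         # fluctuates wildly...
print(f"first particle, run 2: {run2[0]:.4f}")         # ...from run to run
print("identical microstates?", run1 == run2)          # effectively never True
```

The two runs report nearly identical averages, the temperature-like big-scale number, even though no two runs ever reproduce the same microscopic detail.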

Our “will” is a self-imposed, evolutionary, imagined property describing our subjective “self,” the epiphenomenal result of the long-ago origin of self-awareness and self-consciousness.  It is a psychological, positive, mental “crutch” to attribute to ourselves the ability to conjure actions and thoughts; it is basic to our self-confidence.  There is, however, as best we know, no reason to call it “free.”

Further ontological insight into “will” can only be possible through future understanding, via scientific research, of how the physical, veridical brain can produce epiphenomenal, non-veridical perceptions.  The same research will perhaps make progress toward understanding and, maybe, redefining (“overcoming”) the subjective trap.  Though obviously useful, Perception Theory can be improved with better models and metaphors than veridical, non-veridical, world-view screen, etc.  Building a better theory seems necessary toward better understanding “will” and the subjective trap.

 

RJH

 

Toward an Imagined Order of Everything, Using AVAPS

Perception Theory (Perception Is Everything, [Jan., 2016]; Perception Theory (Perception Is Everything) — Three Applications, [Feb., 2016]; and Perception Theory: Adventures in Ontology — Rock, Dog, Freedom, & God, [March, 2016]) defines human existence in terms of the products of our imagination, products formed by the non-veridical, subjective mind mixing veridical, empirical raw data from our senses with previously formed non-veridical subjective ideas, concepts, and perceptions. These products “appear” on the world display “screen” of our mind’s consciousness (Figure 1 in Perception Is Everything, [Jan., 2016]). These products can be conveniently classified as “imagined orders,” after Yuval Noah Harari (author of Sapiens, A Brief History of Humankind and Homo Deus, A Brief History of Tomorrow).  Any products of the human mind that have been shared partially or wholly across the species throughout cultural history can be called imagined orders, such as plans, ideas, conceptions, inductions, deductions, scientific theories, political theories, economic theories, philosophies, religions, and ideologies of all ilks.  Since Perception Theory postulates that “Perception is everything” and since all perceptions are products of the non-veridical imagination, it follows that Perception Theory itself is an imagined order.

Using anthropology, archaeology, and history as forensic sciences, directions of human betterment and human progress can be ascertained by comparing the historical effects of different imagined orders across time.  In other words, some imagined orders are better than others, measured in benefits to the species; we need to follow the directions suggested by the “better” imagined orders.  In AVAPS! [May, 2018] it was suggested the “better” imagined orders were those as veridical as possible; in other words, the “better” imagined orders resonated strongly with the veridical, “real” world.  For example, the toxic theology attributed to all religions based upon gods and god stories (Perception Is Everything, [Jan., 2016] and Perception Theory: Adventures in Ontology — Rock, Dog, Freedom, & God, [March, 2016]) is not one of the better imagined orders upon which we should base future imagined orders.  In his song “Imagine,” John Lennon was right to suggest we should imagine no religion.

 

Perception Theory not only came to use Harari’s terminology; it originally grew out of questions that took years of off-and-on reading to resolve in my head, questions like:  “What were the major historical events contributing to the modern world?” (The Big Picture, [Sept., 2011]); “Is the United States a Christian nation?” (The United States of America — A Christian Nation?, [June, 2012]); “Why did the US-like ideals in France devolve during the French Revolution into the Terror?” (Sticks and Stones May Break Our Bones, But Words We Don’t Know Can Also Hurt Us, or, Jesus Was a Liberalist, [March, 2012]); “Why was I never in my 40-year teaching career (within both public and private schools) intellectually reconciled with the educational system I was supposed to be a part of?” (What is Wrong With Public Education…and What To Do About It, [April, 2012], What is Wrong With Public Education…Briefly Revisited, [April, 2012], 1: Education Reform — Wrong Models!, [May, 2013], 2: Education Reform — The Right Model, [May, 2013], 3: Education Reform — How We Get the Teachers We Need, [May, 2013], Top Ten List for Teachers of HS Students Preparing for College or University (Not a Ranking) – A List for Their Students, Too!, [Dec., 2014], and “Campusology” at Texas A&M and in Education 6-12, [Nov., 2016]); “Why am I so critical of American political conservatism?” (Citizens (I) Call For the Destruction of the Political Professional Class, [Nov., 2012], Citizens (II) The Redistribution of Wealth, [Jan., 2013], Citizens (III) Call for Election Reform, [Jan., 2013], An Expose of American Conservatism — Part 1, [Dec., 2012], An Expose of American Conservatism — Part 2, [Dec., 2012], An Expose of American Conservatism — Part 3, [Dec., 2012], Some Thoughts on Trump’s Election, [Nov., 2016], Dealing with Donald, or, A Citizen’s Survival Guide for Trump’s Apparent Presidency, [Dec., 2016], 21st Century Luddites?, [March, 2017], 21st Century Tories?, [March, 2017], and Egalite: A Qualified Virtue, [Feb., 2018]); “How did Christianity (and by implication other ‘world’ religions) come about?” (Sorting Out the Apostle Paul, [April, 2012], Sorting Out Constantine I the Great and His Momma, [Feb., 2015], Sorting Out Jesus, [July, 2015], At Last, a Probable Jesus, [August, 2015], and Jesus — A Keeper, [Sept., 2015]); “What are the historical and political effects of globalization?” (Going Global, [March, 2018]).

The results of reading summarized in the above posts indicate the possibility of talking about an “imagined order of everything,” or “universal imagined order,” or “global imagined order” made of component imagined orders seen as “good” for mankind and devoid of imagined orders shown by anthropology, archaeology, and history to be “bad” for mankind.  Indeed, is it possible to imagine such a universal order?  Is the indication valid?  What follows is an attempt to answer “yes.”  Many of the posts cited above correspond to “good” component imagined orders making up parts of the universal imagined order.

So far, Perception Theory, as developed by the above sources, suggests the global imagined order should include the following component imagined orders (in no hierarchical listing):  a) ethical, b) political/social, c) economic, d) ecological/environmental/agricultural,  e) educational, and f) scientific.

The imagined structure of the global imagined order has to be applicable to all humankind all over the globe, as well as to all humankind who will in future leave the planet to live and work in outer space, and, epistemologically, the components of the global imagined order must not conflict with or contradict each other, just as is the case today in modern science; the physical sciences do not say one thing while the life sciences say another, conflicting, contradictory thing.  The inclusive group of all of us will be thought of as the “ultimate family,” and the components of the global imagined order must also be inclusive, compatible, and cooperative.

 

a) Ethically, individuals need to relate to each other via the Golden Rule, the Principle of Reciprocity — like the philosophy of the ethical teachings of Jesus (Jesus — A Keeper, [Sept., 2015]).  As Jesus — A Keeper, [Sept., 2015] points out, many other thinkers throughout human history — both sacred and secular — before and after the beginnings of Christianity, taught the ethics of the Golden Rule, or the Principle of Reciprocity.  Because the Principle of Reciprocity is its own reward, no in-life or afterlife punishment need be taught to young minds.  For this reason, and for the sake of avoiding hurting each other due to non-veridical epiphenomenal overload in individual minds, all supernatural gods and god stories should be phased out. (Perception Theory: Adventures in Ontology — Rock, Dog, Freedom, & God, [March, 2016])  John Lennon in “Imagine” sang of not only imagining no religion, but also “no hell below us and above us only sky.”

Harari classifies as “religion” any ideology (non-veridical concept) that can bring together a human group of roughly 150 or more to agree upon a common purpose or action.  He therefore goes on to say that the “religion” of the enlightened West is liberal humanism, wherein the feelings and insights of the individual are supreme, replacing gods and god stories. (For comparison he reminds us of evolutionary humanism, the ideology or “religion” of fascism — which lost out in WWII — and of social humanism, the ideology or “religion” of communism — which collapsed beginning in 1989.)  I prefer to relegate “religion” to any ideology involving gods and god stories; animism and any thought system involving “spirits” (imagined non-veridical concepts) are also relegated to “religion.”  Any form of humanism is, at best, an ethical ideology, in that it attempts to suggest how we should behave toward each other as members of our species.  Therefore, my choice of Jesus’ (and others’) teachings of the Golden Rule could be considered humanistic.  However, I prefer to divorce “religion” from both “ethics” and “humanism.”

All religion, with its gods and god stories, is based upon the dangerous and deplorable “us-them syndrome,” which sooner or later fosters animosity between believers and non-believers.  This syndrome dooms all theologies to toxicity.  As Diderot said, “Sooner or later the moment comes when the concept [of God] that prevented the theft of one ecu [French coin of face value of about $30] causes the cutting of the throats of a hundred thousand men.” [brackets mine]

Ethics fosters no “us-them syndrome.” (Jesus — A Keeper, [Sept., 2015])  And to me the Principle of Reciprocity is the ethics for us all.

This is not to say that religion and its accompanying theology, as I am defining it, will not be part of human culture eventually.  Being religious is a genetic tendency “built in” by our evolutionary past, but it has become unnecessary to our survival, as other assurances have been developed by our minds that contribute reliably to our survival (e.g. science and medicine).  Therefore, religion is relegated to the individual mind henceforward; theology is limited to the individual, thanks to the subjective trap (Perception Is Everything, [Jan., 2016]).  Religion, with its theology, gods, and god stories, is a personal matter for the single member of the species.  I have my own personal theology, for instance, and can say, along with Thomas Jefferson, “I am a sect of one.” (The United States of America — A Christian Nation?, [June, 2012], Jesus — A Keeper, [Sept., 2015], Perception Theory (Perception is Everything) — Three Applications, [Feb., 2016], I Believe!, [Oct., 2016], Hope and Faith, [Jan., 2017], and Prayer, [Feb., 2017])

b) Regarding political and social organizing of the human species, whatever fosters war, colonialism, or imperialism in any form must be avoided.  No grouping of humans must advance itself at the expense of another; exploitation of one nation by another must cease.  The imagined order of egalitarianism must be expanded so that nations cease to be independent of all other nations; we are all stuck on the same planet with, at this time, no alternative; this earth is all we’ve got.  Therefore, egalite must be expanded from egalite only among citizens of a single nation to egalite of every Homo sapiens on the planet  (Sticks and Stones May Break Our Bones, But Words We Don’t Know Can Also Hurt Us, or, Jesus Was a Liberalist, [March, 2012], and Egalite:  A Qualified Virtue, [Feb., 2018]).

The imagined order of the UN needs expanding into a more global UN composed of every nation, nations which cease to have political borders.  All military forces of each nation join the single global UN force for the purpose of keeping the peace worldwide and of responding to human need created by natural disasters anywhere in the world.  Similar to the way individual States in the United States relate to the national federal government, all nations relate to the global government, with responsibilities, resources, and money separated into regional and global designations.  The global government will be a republic both capitalistic and representative similar to those imagined at the births of the American Republic and the French Republic (Sticks and Stones May Break Our Bones, But Words We Don’t Know Can Also Hurt Us, or, Jesus Was a Liberalist, [March, 2012], The United States of America — A Christian Nation?, [June, 2012], For Your Consideration, I Give You…..Tom Paine, [August, 2014], and Egalite:  A Qualified Virtue, [Feb., 2018]).  No nation needs its own militia anymore, as danger to one UN member is danger to all; the peace-keeping global UN force, with no peer anywhere, will assure the protection of life, liberty, property, and rights the world over.

Health care, education, and housing will be provided by the global UN.  (Members in health care will be on a worldwide payroll, supported by worldwide competitive drug manufacturers, cutting-edge medical schools all over the earth, and globally reviewed medical research.)  The legacy of both UNICEF and UNESCO will be strengthened and widened.  Suffrage, the right to vote, will truly be universal.  The whole world will democratically vote to determine what behaviors are deemed criminal enough to deny individuals such rights as freedom and the vote.

The chamber of world representatives as well as the head of the executive part of the world government (a President, General Secretary, Prime Minister, etc.) shall be elected for finite terms by a democratic worldwide vote (not by electors).  A world court shall be periodically re-formed from a cadre of elected judges (judges-in-waiting) from each former-sovereign-nation, or nation-state.  The court shall be appointed by a vote from the chamber of representatives (Congress, Parliament, Convention, Assembly, Althing, etc.) and shall preside over and settle all disputes between or among nation-states.  All three branches of the world government, the legislative (chamber of world representatives), the executive, and the judicial (world court), shall be subject to limited terms, ceilings for years of service, and prohibitions against personal gain beyond their salaries.  Conviction of accepting bribes, of accepting payments/perks from lobbyists (whether corporate or political), or of committing criminal or civil offenses shall result in immediate termination and swift replacement by the germane nation-state government.

All nation-states will be required to limit campaign and election time for choosing members of all three branches of world government to one year or less.  (Citizens (I) Call For the Destruction of the Political Professional Class, [Nov., 2012] and Citizens (III) Call for Election Reform, [Jan., 2013])  In addition, within every nation-state, campaign contributions must have a universal limit per person and must come only from individuals, not corporations or political organizations.  Violations of these campaign contribution rules will result in the candidate’s expulsion from the race.

c)  The economic organization of the global UN implies a global economic system — a worldwide capitalism regulated to create both capital to build business and personal wealth.  Taxes on personal income and investment requirements will be structured to limit personal wealth, assuring capital will be reinvested into economic growth. (Citizens (II) The Redistribution of Wealth, [Jan., 2013])  Businesses will have incentives to operate in partnership with their employees (mandatory employee stock ownership and a mandatory retirement fund for all employees), so that all within a business have the same incentive to succeed.

Worldwide trade will be the primary modus operandi to ensure perpetual world peace.  War to any degree hurts everyone; cutting off trade is the least of the ways it hurts (death and maiming being the greatest), yet, for the species at large, it is probably the most important way.  (Going Global, [March, 2018] and 21st Century Luddites?, [March, 2017])  All economic barriers will come down; there will be no need for tariffs.  There will be a worldwide currency, similar to that in the European Union.  All stock markets will operate in resonance, as if at one single site, as world trade makes every regional economy a business partner with the rest of the world.  Highways on the land, sea, and air will perpetually be filled with exchanged goods.  Hunger, disease, and poverty will become things of the past (like smallpox, polio, and yellow fever) through trade.

d) Ecologically, environmentally, and agriculturally speaking, the home to all of us, the earth, needs to be treated as our one and only hope and treated holistically.  I’m not talking about a cult-like worshiping of our planet as some living Gaia, but, rather, about the development of a worldwide respect for not only the biosphere, but also the great oceanic and geological processes that make our existence possible.  This respect is admittedly teleological, even selfish, as we have to use this planet to generate all the sustenance our species and our fellow species need both now and in the future.

Therefore, agriculture must be guided by environmentalism and ecology, as suggested by the warnings of both Harari and Mann (1491 and 1493).  The vision of thinkers like Michio Kaku must engage the thinkers and planners of the world government.  The world government has to allocate its efforts and resources toward making the land, sea, and air more productive without placing more of our fellow species (both plants and animals) on the endangered list.  Projects converting sea water into fresh water should dominate most future seashores.  Turning the Sahara and other world deserts green should become more feasible.  Ocean shallows should become underwater farms.  Orbiting agricultural stations wherein food is perpetually grown in ideal conditions to feed the entire planet should become commonplace.  In addition, synthetically produced food, such as animal tissue, should be grown in “giant test tubes,” with the goal of not having to eat our domesticated sources of meat; genetic engineering is just as important in agriculture as it is in human medicine.  Synthetically produced food, especially large-scale synthetically produced animal protein, can mean the land now needed for pasture can mostly be turned back to natural processes, producing through evolution more of the genetic vigor needed for the future.

As I said in  AVAPS! [May, 2018], “The world needs more marine biologists, not more missionaries!”

e)  Education needs to become an egalitarian worldwide phenomenon, particularly the education of young minds as practiced in American public schools (Egalite:  A Qualified Virtue, [Feb., 2018]).  This means educational funds for the entire world will come from taxation of personal property in all nation-states, distributed fairly to all nation-states by an educational arm of the world government.  However, public education as practiced worldwide must be freed from “professional educators” and run as undergraduate and graduate college and university faculties are run, exemplified by such faculties in the United States.  (1:  Education Reform — Wrong Models!, [May, 2013], 2:  Education Reform — The Right Model, [May, 2013], 3:  Education Reform — How We Get the Teachers We Need, [May, 2013], Top Ten List for Teachers of HS Students Preparing for College or University (Not a Ranking) – A List for Their Students, Too!, [Dec., 2014])  A system of public schools from kindergarten level to grade 12 and at least one major four-year research college or university will be established in every nation-state, if not already in place.  Through school taxes in every nation-state, the education of each child from kindergarten through four years of university shall be offered free of charge (provided the student successfully fulfills the requirements of each previous level in college).  A nation-state’s shortfall in providing such free education to a qualified student will be made up from a world education fund managed by the world government and contributed to annually by all nation-states as part of “membership dues.”

The worldwide curriculum used by all the planet’s schools will feature general physical and cultural anthropology, which will be focused on the cultural history of the particular nation-state.  World history touching upon the cultural histories of every nation-state will be taught in every nation-state.  The language of each nation-state will be taught locally, but the languages designated as “world languages” (how many?) by the world government will be taught in every nation-state.  (Presumably, these world languages, like the languages chosen in the UN today, will be the official languages used in the world government.)  All sciences and mathematics will be taught via a worldwide curriculum; math is treated as the “language of the universe.”  Engineering will have a local focus within a nation-state, along with an engineering curriculum of worldwide scope.  Philosophy curricula will have their universality supplemented by the works of local philosophers within each nation-state.  As part of the worldwide philosophy curriculum, comparative culture over time, including comparative religion, will be offered.

Cooperative research at the university level, which would inevitably be international cooperative research, will emphasize dealing with the challenges of climate change, of artificial intelligence, and of mankind traveling into space.  Architecture, also a worldwide endeavor, will work on novel housing for a presumably increasing global population — housing able to adapt to possible rises of ocean levels; living under the surface of the oceans as well as in space colonies in orbit, on the moon, on Mars, on moons of the gas giants, in interplanetary space, and in interstellar space will be worldwide endeavors.  Funding for all this research will come from local nation-state and worldwide dues contributed to the world education fund, not to mention research grants from corporations.

A given student’s education toward a college or university degree will normally be peppered with study programs abroad in other nation-states and with opportunities throughout to develop artistic and athletic skills.  Academic contests, art expositions, and athletic contests among teams of students from all nation-states will be preludes to worldwide Olympic-style events that include not only athletics, but academics and the arts as well.  With sponsorship from their native nation-states, outstanding performers could become professionals in these areas, expanding the number of such professionals beyond today’s.  A worldwide educational system will provide stage and lighting for ever-amazing intellectual and physical achievement.

f)  Science and math requirements will characterize every level of every student’s education in a worldwide educational system.  The philosophical assumptions and underlying concepts of science and math permeate the philosophical studies of epistemology, ontology, ethics, and anthropology.  Children learn to count as soon as they learn to speak and read; children learn to test, experiment, and answer their own questions as soon as they are rationally able.  Truth based on evidence rather than authority is taught as early as possible, and scientific skepticism is practiced as early as possible.  Teachers will need to be trained to expect everything they teach to be questioned by their students.  History of science will be taught as a parade of great ideas, not a parade of great people.

Next to the classrooms, the most important part of higher education will be scientific research.  It will be up to teachers to develop a science of education, if that is possible.  It will be necessary to develop a robust ethics for science and engineering, presumably based upon the Golden Rule and a dedication to protect and advance the integrity of science itself.  Done right, these precautions will assure that areas such as artificial intelligence, genetic engineering, and robotics will not run amuck with dire consequences for our species.

Most of all, science must be remembered as a non-veridical enterprise of our imaginations, just as theology is.  All areas of study, including science, must function in such a way as to develop the imaginations of all people of all ages; all curricula and all teachers who teach young minds need to stimulate the imaginations of those young minds; those that do not should be rewritten or asked to find another job, respectively.  And, it almost goes without saying, science needs to be AVAPS; the star-stuff we are must keep focused upon the star-stuff we are not.

 

In summation, then, an imagined order of everything or a global imagined order for all mankind should include:

a) A specific, non-religious ethic of the Golden Rule, or the Principle of Reciprocity; “Do unto others as you would have them do unto you.”

b) A UN-like world government wherein all nations function like States of the United States in a federal government.  This world government has the three branches of the legislative, the executive, and the judicial.  Members of these branches are democratically elected by a worldwide body of voters wherein suffrage is distributed as widely as possible.  It will have jurisdiction over a single, global military force to keep worldwide peace and respond to emergencies everywhere.

c) A planet-wide economic system of regulated capitalism engaged in worldwide free trade within a single universal market.

d) An environmentally conscious planet-preserving agriculture utilizing the best potentials of bio-technology.

e) A worldwide educational system offering a free universal education and funded by a world education fund governed by the world government, offering a globally coordinated curriculum.

and f) A commitment to progress indicated by an imaginative, respectful, and ethical worldwide scientific endeavor.

 

RJH

 

 

 

AVAPS!

Describing human existence in terms of Perception Theory (Perception Is Everything, [Jan., 2016]; Perception Theory (Perception Is Everything) — Three Applications, [Feb., 2016]; and Perception Theory: Adventures in Ontology — Rock, Dog, Freedom, & God, [March, 2016]) demonstrates that all perceptions, concepts, ideas, and theories are necessarily non-veridical, due to the subjective trap. Logically, it is possible no product of the human subjective mind is faithful to the veridical, “real world,” empirical data bombarding the human senses; it is discomforting, to say the least, to consider the possibility that our collective perceptions over time (our existence) have little or no bearing upon an objective, external-to-our-mind universe “out there” which we presume and assume to exist.  (An in-depth review of the veridical/non-veridical dichotomy can be found in At Last, a Probable Jesus, [August, 2015].)  The source of this discomfort is the ontological baseline assumption that we are star-stuff in self-contemplation (Perception Is Everything, [Jan., 2016]), for the more disengaged our non-veridical perceptions are from the raw data upon which they are based, the more removed we are from our physical building blocks (star-stuff), from the very veridical universe of which we are a constituent part.  In other words, the less we are “at one” with the universe — the less we see ourselves objectively, as we really are.

I am happy to say that Perception Theory uses the cultural history of our species to avoid the non-veridical discomfort and possible despair of disengagement from the veridical universe, thanks to the help of two recent works by historian Yuval Noah Harari, namely Sapiens, A Brief History of Humankind (2011, ISBN 9780099590088, Vintage, London) and Homo Deus, A Brief History of Tomorrow (2015, ISBN 9781784703936, Vintage, London).

It was encouraging to read that Harari, though doing history, was using a human ontology parallel to that of Perception Theory, especially since I had formulated Perception Theory before I had read Harari.  That is, he understands the creative power of the human imagination, the existential, subjective part of human existence, as well as the power that same creativity can have on the empirical, objective “real” world of science.  He does not deny the benefits to humanity given us by non-veridical scientific theories, but he understands they cannot be purely objective, cannot themselves be veridical.  What I call the non-veridical perceptions, concepts, ideas, and theories of our imagination Harari calls “imagined orders,” a term I will use henceforth for brevity.  One class of imagined orders we call “religions” (Perception Theory: Adventures in Ontology — Rock, Dog, Freedom, & God, [March, 2016]; I Believe!, [Oct., 2016]; Hope and Faith, [Jan., 2017]; and Prayer, [Feb., 2017]), with their “toxic theologies.”  Another class would be the imagined orders we call “scientific theories,” with their history of human-altering and planet-altering effects.  Clearly, essentially any well-developed concept in our minds could be called an imagined order: political theories, economic theories, and ethical theories, for example.

Cultural history has given our species a direction, a clue for our imagined orders.  Ask what imagined orders have vaulted us into the world-wide collective of the “modern world,” in which we are beneficiaries of healthier, longer lifespans, of being relatively free, for the first time, of large-scale famine, plague, and war, and of emerging with a consensus idea of the best political, economic, and ethical directions to pursue.  The answers have come in the survival of the political, economic, and ethical catastrophes of the 20th century: combinations of imagined orders of applied scientific theories, of non-theological, humanistic religions, of humanistic ethics that reach beyond our own species to other species, of practical capitalistic economies mitigated by tried and true socialist programs like universal health care, universal suffrage, universal free public education, and care for the elderly, of political cooperatives based upon world trade and free from nationalism and delusions of empire, and of educational imagined orders based upon how minds actually learn.  In other words, history has witnessed our blindly “stumbling” upon imagined orders that “work” for all of us because those orders take into account the natural more than the imagined supernatural, fanciful, or ill-conceived; we now know more imagined orders that “fit” the universe of which we are a part than ever before.

Consequently, I, as a scientist, am, perhaps, more optimistic about our future than the historian Harari.  It seems straightforward to me that our cultural history mandates we in future make our imagined orders as veridical as we possibly can, like the great scientific theories of gravity, quantum mechanics, the kinetic theory of matter, chemistry, biochemistry, evolution, and plate tectonics.  That is, make all imagined orders, scientific or not, as veridical as possible; in the vein of KISS! (“keep it simple, stupid!”): AVAPS! (“as veridical as possible!”).

 

Here in the 21st century, we know what past mistakes to avoid repeating.  Don’t make the liberal mistake of making equality more important than freedom and brotherhood.  Don’t make the conservative mistakes of believing that wealth cannot be created, that all economies are zero-sum, that some must gain at the expense of others.  Don’t confuse education with indoctrination.  Don’t base truth upon authority.  Don’t create religions and political theories of intolerance; there are better imagined orders than religions, or nations, or empires.  There are better imagined orders than liberal humanism, social humanism, and scientific humanism (Homo Deus).

With the help of Harari, it is now possible to apply Perception Theory toward better imagined orders for all of us.

RJH

 

P.S.  Thinking toward better imagined orders need not be more complicated than asking the question “What the world needs now is __.”  The Beatles have suggested all you need is love, and that means the human capacity to love ourselves, others, and everything outside ourselves needs to be borne in mind.  Dionne Warwick sang it more directly with her 1966 hit “What the World Needs Now Is Love.”  What do all of us love?  Lots of things, sure, but the things that have grown to benefit everyone, like the inspiring contributions of the humanities and the world-altering improvements the sciences, engineering, and medicine have made to our living on this planet, seem to beckon our ardor.  Let me plant this seed: what the world needs now is more scientists, engineers, doctors, environmentalists, social workers, care takers, historians, and philosophers.

Not long ago I attended a Southern Baptist church in which my wife grew up and in which we were married.  At the end of the service a 10th grade girl made public her decision to become a missionary instead of her previous dream of becoming a marine biologist.  My heart sank.  “The world needs more marine biologists, not more missionaries!” I said to myself.

 

RJH

Prayer

Perception Theory (Perception Is Everything, [Jan., 2016]), after being applied to, among other things (Perception Theory (Perception is Everything) — Three Applications, [Feb., 2016]), the existence of God (Perception Theory:  Adventures in Ontology — Rock, Dog, Freedom, & God, [March, 2016]), was taken to the subjects of belief (I Believe!, [Oct., 2016]), hope, and faith (Hope and Faith, [Jan., 2017]). Could prayer be far behind? Of course not.

Rev. Paul M. Burns, son of my good friends Dr. Jim Burns (Ph.D in physics and retired Presbyterian minister) and Judy Burns (award-winning retired public school teacher), has written the book prayer encounters (ISBN 978-1-4497-5194-4, WestBow Press, 2012), whose subtitle is “Changing the World One Prayer at a Time.”  The importance of prayer in the life of so many believers seems obvious; a prayer life is vital to an individual’s faith.  Prayer not only is found in some form in most major religions and in our common exchanges of concern (“I’ll pray for you,” “Pray that will not happen,” “We need to pray together as God’s people,” “I pray, God, You will lead me to understand,” “I pray You will lead me to someone who..”, etc.), it is, as the book’s subtitle suggests, a teleological tool in nature — one means to change the world.  I have prayed with a congregation, led small groups in prayer, said grace at the dinner table, and had a secret place near where my grandparents lived where I regularly prayed in private.  I have encouraged others to pray for my son Dan when he was hospitalized for a closed head injury years ago, an injury responsible for, I think, PTSD effects in his brain today (We All Can Have PTSD, [Jan., 2017]).  Prayer is something with which I am not unfamiliar.

Paul’s book is a series of individual cases in his ministry where prayer was applied toward making someone’s life better, as would be expected.  The days when we pray for our enemies and adversaries to be “smitten” are, I trust, few, far between, or non-existent.  Each case in the book is engaging, heart-breaking, heart-warming, and inspiring; the book is a good read.  What struck me was that the prayers were not always answered, but in all cases the answer or non-answer is seen, in hindsight, as understandable by faith.  Those emotionally involved in a case praise God if the prayer is answered and explain no answer to the prayer by referring to God’s will.  The spectrum of prayer results in the book triggered my own recollection of personal prayer results — results of praying to which I referred in the previous paragraph.  Ambiguous and sometimes inconsistent outcomes of prayer had triggered my curiosity for years, but I never focused on the question of prayer ontologically until now.  So I ask, what is prayer?  What are we doing in our heads when we pray?  It seems to me Perception Theory can be of help.

I will try to avoid two extremes concerning prayer.  On one hand, prayer is skeptically and/or atheistically dismissed as nonsense, and on the other hand, prayer is communication with God, with gods, or with saints as if you are talking to a deity or a holy one across the breakfast table.  Neither extreme makes sense to me.

From the introduction of Perception Theory to its application to faith (Perception Is Everything, [Jan., 2016]; Perception Theory (Perception is Everything) — Three Applications, [Feb., 2016]; Perception Theory:  Adventures in Ontology — Rock, Dog, Freedom, & God, [March, 2016]; I Believe!, [Oct., 2016]; and Hope and Faith, [Jan., 2017]), prayer can be inferred as a non-veridical activity relegated or looped inside the mind.  It might have reference to parts of the real, veridical world outside our heads, but, like all ideas and concepts, these references are not the actual real world; rather, they are processed perceptions of empirical data from that world, created by the blending of the data with the non-veridical, proactive mental processes of the mind confined to the brain.  In the end, prayer is part of the epiphenomenal menagerie of creations of our evolved, “big” brain.  Since the existence of God in Perception Theory strongly suggests God is like an “imaginary friend,” prayer might be as simple as talking to the imaginary friend we carry around in our head as the concept of God.  We confide in real friends out in the veridical world around us as well as idly chit-chat with them; so it is with children who create imaginary friends in their heads.  Communicating with real friends can not only be fun and helpful, it can be downright therapeutic.  Prayer, communicating with our concept of God (or of gods or of saints) in our heads, can also be fun and helpful, but since prayer is seen as “serious” business, prayer is usually therapeutic.  Hence, along with our capacity to make up gods and god stories, to be religious, came the capacity to make those gods our imaginary, surrogate friends to whom we take our thoughts, mental conflicts, and struggles with the veridical world outside our heads for a “help session.”  We take our burdens, our wishes, our hopes, and our need for answers to the “feet of the Lord,” to the “listener” inside our head, our imaginary friend.  Prayer, therefore, functions as a self-induced psycho-therapy with a modus operandi of confiding in our imaginary God in our head.  As the old Christian hymn to prayer says, “What a friend we have in Jesus, All our sins and grief to bear; What a privilege to carry everything to God in prayer.”

Prayer has survived as a coping tool in our heads, a part of the evolved epiphenomenal “baggage” around the concepts of friends, gods, and god stories.  Its survival value is proportional to the importance for the species of individual self-introspection and self-analysis (self-induced psycho-therapy) within our heads.

In Julian Jaynes’s book The Origin of Consciousness in the Breakdown of the Bicameral Mind (ISBN 0-395-20729-0, Houghton Mifflin, 1976) a fascinating hypothesis was put forth:  Before about 500 BCE, we had evolved a brain with two copies, the left and the right hemispheres, which could communicate (or “talk”) with each other; we had a spare brain, in other words, in case something went wrong (brain damage) with one of them.  This “talk” between hemispheres was like the gods within us — the origin of gods, god stories, theology, and religion; the gods talked to us all the time.  Around 500 BCE human culture had become so complicated and demanding that division of labor had to be relegated out to the different hemispheres of the brain, ceasing the talk of the brain to itself; the gods stopped talking to us in our heads, explaining why so many great religions in which we had to find the gods’ voices outside us (or try to re-find them inside us) arose around this time — Buddhism, Zoroastrianism, Taoism, the great Prophets of Israel, and Confucianism.  I am not saying I subscribe to this hypothesis, but its similarity with the idea of “talking with God” when we pray seems to me very compelling.  Prayer is, like the gods, very ancient — an epiphenomenal, non-veridical means by which we furnish ourselves with “bootstrap” sessions of psycho-therapy, or an evolved tool to keep ourselves sane, perhaps because, as Jaynes suggests, the gods stopped talking to us long ago.

Like religious belief, hope, and faith, prayer is confined within the individual’s “subjective trap” (Perception Is Everything, [Jan., 2016]).  Praying together assumes others’ minds are like our own, something we can never know with any degree of certainty.  Making that assumption, our group prayer sessions (at least two persons) are like mutually agreed-upon group psycho-therapy.  It is understandable how a group prayer can be answered differently in the minds of the group, given the differences of expectations among the individuals’ heads within the group.  If a group explodes in agreement that the prayer is answered, such as when my son came out of his week’s coma that followed his closed head injury, it can be assumed the expectations, hopes, and supplications across the minds in the group during the prayer were very similar, though, of course, that can never be objectively demonstrated (at least not yet).  In this sense, prayer for something explicit to occur is like making a bet, like predicting the future, whether as an individual or as a prayer group.

Let's say a drought-stricken individual or group prays to God (or to a saint) for rain.  In the "old days" sacrifices of the fruits of the harvest, of animals, or of humans would be offered to induce the deity or deities to answer the prayer for rain.  Today, we've pretty much gotten past those requirements, to the "relief" of our fellow plants and animals, I'm sure.  The self-induced psycho-therapy model of prayer predicted by Perception Theory would say the prayer for rain serves as a self-induced assurance not to worry so much about the drought, as religious belief and hope transform into faith that the prayer will be answered.  That assurance is not nothing to the supplicant, though any effect of the prayer out in the real veridical world cannot be demonstrated; the assurance is the value and justification of prayer; without it we would worry ourselves silly asking questions for which we cannot possibly find an answer.  Whether it rains or not is really incidental, simply a matter of chance involving local meteorological conditions, conditions presumed to "play out" whether rain is prayed for or not.  The epistemological/ontological mistake of the supplicant or supplicants is to attribute rain or no rain, the outcome of the prayer, to the god or gods inside the brain(s) of the supplicant(s).  The non-veridical concepts of the human mind had nothing to do with what transpired in the veridical world, except to be the non-veridical processed perceptions produced partly by empirical data bombarding the body's senses for each individual.  It was going to rain or not rain, prayed for or not.  Yet, the religious believer says rain was the answer to prayer, or says no rain is the "will of God" beyond human understanding (or due to some flaw in the prayer and/or in the "hearts" of the supplicant(s)).  No wonder many thinkers are of the opinion religious belief is like a mental illness!  I say that prayer is its own reward, providing therapeutic assurance and lowering stress, regardless of whether the prayer is "answered" or not.  Seen this way, prayer is neither the hollow nothingness of the atheist, nor is it communication with anything outside the heads of the believer.  It is something in between.

To suggest, as Paul Burn’s book does, that prayer changes things is, therefore, correct in one sense, in my opinion.  It can bring on therapeutic healing inside the mind(s) of the supplicant(s).  My experience is that when I pray, I feel better afterwards.  And though I cannot ever know for sure what is inside the heads of my fellow supplicants because of the subjective trap, the behavior of my fellow supplicants after prayer is consistent with their feeling better also.  In other words, prayer can create “good vibes” in the social collective minds of the supplicants, as it did when family and friends near and far prayed for the recovery of my son.  No wonder back in 1986 when my friend Rev. John Armstrong of Canada asked if I would welcome a Muslim friend to join in the widening circle of prayers for Dan, I said something like, “Absolutely!”  I wanted Christians, Muslims, Jews, Hindus, Buddhists — anyone of any faith — to join in praying for Dan.  I know two things about the outcome of Dan’s ordeal in 1986:  a) the more prayer, the better all of us felt, and b) in the end, Dan made a full physical recovery.

We now know (National Geographic, Dec. 2016) that having faith that healing will come (often fueled by prayer) can trigger the "natural pharmaceutical shelf" in our bodies toward healing with the biochemistry we all inherently have, even when only placebos, not real medicines, are employed.  This could be the key to understanding how the tendency to become religious, along with its attendant prayer, had evolutionary survival value in our deep past.  It therefore is possible that the non-veridical healing inside the minds of prayer supplicants, if the "good prayer vibes" resonate in the minds of those deemed in need of prayer, has a veridical, real-world link (part of medicine is "bed-side manner").  Perhaps prayer can in this way positively change things outside our heads as well as inside, at least to the boundary between our body and the world surrounding it.  Ironically, however, credit for the healing is usually given to the god(s) in our heads thought to be outside our heads, not to the non-veridical tool of prayer in our heads correlating with our biochemistry, or to the attendant physicians plying their skill with modern medicine.

If what is prayed for has to do with something outside the body in the veridical world, like the rain example above, triggering natural pharmaceuticals is obviously not directly germane to whether or not the prayer is answered (e.g. rain or no rain).  But these biochemicals, like endorphins, could be germane to the therapeutic lowering of stress in the supplicants' brains; they could be connected to the "real" reward of prayer (self-induced psycho-therapy), which has nothing to do with the prayer's outcome.

In summary, then, prayer is not nothing, according to Perception Theory.  But neither is it contact with anything outside our bodies; ultimately, it is contact with ourselves within the subjective trap.  There is a strong case that it has value to our well-being; psycho-therapy is as important as physical therapy (the two possibly linked by our own body's physiology), as we've known since the days of Freud.  Perception Theory would say that the psycho-therapy of prayer traces this importance back to the dawn of our species.

RJH

 

Hope and Faith

I remember singing in Sunday School, "Have faith, hope, and charity, That's the way to live successfully, How do I know? The Bible tells me so!"  I assume the song's words are taken directly from Paul's epistle to the Corinthians (I Corinthians 13:13, KJV).  The three words faith, hope, and charity are called the "three theological virtues" or just the "three virtues."  Having sorted out what Perception Theory tells us about "belief" (I Believe!, [Oct., 2016]), I will now consider two of the three, faith and hope, or, in the order I take them up here, hope and faith.  Both are related to belief, and though both are "separate virtues," the pair, I intend to show, are very similar in Perception Theory, yet very distinguishable from one another.  (Perception is Everything, [Jan., 2016]; Perception Theory (Perception is Everything) — Three Applications, [Feb., 2016]; Perception Theory: Adventures in Ontology — Rock, Dog, Freedom, & God, [March, 2016])  Indeed, they are paired conceptually in Hebrews 11:1:  "Now faith is the substance of things hoped for, the evidence of things not seen." (KJV)

Despite my skepticism that Paul should even be called an apostle, much less an accurate describer of Jesus (Sorting Out the Apostle Paul, [April, 2012]), and despite the consensus that Paul did not write Hebrews (Priscilla, Barnabas, Luke, Clement of Rome, and Apollos of Alexandria have been proposed as more likely authors of Hebrews than Paul.), the presence of the same two words (hope and faith) together in both KJV verses provides a convenient "cutting board" upon which to dissect the two with Perception Theory.  In I Believe!, [Oct., 2016], belief is shown to be far from having anything to do with evidence, yet the Hebrews verse links "substance" and "evidence" with faith.

Hence, if this linkage is accurate, faith has more to do with evidence than belief does.  In fact, starting from an absence of evidence, starting from belief, and heading in the direction of evidence, I see hope first, followed by faith, with evidence ("I know" statements — I Believe!, [Oct., 2016]) coming only after faith.  "I believe" statements and "I know" statements, with hope and faith "sandwiched" in between, are all four non-veridical activities of the brain, with "I believe" statements devoid of resonance with the "outside," real, veridical world beyond the volume of our brains and "I know" statements as resonant with the real, veridical world as they possibly can be (as allowed by the "subjective trap").  This would suggest that both hope and faith exist as resonating non-veridically based concepts, "in between" the looped non-veridically based existence of "I believe" statements and the strongly veridically-based existence of "I know" statements.  In other words, belief is looped non-veridically based, like God, while hope and faith are possibly resonating non-veridically based, like freedom (Perception Theory: Adventures in Ontology — Rock, Dog, Freedom, & God, [March, 2016]); both hope and faith at first appear to "reach out" to the veridical world in a way belief does not bother to do.

To Perception Theory, however, hope is like a "wish statement" that may or may not resonate veridically.  To hope God hears our prayer is looped non-veridically based, but to hope your sick loved one gets well is resonating non-veridically based.  Hope statements can be in either non-veridically based camp — looped or resonating.  To Perception Theory faith leans strongly toward the resonating non-veridical, like having faith that your sick loved one will actually get well, which means the loved one's health will be described with "I know" statements of wellness in the future.  If the sick one does not get well, the hope still seems justified, but the faith seems ill-placed; hope does not ever count on "I know" statements to come, but faith risks counting upon "I know" statements coming.  One's hope can never be squelched by the real veridical world (it is so looped); one's faith can (it is so resonant).  Faith, then, is like a "prediction statement," a declaration that something will in the future be supported by evidence and, therefore, by "I know" statements.  With hope I wish, and with faith I predict or bet.  Moreover, faith is embedded with a confidence in a "real world" outcome, whether justified in hindsight or not.  This confidence reinforces the resonance of faith with the veridical.
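To make the ordering of this spectrum concrete, here is a minimal sketch in Python; the class, names, and numeric ordering are merely my own illustrative labels for the progression from belief through hope and faith to "I know" statements described above.

```python
from enum import IntEnum

class StatementKind(IntEnum):
    """Ordered from no resonance with the veridical world toward maximum resonance."""
    BELIEVE = 0  # looped non-veridically based; no appeal to evidence
    HOPE = 1     # a "wish statement"; may be looped or resonating
    FAITH = 2    # a "prediction statement"; bets on future "I know" statements
    KNOW = 3     # as resonant with the veridical world as the subjective trap allows

def risks_refutation(kind: StatementKind) -> bool:
    """Hope can never be squelched by the veridical world; faith (and claims of
    knowledge) can be, because they stake something on that world."""
    return kind >= StatementKind.FAITH

assert not risks_refutation(StatementKind.HOPE)
assert risks_refutation(StatementKind.FAITH)
```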

Hebrews 11:1, therefore, is way off-base.  Faith cannot be substance or evidence of anything.  I can believe or hope in just anything (wishing); conversely, I cannot bet on just anything (predicting) and be considered sane, no matter how confident my faith.  Based upon what we know about the universe that seems to be outside our heads, hoping that unicorns exist can be seen as "cute and charming," while confidently predicting that unicorns exist will probably be seen as silly.  Stating I have faith that unicorns exist is not evidence that unicorns exist, but stating I hope unicorns exist "gets a pass" from those who demand evidence.  One is simply not taken as seriously when hoping as when bestowing faith.  Hope is more like belief than faith is; faith is more like predicting freedom in a veridical society than hope is, but with a confidence often falsely interpreted by others as connected with evidence.

An analogy might be in order:  say I am about to witness the results of a wager I've made at a casino in Las Vegas.  It's the result of a pull of the handle of a slot machine, the final resting place of the ball in a roulette wheel, a roll of the dice at the craps table, the revealing of the cards at the end of a round of poker, or the public posting of the results of a sporting event I have bet on.  Normally, I hope I win (which is not the same as saying I predict I will win), but if I don't (if I fail to win), the worst that can happen is the loss of my wager.  However, if I win, any conclusion other than realizing how lucky I am would not be warranted; I happened to beat the odds, the probability of which I knew was very low when I made the bet.  But if I have bestowed faith in winning the wager, as we have seen above, it is almost redundant to say I am betting, that is, predicting that I will win.  (Recall I can place a bet with hope, which is not a prediction.)  If I have faith that I will win, predicting that I will win, then the amount of the wager, the bet, relative to my gambling budget, is a measure of the strength of my faith.  If I fail to win, my faith will be seen as ill-placed and, in hindsight, unnecessary; confidence in my winning (in my faith) might in hindsight seem cruelly laughable.  However, if I win, my faith, along with the confidence attending it, seems (irrationally) justified.  In minds wherein suspension of rationality seems commonplace, it is tempting to think that the win might not have happened without the faith and its attendant confidence.  But the win would not have happened without the bet, and the confident faith before the results had nothing to do with the win; yet too often the faith and its confidence are seen as the "cause" of the win!  Such an irrational conclusion is nothing short of believing in magic; it is a view of the win that is all in the head of the winner, and has nothing to do with the evidence from the real world that actually determined the mechanics of the results.  Perception Theory would say that veridically the results, win or lose, were the outcome of random probability; any hope or faith put in the results is a non-veridical process inside the brain (Perception is Everything, [Jan., 2016]).
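The point that faith before the results has nothing to do with the outcome can be illustrated numerically.  Below is a minimal sketch in Python, assuming a hypothetical 5% chance of winning each wager; whether the bettor attaches "faith" to a bet is recorded only as a label and never enters the calculation, so both bettors' win rates converge on the same underlying probability.

```python
import random

WIN_PROBABILITY = 0.05  # hypothetical odds of winning any single wager

def place_bet(with_faith):
    """One wager: the outcome depends only on chance; 'with_faith' is a label."""
    return random.random() < WIN_PROBABILITY

def win_rate(trials, with_faith):
    """Fraction of winning wagers over many trials."""
    wins = sum(place_bet(with_faith) for _ in range(trials))
    return wins / trials

random.seed(0)
print("hopeful bettor's win rate :", win_rate(100_000, with_faith=False))
print("faithful bettor's win rate:", win_rate(100_000, with_faith=True))
# Both rates hover around 0.05; the non-veridical faith changes nothing veridical.
```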

Now, let's get to the "elephant in the room," the "gorilla sitting in the corner."  Believing that God exists is just like hoping God exists — neither tells one anything about God's existence, except that God is a concept in the head of the one making the belief statement or the hope statement (Perception Theory: Adventures in Ontology — Rock, Dog, Freedom, & God, [March, 2016]).  Having faith that God exists in the real veridical world bets that, or predicts that, God exists like freedom, a dog, or a rock.  Bets and predictions can fail (as in gambling), as have all bets and predictions concerning both unicorns and God, so far.  Faith in God outside our heads, as faith in unicorns outside our heads, is ill-placed — in terms found in Perception Theory: Adventures in Ontology — Rock, Dog, Freedom, & God, it is absurd.  Unlike freedom, God and unicorns do not resonate with the veridical.  I can think of at least one statement about God upon which we can all make an "I know" statement — God is a concept in our heads.  It is curiously difficult not to say we can all have faith that God is a concept in our heads.  Also, curiously, I am betting, have faith, that the concept of God, under "high resolution," is different for each and every head.  Perhaps this "God difference in every head" will one day be shown to be only a hope (an inescapable belief) or, even perhaps, another "I know" statement.

RJH

We All Can Have PTSD

PTSD (the acronym for post-traumatic stress disorder) has started expanding its applicability way beyond its military context, it seems to me.  Historically, the concept of PTSD developed from the stress of combat and other horrors of war causing damage to brain physiology, to the individual psychology of the mind, or to both.  Its symptoms, regardless of particular causes in particular cases, are a myriad of brain disorders that cause mild to chronic disruptions of normal brain function.  In World War I it was called "shell shock," and from World War II on into Vietnam it was called "combat fatigue."  I want to make the case that all of us can have shell shock and combat fatigue without experiencing a second of combat, without a speck of horror or brain damage.

My most vivid experience of PTSD in a Vietnam vet came years ago when I was working with faculty members from Waxahachie High School in preparation for a faculty party to be held at the Waxahachie National Guard Armory.  Helping us build stage sets for party performances was David Simmons, building trades instructor at the high school and a Vietnam vet.  The Waxahachie Guard was moving the last cargo truck out of the building when David, upon hearing the truck's engine, immediately had a flashback to Vietnam.  He dropped his hammer and had to be helped to sit down on the edge of the stage we were building.  For a few moments, he could not stop the imagery in his head; only when the truck had exited the building did he return to "normal."  Clearly this was purely mental PTSD, as I am not aware of his suffering a head injury during the war.

Equally clear are PTSD-like cases of closed head injuries, such as result from motorcycle accidents.  I remember visiting, along with my friend Rick Qualls, a motorcycle accident victim who was seeing blood on the fossils he was collecting; we were "experts" invited by his mother to examine the fossils and help him be a little more critical in his hopefully therapeutic hobby.  We could not, despite our efforts, convince him that his iron-compound stains were not blood or that blood does not normally leave trace fossils.  At least he was not a "vegetable," but that was little consolation to a mother whose son's closed head injury had interjected tragedy so cruelly into the family.  The son was experiencing something personally real in his head, just as David was in his head inside the armory, but his something was permanent, not temporary as in David's case.

I have come to think similarly about my older son Dan, who experienced a closed head injury in 1986, as a freshman in high school, when his bicycle collided with a van.  He is Sylvia's and my "miracle child," as he clearly recovered completely from all his physical injuries and almost completely from his brain injuries.  Years after his accident, only the stress of traumatic events like divorce revealed his inability to deal with higher cognitive functions; in the past few years he has been incapable of finding and holding a job.  Only recently have I recognized his cognitive trauma as PTSD-like, showing symptoms like paranoia, depression, mistrust, and hallucinatory reports.  But his brain recovery was so complete he now has a healthy case of denial, stubbornly refusing to recognize he is behaving abnormally.  Yet, when seen in comparison to the motorcycle accident victim, our son could have suffered mentally much worse.

Also helping me to recognize my son's form of PTSD (in my opinion) was my recent development of Perception Theory (Perception is Everything, [Jan., 2016]) and its wide spectrum of applications in our universal experiences (Perception Theory (Perception is Everything) — Three Applications, [Feb., 2016]; Perception Theory: Adventures in Ontology — Rock, Dog, Freedom, & God, [March, 2016]; and I Believe!, [October, 2016]).  Perception Theory was suggested to me while I was explaining the role hallucinations played in the origin and development of Christianity (At Last, a Probable Jesus, [August, 2015]), in which I shared my own flashback-like hallucinations.  Emerging from both projects was the realization that my own non-combat hallucinations (requiring only some kind of trauma of the mind — not necessarily bad or harmful trauma) might mean I too have a form of PTSD, and, by extrapolation, that all of us have the capability to empathize with PTSD victims, for we have experienced it ourselves but have not recognized it as such.

 

I know I can empathize with David, with the motorcycle accident victim, and with my son Dan, for I have had several PTSD flashbacks over the years.  Rather than repeating those in At Last, a Probable Jesus, [August, 2015], I thought I would share with you three others:

1)  As I've said in my memoirs and in my book SXYTMCHSC1964M4M (ISBN 978-0-692-21783-2, College Street Press, Waxahachie, TX, 2014) {see Fun Read, [August, 2014] to read how to obtain a copy}, I grew up simultaneously at three homes: one with my parents in town in Cisco, Texas, and the two rural homes of both sets of my grandparents outside Cisco.  The "home" of my maternal grandparents, the McKinneys, was completely destroyed by a tornado in May, 2015, a site that belongs to my wife and me nowadays.  For sentimental reasons I had the bulldozer and track hoe "cleaning up" the site leave a surviving iron yard gate still swinging on its hinges, so that any time I want, I can go out there, open the gate, and slam it shut.  The sound it makes when closing conjures images of the house and yard and of me going in and out the gate as a young boy.  I cannot help but see the house and yard, even though they are not there today.  The images are triggered by the slamming of the gate; it's like being one of Pavlov's dogs.  There is some possible bad trauma in this example, because of memory of the tornado, but the images are pleasant and very sentimental.  This feels to me like a PTSD-like experience of bittersweet memories and pleasant imagery, triggered by an iron-on-iron collision.  The imagery doesn't last but a few moments, but it can be re-conjured by slamming the gate again.  (This gate triggering also seems to work, at least mildly, on first cousins of mine who also spent a lot of time at the site as young children.)

2)  In the summer of 2007 I arranged a very personal and emotional moment for myself when I confided in my good friend Bill Adling (see SXYTMCHSC1964M4M) that I was about to write my life's novel at the Mirage Hotel and Casino in Las Vegas.  He was the first in whom I confided such information, and I had insisted on telling him in private, away from our wives.  The site chosen to reveal my secret to Adling was a neon display advertising the Beatles-based performances of "Love" by Cirque du Soleil at the Mirage.  The display had places at which we could sit.  It is hard to overstate how important the Beatles are and were to Adling's and my friendship — for example, the two of us, along with our fellow fast friend/high school prankster Bob Berry, claim to be the very first Beatles fans in Cisco as 1963 changed to 1964.  How appropriate a setting for me to share my secret with Adling!  Fast forward to the summer of 2016, when just my wife and I were "taking in" Las Vegas and I was wandering around the casino floor of the Mirage while my wife Sylvia was still playing video poker.  I wandered to the spot where the neon display had been 9 years earlier (it was now gone, despite the fact "Love" was still playing — we saw the show again, incidentally), but I recognized the spot by its surroundings.  And suddenly there came into my head bright neon lights, Adling's face, and exchanged words I seemed to remember from almost a decade ago!  It was very fleeting but no less vivid.  The "trauma" must have been the "stress" of keeping the secret from everyone except Adling at the time, but the feeling was exhilarating, making me momentarily almost giddy!  I now look upon this moment as a PTSD-like experience.

3)  The third of this trio is the most PTSD-like to me and, coincidentally, the most gross.  Near the McKinney house of 1) above, my Granddad McKinney, among other animals, raised and kept for selling and butchering (yes, the tornado left the rock and concrete foundation of the old slaughter house) hogs, lots of hogs.  Playing in and around the lots, sheds, and barns there as a boy, I was in a constant menagerie of not only hogs, but cattle, chickens, turkeys, and peafowl.  Fast forward to just a few years ago:  I had stopped at Brendan Odom's house (Brendan today leases much of the land my wife and I own, including the McKinney place.), which coincidentally is on the road between where my Granddad McKinney lived and where my Granddad Hastings lived, to ask him something.  Away from his house, but sort of in the extended front yard, was a covered cattle trailer, one of my dad's old ones, in which Brendan kept wild hogs he had trapped for sale to buyers with customers craving "wild pork."  (Because of the collapse of the small-scale hog market, no one today raises hogs such as my grandfather did.)  As I walked by the trailer, I noted there were no hogs in it, but that there had recently been some "residents," as my nose was bombarded by the unmistakable odor of hog shit!  And the imagery flowed in my head of hogs wallowing, hogs sleeping, hogs feeding, and hogs squealing.  I could not stop seeing them!  As David's trigger was auditory, mine in this moment was olfactory.  I had to walk away, almost to the house, to get the imagery to stop.  The trauma, as well as the trigger, was the incredibly bad odor, so the images were not particularly pleasant.

 

Perception Theory (Perception is Everything, [Jan., 2016]; Perception Theory (Perception is Everything) — Three Applications, [Feb., 2016]; Perception Theory: Adventures in Ontology — Rock, Dog, Freedom, & God, [March, 2016]; and I Believe!, [October, 2016]) suggests what is going on in our heads during PTSD experiences.  Some non-veridical trauma in our mind triggers uncontrollable perceptions upon our inner world view, momentarily or permanently blocking or suspending the non-veridical brain mechanisms by which we normally determine that what we are perceiving at the moment "must have been a dream."  The uncontrollable perceptions seem as real as the controlled perceptions we receive from the "outside world" beyond our brains.  They are suspensions of rationality, much like what we do when we fall in love.  Often they make us doubt our sanity, and often we are reluctant to share them with others for fear they will doubt our sanity.  Yet, history has shown they can cover the spectrum of individual perception from the destruction of life, through little or no effect, to the basis of starting a religion or a political movement.

PTSD-like experiences are profound epiphenomenal capabilities of our brain, part of the evolutionary "baggage" that came with our "big brain" development.  I would guess it was a trait neutral to our survival (or one "tagging along" with our vital survival trait of the ability to irrationally fall in love), and, therefore, could be a vestigial trait passed into our future by the same genes that produce our vital non-veridical existence within our brains (in our minds).  Whatever future research into them brings, I will always be fascinated by their possible triggers within an individual, whether it be combat, closed-head injuries, a sound from the past, the Fab Four, or hog shit.

RJH

I Believe!

I must count myself in that school of thought which has asserted that everyone has to believe in many things, but the “trick” is to believe in things that are true. Yet, it seems obvious to me that one can believe in anything.  And, since not just anything can be true, it must be equally obvious that mere belief is no reliable means to finding out the truth.  Curiously, the ability to believe seems basic to the human mind. In my opinion, the pervasiveness of belief among the species Homo sapiens indicates that belief was at the origin of our species necessary for survival, just like our propensity to be religious, or to be ethical, or to be evil.  The evolution of these last three propensities, based upon both physical and cultural anthropology, was a major vehicle in the development of the ideas, themes, and conclusions of 1) my series on the origin of Christianity (Sorting Out the Apostle Paul, [April, 2012]; Sorting Out Constantine I the Great and His Momma, [Feb., 2015]; Sorting Out Jesus, [July, 2015]; At Last, a Probable Jesus, [August, 2015]; Jesus — A Keeper, [Sept., 2015]) and of 2) the first of my series on Perception Theory (Perception is Everything, [Jan., 2016]; Perception Theory (Perception is Everything) — Three Applications, [Feb., 2016]; Perception Theory: Adventures in Ontology — Rock, Dog, Freedom, & God, [March, 2016]).  The discussion of human belief seems a good addition to 2) above, given the very broad applicability of the theory.

For every human mind there seems a hierarchy of importance of beliefs.  Whether or not one believes their sports team is going to win an upcoming contest seems pretty trivial compared to whether or not one believes their partner in life truly loves them; whether or not one believes they can accomplish a challenging task seems pretty trivial compared to whether or not one believes in God.  Moreover, human belief seems intimately entwined with human faith and trust.  Belief in an expected event, in the words of someone else, in the truth of ideas and/or assertions of all sorts, in anticipated future states of the world, and in the truth of past events all involve faith that the object of the belief is worthy of one’s trust.  In other words, I have faith that the resources leading me to believe in X, whatever X may be, are worthy of my trust to the extent I tell myself that X must be true; X is true to me because I have faith in the trustworthiness of believing in X.  Admittedly, this epistemological dissection of belief sounds esoteric, convoluted, and nuanced.  We do not normally think about either the hierarchy or the underlying philosophical assumptions of belief; we just believe, because we come into the world “wired” in our brain to do just that.  What I propose to do is to make thinking about belief less esoteric, convoluted, and nuanced — to make serious consideration of what it is we do when we believe more normal in day-to-day thinking.

In the context of expounding upon freedom of the press in the United States, Alexis de Tocqueville in Democracy in America (The Folio Society, London, 2002) said that a majority of US citizens reflecting upon freedom of the press "…will always stop in one of these two states:  they will believe without knowing why, or not know precisely what one must believe." (p 179)  It seems to me any area of reflection, not just freedom of the press, could have this quote applied to it, given how muddled together "thinking" and "believing" have seemingly always been in common rational mentation.  So basic is our habit of believing without intellectual meditation and discrimination that being caught between the two states quoted above becomes, seemingly all too often, inevitable.  The hierarchy of importance among beliefs, as well as consideration of the roles faith and trust play in belief, become lost in an intellectually lazy resignation to the dilemma, in my opinion.

I think we can know why we believe.  I think we can know precisely what we must believe.  Note I did not use "I believe" to start the first two sentences of this paragraph; instead, I used "I think."  So many thinking people tend to use "I believe" in sentences the same as or similar to these and thereby fall into a trap of circular reasoning; they mean "I think," but utter "I believe."  I think Perception Theory can help to sort out any nuances associated with belief and point the way to how believing in things that are true is no trick at all, but, rather, a sensible mode of using our mind.  And the first two sentences of this paragraph contain strong clues as to how to relieve "I believe…" and even "I think…" statements of ambiguity.  We simply give them reliability with the beginning words "I know…," instead of "I believe…" or "I think…"  Herein I hope to lay out the epistemological process by which statements become reliable and thereafter merit the beginning words "I know…"  At the same time I hope to show that, in the name of truth, "I believe" and "I think" should not necessarily be thrown away, but, rather, used with reticence, care, and candor.

 

I submit that the statement “I believe the sun will appear to rise in the east tomorrow morning.” is fundamentally different from the statement “I believe in the existence of God.”  Neither is irrefutable as, presumably, the speaker cannot deliver an image of a future event, nor is anything remotely resembling a deity alongside the speaker.  According to Perception Theory, any belief statement, certainly including these two, is non-veridical (At Last, a Probable Jesus, [August, 2015]; Perception is Everything, [Jan., 2016]), as a belief is a descriptive statement of some result of the mind, imagination, and other epiphenomenal processes operating within the brain.  As shown in Perception Theory: Adventures in Ontology — Rock, Dog, Freedom, & God, [March, 2016], such statements can resonate strongly, weakly, or not at all with the real or veridical world from which comes all empirical input into the brain through the senses.  The sun rising tomorrow resonates strongly or weakly with the veridical real world (depending upon how skeptical and/or cynical the speaker is), based upon previously experienced (directly or indirectly) sunrises; in terms of Perception Theory: Adventures in Ontology — Rock, Dog, Freedom, & God, [March, 2016], it is resonating non-veridically based.  God existing is, conversely, looped non-veridically based, as defined in Perception Theory: Adventures in Ontology — Rock, Dog, Freedom, & God, [March, 2016].  The second statement is purely epiphenomenal, while the first hearkens to a real empirical world; the second is a naked product of the mind, while the first links an epiphenomenal product to a presumed reality (phenomena) outside the brain.  Belief is in both cases epiphenomenal; the first is based upon empirical, veridical, phenomenal past perceptions; the second is based upon imaginative, non-veridical, epiphenomenal intra-brain biochemical activity.  In other words, sunrises are non-veridical images based upon empirical data, while God is non-veridical imagery based upon other non-veridical imagery.

At the risk of being redundant, it bears repeating that why we have the ability to believe in the two manners illustrated by the two belief statements of the previous paragraph is easily understood.  When our brains evolved the complexity making self-consciousness possible, assuring our survival as a small group of especially big-brained members of the genus Homo, applying our new ability to imagine ourselves in situations other than the present was not practically possible at all times; we still had to react instinctively in threatening situations, without pausing to think, else we might not survive.  With, say, leopards attacking our little hunter-gatherer group during the night, questioning or thinking about alternatives to proactively defending ourselves potentially would have made the situation more dangerous, not safer; in other words, often whoever hesitated by thinking about the situation got eaten.  Those who came up with or listened to a plan of defense without argument or disagreement tended to assure the success of the plan, as the group agreed to act quickly to avoid future nights of terror; or, often, acting unquestioningly directly led to successfully solving the leopard problem.  To justify individually joining the plan, we used our newly complex, self-conscious minds to suspend judgement and believe that the originators of the plan of defense, whether we ourselves, the leaders of the group, the shaman of the group, or just some unspecified member of the group, had some seemingly good idea to deal with the leopard problem; without rationalization of any sort, we believed the plan would work.  Without hesitation, we often believed out of sheer desperation; we had no choice but to believe in some plan, to believe in something, else we might die.  Hence, those who developed the ability to unthinkingly believe tended to be those who survived in the long run.

I submit that as human beings developed civilizations and culture over the last several thousand years, the need for "knee-jerk," unthinking belief has overall diminished.  Outside of modern totalitarian political, sectarian, or secular regimes, our brains can safely be used to question, scrutinize, vet, and adjudicate ideas, plans, positions, conclusions, etc. as never before.  As knowledge continues to increase, we can without desperation hesitate and "think it over;" immediate belief is not necessary any longer in most situations.  Belief continues to be an option we all use at one time or another, but on important issues we no longer have to suspend judgement and "just believe."  Don't get me wrong — spouting beliefs "right and left" on issues of little or no importance, such as what I believe will be the outcome of upcoming sporting events or of the next pull on a slot machine in Las Vegas, can be fun.  What I am saying is that we do not have to agonize over what we believe, as long as the consequences of that belief portend little or nothing at all.  What this means is that we must train ourselves to start serious, important, and substantive declarations with "I think" rather than "I believe," as I did above, which indicates some rational thought has gone into formulating those declarations.  Moreover, it implies that "I know" is even better than "I think," in that the rational thought going into "I know" statements is so substantive and evidence-based that the statement is reliable and feels close to the "truth."  It also means we can suspend belief indefinitely, if we choose, or never need think belief is necessary.

Admittedly, belief does have use in motivational rhetoric, which may not be so trivial in many different individual minds.  Often consensus of agreement for group action relies upon conjuring in individual minds the belief that the action is in the group's collective best interest.  Halftime speeches in the locker room by coaches to their teams are one example that comes to mind; such locker rooms rely upon words and signs exhorting belief; evidence and reflection need not be evoked.  This common use of belief hearkens back to our evolutionary need to believe, as discussed above, but today conjuring emotionally charged adrenaline in a group is more a matter of avoiding losing a game or falling short of a group goal than of avoiding being eaten by leopards.  The outcome of the game or the striving for the goal determines if the belief was fun and justified, or disappointing and misleading.  Neither outcome might seem trivial to many, but neither outcome would make the belief conjured "true" or "false."  Locker room belief shown justified or not justified by subsequent events is merely coincidence.

We can now list some characteristics about human belief:

1)  Belief is a non-veridical activity, existing in our minds as either a) resonant non-veridically based  or b) looped non-veridically based.

2)  Belief involves a denial, suspension, or avoidance of judgment, bypassing all forms of adjudication involved in rational scrutiny; it is lazy mentation.

3)  Belief has decreased in importance as culture and knowledge have increased in importance.

4)  Belief is bereft of epistemological value; just because one believes X is true does not necessarily make X true; just because one believes X is false does not necessarily make X false.

5)  Belief is an epiphenomenal, evolutionary vestige of the human mind; it has value today only as an amusing tool in trivial matters or as a rhetorical tool in matters many consider not so trivial.

6)  Beginning with “I think” rather than “I believe” is stronger, and can indicate a closer proximity to the truth, but “I think” does not evoke the confidence and reliability of “I know;” “I think” leaves room for reasonable doubt.

7)  Statements and issues of portent can consistently be begun with "I know" rather than "I believe" or "I think."  Just how this is possible follows:

 

Knowing why we believe, we now turn to what we should believe.  Clearly, merely believing in non-trivial matters carries little weight, and is hardly worthy of consideration in epistemological discussions.  Important ideas, plans, and systems of thought do not need belief — they need rational adjudication; we no longer need say "…we need to believe in or think upon what is true;" rather, we need to say "…I know X is true beyond reasonable doubt, independent of what I may believe or think."  So, we actually now turn to what is worthy of our thought, trusting that in the future we will say, instead of "what we should believe" or "what we should think," "what we know is true."

Let’s say I want to unequivocally state my conviction that my wife loves me.  To say “I believe my wife loves me.” belies the fact I have lived with the same woman for 48 years and counting, as of this writing.  To say “I believe” in this case sounds like we have just fallen in love (I fell in love with her when we were sophomores in high school together.).  It sounds as if there has not been time to accumulate evidence she loves me transcendent to what I believe.  The truth of the matter is beyond belief, given the 48 years.

If I say "I think my wife loves me." it can sound as if I may have some doubt and/or as if there is some evidence that I should doubt, neither of which is the case.  Clearly, in my view, to say "I believe" or "I think" my wife loves me does not do the truth of the matter justice; neither is reliable enough to accurately describe the case from my perspective.

So, it is the case that "I know my wife loves me."  How do I know that?  Evidence, evidence, evidence.  And I'm not talking about saying "I love you" to each other every day, which we do, by the way.  I am talking about evidence transcendent of words.  For 48 years we have never been apart more than a few days, and at night we sleep in the same bed.  For 48 years she has daily done so many little things for me over and beyond what she "has" to do.  She is consistently attendant, patient, gentle, caring, and comforting; she is true to her marriage vows daily.  I've joked for many years that either she loves me, or she is collecting data for writing a novel about living decades with an impossible man.  Truly, love is blind.

This example illustrates the 3-step process that has come to work for me in arriving at personally satisfying truth.  I've even personalized the steps, naming Step 1 for my younger son Chad when he was an elementary school student; Step 2 is named for my younger granddaughter Madison, Chad's daughter, when she was in the 3rd grade; Step 3 is named for my older granddaughter Gabriella, my older son Dan's daughter, when she was about 3 or 4 years old.  Hence, I call the process the Chad/Madison/Gabriella Method.  The Chad/Madison/Gabriella Method, or CMGM, bypasses "I believe" and "I think" on the way to "I know."  Transcendent of belief or speculation, CMGM allows me to arrive at the truth; I can confidently achieve reliability, conclusions I can count on; I can and have arrived at decisions, conclusions, and positions upon which I can stake not only my reputation but, if necessary, my life.

Yet, CMGM does not provide absolute truth, the corner into which so many thinkers paint themselves.  The results of CMGM are highly probable truths, worthy of ultimate risks, as indicated above, but never can my mortal mind declare 100% certainty.  There is always the finite probability the 3-step process CMGM will yield results shown to be false with unknown and/or forthcoming evidence in the future.  The foundation of CMGM is based upon the philosophical premise of the universal fallibility of human knowledge.

How do we arrive, then, at what we know is true, realizing it really has nothing to do with our careless believing or casual thinking?  What are the “nuts and bolts” of the 3-step process CMGM?

Step 1:  When my son Chad was in elementary school, he discovered he had certain teachers to whom he could direct the question "How do you know?" when information was presented to him; for some outstanding teachers he could ask that question without the teacher becoming upset or angry.  He also discovered you could not ask that of certain family members, Sunday School teachers, or other acquaintances without upsetting them.  It is a courageous question, one conjuring in me, his father, great pride.  "C," Step 1, of the method is a universal skepticism declaring literally everything questionable, including this very sentence.  From the simple to the profound, whenever any declaration is stated, ask "How do you know?"

If no evidence is given when answering the question in Step 1, it is the same as if it was not answered at all.  Answers like “Just because…,” “I just believe…,” “I just think….,” “They say that….,” or similar vacuous retorts are no answers at all.  Or, it is possible that some evidence might be cited.  If that evidence is presented as if it should be accepted and be beyond doubt and question because of the authority or reputation of the source of the evidence, that outcome would be taken to Step 2 just like no answer at all is taken to Step 2.  Therefore, after Step 1, one either has 1) no answer or a vacuous answer or 2) cited evidence for the answer.

Step 2:  When my younger granddaughter was in the 3rd grade and I was the subject of a family conversation, she, Madison, said "Papa Doc is big on knowledge." (Instead of calling me "Granddad," "Grandfather," or "Grandpa," my granddaughters call me "Papa Doc.")  In other words, gather your own evidence in response to the results of Step 1; "get your ducks in a row" or "get your shit together" or "get your facts straight."  If you received nothing in response to executing Step 1, then decide if you want to accumulate evidence for or against the original declaration.  If you don't, dismiss or disregard the reliability of those who made the original declaration; "reset" for the next declaration.  If you decide to accumulate evidence, it is just as if you had received evidence cited in support of the original declaration.  Evidence given in Step 1 needs a search for other relevant evidence, and, if you decide to respond to no evidence given in Step 1, the same search is needed.  The ability and quality of getting your "ducks/shit/facts" in a row/together/straight is directly proportional to your education (formal or not) and to the amount of personal experience you have.  "M," Step 2, of the method is identifying reliable information as evidence for or against the declaration in Step 1; it requires not so much courage as it does effort.  Intellectually lazy persons seldom venture as far as Step 2; it requires work, time, and personal research skills whose quantity, price, and outcome are often unknown, so some courage in the form of confidence is needed to accomplish Step 2.  It is the personal challenge of every successful scholar on any level, from pre-K through days on Medicare.  On some questions, such as "Should women be given equal rights with men?" or "Who were the United States' founding fathers?", it takes but moments for me to identify the reliable information, given my long experience reading US history.  On other questions, such as "How did Christianity originate?" or "Why did the American and French Revolutions proceed on such different paths when both were based upon similar ideals?", it has taken me years of off-and-on reading to identify the reliable information allowing me, in my own estimation, to proceed to Step 3.

Step 3:  Way before she started school, my older granddaughter Gabriella, listening carefully to family plans casually mentioned for the next day, voluntarily said, “Actually,…..” such-and-such is going to happen.  And, she was correct, despite her extreme inexperience.  “G,” Step 3, is boldly and confidently stating the results indicated by the evidence from Step 2 applied to the original declaration in Step 1.  If the original declaration in C, Step 1, is “X,” and if the evidence from M in Step 2 is “a,b,c,d,…..,” then Step 3 is “Actually, it is not X, but, rather Y, because of a,b,c,d,…..”  Step 3 takes both confidence and courage.  In Step 3 you are “running it up a flag pole to see who salutes it;” you are taking a chance that of those who listen, no one will agree or only a few will agree, and it is almost infinitesimal that all will agree.  Step 3 exposes you to both justified and unjustified criticism.  Step 3 “thickens your skin” and, if critical feedback to your Step 3 is justified and makes sense to you, that feedback can be used to tweak, modify, or redefine Y.  Justified critical feedback possibly can change Y so that the new version is closer to the truth than the old.

Hence, the way to reliable knowledge I'm suggesting, the way to truth, is essentially an internal, personal, mental adjudication; your head is your own judge, jury, prosecution, and defense.  CMGM is suggested as a possible "instruction list" for this adjudication; CMGM works for me, but others might well find another "formula" that works better for them.  CMGM, Steps 1, 2, & 3, conjures X and usually changes X to Y, based upon a,b,c,d,…..  Y is usually closer to the truth than X, but it is possible X "passes muster" (Step 2) relatively unchanged into Step 3.  It is not unlike how reliable knowledge is accumulated mentally in all areas of science, math, and engineering.  The advantage these three areas have over CMGM is that Y MUST be successfully tested by nature, by the real world, including the "real world" of logic in our heads, and independent investigators/testers also dealing with Y must corroborate it with the same independently derived results; some Y's from CMGM might not be as easily tested, such as "Men and women can never completely understand each other." or "A different set of universal physical laws were required to create the present set of universal physical laws." or "At least one other universe exists along with our own."
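For readers who like to see a procedure spelled out step by step, here is a minimal sketch of CMGM in Python.  The function and parameter names are my own hypothetical stand-ins, and the evidence-gathering step is supplied by the caller, since the real work of "M" is human scholarship, not a subroutine; the sketch only mirrors the flow from X through a,b,c,d,….. to Y described above.

```python
def cmgm(declaration_x, cited_evidence, gather_evidence):
    """A sketch of the Chad/Madison/Gabriella Method (CMGM).

    declaration_x   -- the original claim "X"
    cited_evidence  -- whatever was offered in answer to "How do you know?"
                       (None or an empty list counts as a vacuous answer)
    gather_evidence -- caller-supplied function standing in for the personal
                       scholarship of Step 2 ("get your facts straight")
    """
    # Step 1 ("C"): universal skepticism; demand an answer to "How do you know?"
    evidence = list(cited_evidence or [])

    # Step 2 ("M"): accumulate your own relevant evidence a, b, c, d, ...
    evidence += list(gather_evidence(declaration_x))
    if not evidence:
        # Nothing cited and nothing found: dismiss the declaration's reliability.
        return None

    # Step 3 ("G"): boldly state the adjudicated result Y ("Actually, ...").
    # Y may simply be X surviving scrutiny, or a modification of X; either way
    # it remains open to revision by justified critical feedback.
    return {"claim_y": declaration_x, "because_of": evidence}

# Example use (hypothetical evidence echoing the "my wife loves me" illustration):
result = cmgm("My wife loves me",
              cited_evidence=None,
              gather_evidence=lambda x: ["48 years together", "daily kindnesses"])
print(result)
```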

 

If I want to make a truth statement, I need to begin it with “I know.”  I need to have “I know” statements backed up with evidence accumulated by personal adjudication produced by mental steps similar to CMGM.  If reliable knowledge and/or truth are not germane to my statements, then I can use “I believe” or “I think,” depending on how close to being important to me these statements are; “I believe” and “I think” have little or no epistemological content.

How do I know X is true?  Chad-as-a-child makes me ask that very question.  I can say "I believe X is true," as a knee-jerk, off-the-top-of-my-head statement, just to add to the conversational mix; I feel no need to justify it.  Challenged to justify X, Madison-as-a-child reminds me I've got to do some scholarly work.  With some brief, cursory thought I might say "I think X is true," maybe with a piece of evidence 'a,' but neither I nor my fellow conversationalists would think such a statement has much epistemological clout worthy of truth seekers.  With Madison's work and Gabriella's courage and confidence I sooner or later can say "I know Y is true, to the best of my ability;" Gabriella-as-a-child tests my intellectual acumen; I must at some time bravely state Y publicly, regardless of the consequences.  In all probability X has morphed into Y thanks to the accumulated evidence 'a,b,c,d,…..'  Y has "epistemological meat" on its "bones."  Y has brought me closer to the truth; it is a stepping stone with which to draw even closer.

Yes, I do believe all the time in lots of things.  But I think about certain things in whose reliability I’m more confident.  However, I can know a few things in whose reliability and truth I have as much intellectual and emotional confidence as I can muster.  For me, it is better to know than to just believe or to just think.  I am drawn to what you know, not necessarily to what you believe or what you think.

RJH

 

Perception Theory: Adventures in Ontology — Rock, Dog, Freedom, & God

Development and application of perception theory (Perception is Everything, [Jan., 2016] & Perception Theory (Perception is Everything) — Three Applications, [Feb., 2016]) has opened up for me seemingly unending possibilities of understanding better almost any aspect of human knowledge and experience.  Among my favorite areas of philosophy is ontology, the philosophy of being — what is existence?, what does it mean “to be?”, etc.  Modern existentialism has sprung from ontology, now armed with human psychology, cultural anthropology, and evolutionary psychology.  Perception theory thrives upon the notion that objectivity (the veridical) and subjectivity (the non-veridical) are not “at odds,” but, rather, exist in an evolutionary symbiosis via and upon our “world-view screen of perception” within our heads (See At Last, A Probable Jesus, [August, 2015] & Perception is Everything, [Jan., 2016]).  (Another way of thinking of this screen is that it is synonymous with the German Weltanschauung.)  What this work focuses upon is the light shed upon the question “What does it mean to exist?” provided by perception theory.

For anything to exist, there must be some perception, conception, or idea of that thing on the non-veridical side of the screen — in the human mind embedded in the human brain.  I recall several years ago finding agreement with a former friend and fundamentalist Christian on this universal premise of “knowing” anything — e.g. to know God is to have certain brain activity within your mind; to know anything else is to have different brain activity within your mind.  Not having worked out perception theory at that time, I only remembered the novelty of agreement between the two of us.  I now know this novelty was but an unrecognized feeling of the compatibility of the objective and the subjective; had the symbiosis between objectivity and subjectivity been clear to me back then, our discussion would have gotten much further than it did.

The definition of existence in the first sentence of the previous paragraph must not be mistaken for an affirmation of Bishop Berkeley's ontological "proof of God" based upon "To be is to be perceived."  The good bishop declared that God must exist because He is the Universal Perceiver keeping the world in existence around us, even when we are not directly perceiving it, such as when we are asleep.  Perception theory declares, on the other hand, that existence creates perception, not the other way around.  Existence is a processed quality actively attributed by the non-veridical upon both the veridical (empirical data bombarding the senses) and the non-veridical (ideas generated or processed by the mind using veridical data, other non-veridical concepts, or both).  All things perceived as existent, either in the outside world or in our heads, must be non-veridical products, even though the genesis of all things perceived lies ultimately, but indirectly, in prior and/or present empirical data.

 

To demonstrate all this with examples, consider the existence of four non-veridical products — the idea of a rock, of a dog, of freedom, and of God.  In other words, how does perception theory describe the existence of a rock, a dog, freedom, and God?  Four ideas are chosen in anticipation of existence falling into four distinct categories.  Perhaps other ontologists using other theories would choose another number; perhaps other ontologists using my exact same perception theory would choose another number.  Moreover, the list of possible examples representing each category is virtually endless.  No doubt every single reader would come up with a completely different list than rock, dog, freedom, and God.

First, how do we know a rock exists?  Its existence is inferred by our minds from strong, direct empirical signals sent by our senses, primarily sight and touch.  If it is a relatively small rock, we can pick it up and collect even more empirical signals; we can, for instance, measure its size and we can weigh it.  A rock does not move of its own volition from within; if broken apart, and if not a geode, it seems uniformly hard and dense throughout, etc., etc.  Each rock we investigate, even if only one in our entire life, contributes to an idea of a rock that becomes a non-veridical image on our perception screen in our head, an image reinforced by subsequent direct empirical experience of any particular rock "out there," outside ourselves; typically this subsequent empirical experience could be our picking up a rock we've never seen before, or someone purposely or accidentally hitting us with a thrown rock, etc.  Finally, we know a rock exists because empirical data from other human beings having to do with rocks seem to correlate with the notion that their non-veridical perception of rocks is nearly the same as our non-veridical perception of rocks.  In fact, I have never seen a human holding a rock denying it is there.  This, despite the impossibility of our ever experiencing others' non-veridical perception, due to the subjective trap (Perception is Everything, [Jan., 2016]).  In other words, others' apparent perceptions of rocks assure me I am not "making rocks up" in my own head, or, "If I'm crazy to say rocks exist, then apparently almost everyone else must also be crazy!"  Beings like me also behave as if rocks exist.

[Here I pause to interject and define a useful "test" to aid in contrasting and comparing the four examples of existence (the first of which is the existence of a rock just discussed).  I am going to employ three sentences with blanks to fill in with each of the examples, one at a time.  The three sentences are: 1) "_______ helps me to understand the universe better."  2) "Wars over _______ are sometimes justified."  and 3) "I have a personal relationship with ________."]

Let’s “test” the existence of a rock with the three sentences:  1) “A rock helps me to understand the universe better.”  That is hard to argue against (i.e. there is little or no absurdity in 1) about a rock).  Contemplating a rock is “classic” starstuff interacting with fellow starstuff (Perception Is Everything, [Jan., 2016]).  One of my many favorite photographs of my elder granddaughter when she was a toddler is of her sitting on the patio holding a fallen leaf with both hands and staring at it intently — if that is not starstuff contemplating fellow starstuff, I don’t know what is!  Just as my granddaughter left that patio so many years ago with “leaf,” apparently, as a new non-veridical concept in her brain, my holding and staring at a rock not only reinforces my catalog of non-veridical rock concepts in my brain, it also enriches my understanding of the place of rocks in my universe, the universe I assume we all share.  So, yes, 1) about a rock seems to be clearly true.

2) “Wars over a rock are sometimes justified.”  This one seems totally absurd, as if it is a theme of a classic Monty Python skit.  There may have been a time at least a hundred thousand years ago when a group of early Homo sapiens attacked a neighboring group that had stolen the first group’s “sacred stone,” or some such, but to kill each other over a rock is today considered insanity.

3) “I have a personal relationship with a rock.”  Again, this reeks strongly of the Pythonesque, but at least no one is getting hurt, it is assumed.  One thinks of the absurd fad a few years ago of owning a “pet rock.”  Good fun, if one is not serious about it, but the ones who had the most fun were the sellers of pet rocks making deposits in their bank accounts.  Similar to the pet rock “relationship” is a person’s attachment to tools, equipment, houses, automobiles, etc.  For instance, in the building projects I have done, I’ve grown “attached” to tools such as my Dremel-brand rotary multi-tool.  But, like a pet rock, these inanimate objects can be replaced if lost, stolen, or worn out; replacements give the same attachment as the tools they replaced.  Hence, the relationship is to any tool that can do a specific job, not to a specific one — to the idea of efficient and practical rotary tools; to attach emotionally to a worn-out tool that no longer does the job is absurd.  I “loved” the old Dremel I had to replace, but as soon as the new one “fired up,” I no longer thought about the old one — I immediately “loved” the new one.  However, I often fondly think of a 1966 red Ford Mustang I used to own and later sold, but from the moment I sold it, I no longer had a personal relationship with that particular car — I had and still have a “love affair” with the idea of owning a red Ford Mustang, since I never replaced the one I sold.  3) speaks of a relationship with a particular rock, not with the idea of rocks in general.

Since the responses to 1), 2), and 3) for a rock are, respectively, “very true,” “absurd,” and “also absurd,” we can infer something about the type of existence exemplified by the existence of a rock.  I label this type of existence strongly veridically-based, as it always harkens and focuses back to the empirical, veridical source of the non-veridical concept of rocks in our heads (“rocks in our heads!” get it?……..never mind……) — namely, the universe outside our heads that we assume exists, else we would not behave the way almost all of us do; all existences conjured in the contemplation of the universe — again, anything outside our heads — are strongly veridically-based existences.  This means existing as science assumes existence to be; the existence of a rock is an example of “scientific existentialism,” a basic ontological assumption of the philosophy of science.  Strongly veridically-based existence suggests that objects like the rock exist independent of our perceiving them.  We logically infer the rock existed before anyone alive today (unless it is a man-made structure like a brick recently kilned), and, long after we are gone, long after the non-veridical perceptions, conceptions, and ideas of rocks have ceased to exist inside our heads, the rock will continue to exist.  (Even if the rock erodes considerably, we normally consider it to be the same rock; we could conceive of its deliberate or accidental destruction, such as being thrown or knocked into the magma of a volcano, but most rocks seem to survive for eons of time.)  Strongly veridically-based (rock) is the first category of existence.

 

Second, how do we know a dog exists?  Most of what is said about the existence of a rock above applies to the existence of a dog, with at least one obvious difference.  That difference is the reason I chose the idea of a dog as another existence example instead of lumping the canine with the rock.  That difference is best illustrated by an event that occurred not long ago in a favorite pub I frequent:  early one afternoon in this establishment the lady co-owner walked through holding her newest family member — a puppy that looked like a wire-haired dachshund.  We all reacted as if she were carrying a new grandchild of hers; “how cute!” and similar exclamations abounded.  The evolutionary reasons we naturally respond to puppies are not germane to the point here, but imagining how different it would have been had she walked through holding a rock is.  Had she carried a rock rather than a young dog, many would not have noticed at all; those who did notice might have dismissed the observation immediately as not noteworthy, or, finding it odd for the situation, would either have asked her about the rock or said nothing.

It seems obvious that the difference is that the dog is alive (“quickened”) like us while the rock is not.  Being alive (being “quick”) and animate portends a brain, and a brain portends some non-veridical potential such as humans have.  (Clearly, though plants are alive, the life forms I’m here describing are animals.)  So the strongly veridically-based existence of a dog (we can empirically interact with a dog just as we do the rock) is modified, tweaked, or nuanced slightly; it is a somewhat different kind of veridically-based existence.  I label this type of existence quickened & strong veridically-based.  Another ontological difference between a dog and a rock is that, as with all living beings, there is no notion of the dog’s extended prior or future existence; like humans, dogs have very limited, terminated existences compared to rocks; brains are very finite.  Quickened & strong veridically-based (dog) is the second category of existence.

1) “A dog helps me to understand the universe better.”  Again, for the same reasons as those of 1) for a rock, this seems very, very true.  Perhaps human understanding of the universe is furthered more by the dog than by the rock because we are physically more closely related to dogs than to rocks; a dog’s starstuff strongly reminds us of our own starstuff — both of us are mammals, etc.

2) “Wars over a dog are sometimes justified.”  Once more, unless we are talking about an imagined early, early time of Homo sapiens, this statement cannot be considered meaningful in our modern, civilized times.  Once again for 2), absurd.

So far, the three-statement test’s responses for the dog are just like the rock’s.  But a difference appears in 3):

3) “I have a personal relationship with a dog.”  Even if one has never owned a dog, one surely has observed dog owners and knows this statement has to be very true, and not absurd. We now know that just like perception theory describes a symbiotic relationship between objectivity and subjectivity, human cultural evolution now describes the symbiotic relationship between humans and their domesticated animals, especially dogs.  (Cat lovers undoubtedly would have chosen a cat instead of a dog in this work.  I have just as undoubtedly exposed myself as a dog lover.)

Summing up, 1), 2), and 3) for dog responses are, respectively, “very true,” “absurd,” and “true.”  This shows that the difference between strongly veridically-based existence and quickened & strong veridically-based existence is simply the difference between “alive” and “not alive.”  Strong veridically-based existence of these two slightly different types is firmly planted in empirical data focused upon by perception; the rock and the dog exist scientifically, or, as we say, “The rock and the dog exist.”  Anyone who seriously disagrees with this statement is a hopeless solipsist doomed to self-exile from the rest of mankind.  Also, most of mankind would find the dog more interesting and emotionally satisfying than the rock for obvious reasons; we ontologically have more in common with a dog than with a rock.  We naturally quicken the dog, not the rock.

Before we continue, keep in mind these two slightly different forms of existence, though veridically-based via being scientifically objective, have to be generated as all human knowledge — subjectively and non-veridically generated within our brains and attributed to the perceptions from our senses we label as “rock” and “dog.”  We are convinced non-veridically that rocks and dogs exist veridically.

 

Third, how do we know freedom exists?  There is nothing “out there” outside our brains that we can see, touch, smell, etc., and label “freedom.”  There are plenty of symbols of freedom “out there” that fire our senses, to be sure, but we would never hang a giant “FREEDOM” sign around the neck of, say, the Statue of Liberty in the harbor of New York City and declare Lady Liberty equivalent to freedom; a symbol of freedom stands in for the idea, concept, or perception of freedom, reminding us what freedom is.  Freedom, then, is not only non-veridical in origin, like all knowledge and perception (and therefore a product of our imaginative, creative, and calculative capacities inside our brains), it never corresponds one-to-one to something “out there” outside our brains existing strongly veridically-based or quickened & strong veridically-based (existing like a rock or dog).  Yet most astute observers think of freedom as a quality and/or constituent of the “real” world of the veridical.  Freedom, then, has to be linked to the veridical universe outside our brains, but not as directly as the idea of a rock or of a dog.

Perception theory suggests freedom resonates with the veridical universe outside our heads (a universe assumed, as science assumes, to exist independent of our perception) through not only objects designated as symbols of freedom (e.g. Statue of Liberty) but through observable actions and language (citizens deciding for themselves, and political speeches and books waxing long and eloquently about freedom — the latter of which are more symbols).  In other words, we say non-veridical freedom exists indirectly in the veridical real world by resonating with objects and actions that would not logically exist without the non-veridical concept of freedom in our heads, much like unseen moving air molecules cause seen leaves on a tree to move.  Remove the wind, and the leaves don’t “move in the breeze;” if freedom did not exist, we would not see different people respond differently, as if by “free choice,” to the same situation, and we would not have Thomas Jefferson’s words in the U.S. Declaration of Independence.  Freedom, then, exists as a resonating non-veridically based existence.  Resonating non-veridically based existence (freedom) is the third category of existence.

The example of freedom suggests all political, economic, artistic, and ethical theories are resonating non-veridically based.  The same goes for all scientific and mathematical theory; numbers are non-veridical constructs in our heads that resonate strongly (I know of no stronger example) with the veridical “real” world; mathematics is the “language of the universe;” the universe appears to us to behave mathematically, thanks to this strong resonance.  As with anything non-veridically based, we make these theories up in our heads, but they are distinguished from strictly fanciful ideas by our ability to appeal to the real world of the universe and the human culture inside the universe (cite evidence, in other words) and point to objects and/or social behaviors that correlate logically with the theories in our heads, all leading to a necessary consensus in a majority of heads around us.  Without the consensus of others, resonating non-veridically based ideas remain eccentric musings, speculations, or hypotheses.  If the resonating idea did not exist, there would be no consensus evidence to cite.  The vehicle of this resonance of the non-veridical with the veridical might very well be Richard Dawkins’s “memes,” or bits of human culture that spread throughout humanity like genes or viruses or bacteria.

[We can now illustrate literally the three categories of existence so far listed.  Look at Figure 2 — A Model of the Subjectivity of Perception (The “Screen”) in Perception is Everything, [Jan., 2016].  Rocks and dogs (processed, veridical, and empirical screen results) would be drawn in the figure in a solid font, while freedom (a subjective, non-veridical, and algorithmic screen result) would be written in the figure as the word “freedom” in a “dashed font,” if I could do such using Word.  Everything on the screen is non-veridical in origin (“made up” in our heads), but the “solids” are direct products of our senses in contact with the “real world,” and the “dashed” are indirectly but firmly connected to the “real world” (idea of a horse) or not connected at all to the “real world” (idea of a unicorn).  Again, in the world of Figure 2, rocks and dogs are solid, and freedom is dashed.]

Back to our ontological “adventure,” how do freedom’s 1), 2), and 3) read?

1) “Freedom helps me understand the universe better.”  There has to be agreement to this statement, even in disagreeing minds; leaders of democracies see freedom as something to be provided for the people and despots of all ilks see freedom as something to be denied the people.  The non-veridical concept of freedom is very useful and motivating in the real, veridical world.

Speaking of the really veridical, 2) “Wars over freedom are sometimes justified.”  So much of history screams for agreement to this 2) sentence.  No need to elaborate upon how much blood has been sacrificed in wars in which somebody’s freedom was at stake.

3) “I have a personal relationship with freedom.”  Plausibly, there would be a lot of agreement here too, even in disagreeing minds.  Citizens have a positive relationship with freedom, while despots have a negative one.

Interestingly, freedom’s three responses to 1), 2), and 3) are three resounding “true’s.”  a) Could it be that a general characteristic of resonating non-veridically based existence is the absence of “absurd” from the answers to the three questions?  (Same for other ideas like freedom?) b) Is the absence of “absurd” in the answers always characteristic of any kind of non-veridically based existence, not just the resonant kind?  Take the resonant non-veridical case of “love;” I suspect that “absurd” would probably be the logical response to 2) in the case of love (all types, including eros, philos, and agape).  Imagine the insanity of making war on a group because they refused to love your group, or, conversely, because you refused to love them!  Therefore, the answer to question a) of this paragraph is clearly “no.”  When it comes to scientific, resonating non-veridical ideas, the answer to a) is also “no,” as fighting wars over a scientific theory (whose existence is definitely resonating non-veridically based) is as absurd as the craziest Python skit. [Imagine testing somebody’s new theory in quantum mechanics by rival, skeptical departments of physics of major universities attacking the claimant’s department instead of “hashing it out” at a conference presentation of lab data.]  Probably it is just coincidence, then, that freedom’s responses are three “true’s.”  Perhaps the proper conclusion to draw on this matter is that responses for the resonating non-veridically based (freedom) are more varied than the responses for the strongly veridically-based (rock) and the quickened & strong veridically-based (dog).  Getting ahead of ourselves, the idea of a unicorn mentioned above is clearly non-veridical and looks suspiciously non-resonating.  Answers to 1), 2), and 3) for a unicorn must contain at least one “absurd,” if not two or three, so “no” also must be the response to b).  For all possible resonant non-veridically based existences, responses 1), 2), and 3) should be “True,” “True/Absurd,” and “True,” respectively.

 

Fourth, we come to the question of God.  I use the generic “God” to include all monotheistic and polytheistic views, in order to address the views of theists, agnostics, and atheists.  If God is used in the context of a specific religion or religious philosophy, I will naturally use the God of the Judeo-Christian tradition, as this is the religious culture in which I have lived.  However, my tack in this ontological “trek” is to come up with conceptions as widely applicable as possible, so that I could just as well use “deity” instead of “God.”  So, how do we know God exists?

God exists, like the rock, dog, and freedom, as a non-veridical construct of our brain.  God is different from the other three in that God not only is not empirically verified in the “real” world outside our heads, God cannot “escape” our heads via resonance. (Symbols, words, and actions purportedly representing God’s presence can be sensed all around, but like symbols and actions for freedom, they are NOT God — if they become God to certain worshipers they are NOT ontologically God; they are idols and/or icons or rituals.)  That is, the concept of God is so epiphenomenal (a secondary, coincidental, and unintentional by-product of brain activity), there is no world-wide consistency and agreement among these symbols, words, and actions, as there is for freedom, love, or ethical behavior. The non-veridical creation of God does NOT resonate with the universe, because God is like an ultimate non-veridical heat sink or dumping ground in our minds for as much definition, blame, credit, love, mystery, origin, power, thought, etc. as we can bestow.  No resonant non-veridical existence, like the idea of freedom, is like that; resonant concepts are definitely defined and predictably correlated to specific objects and actions, not to just any and all objects and actions, as is the case for God.  God is said to be the answer for everything, which is absurd, as it says nothing.  God is said to be in everything, which again says nothing, as we have discovered something in everything (we call them elementary particles), but we do not worship elementary particles as God.  Therefore, the non-veridical existence of God does not resonate; it “bounces back” or loops back into the brain’s fanciful, imaginative, creative faculties.  God, then, exists as a looped non-veridically based existence, a concept perpetually defying definition out in the real world outside our heads.  God is epiphenomenalism run amuck.

God exists as Santa Claus, Satan, Heaven, Hell, Purgatory, ghosts, the Tooth Fairy, the Easter Bunny, and fairies exist in our brains, and in our brains only.  (It is possible that some, though perhaps not all, of the non-God listings in the previous sentence are resonant and exist as resonant non-veridically based, as will be shown below.)  Theists love and atheists despise the two words “God exists” at the beginning of the first sentence of this paragraph; atheists love and theists despise that entire sentence. I would speculate that agnostics would be uneasy that theists and atheists could “sort of” agree upon something as “important” as God existing.  I just may have angered all three groups!  I’m not sure any of the three would be happy for me to join their group.

Things that exist as looped non-veridically based entities in the human brain, like God and Arthur Conan Doyle’s English garden fairies, remind us of the “imaginary friends” so many of us imagined as children.  Having imaginary friends probably evolved as culturally advantageous for dealing psychologically with stressful loneliness, a life-long problem for such social creatures as we; hermits are not the normal examples of Homo sapiens.  The modus operandi of creating imaginary friends is related to attributing human characteristics to non-human veridical and non-veridical entities.  We call this anthropomorphism or personification of phenomena.  Personification of looped non-veridically based entities in our head is a hallmark of our epiphenomenal abilities.  Thus, Santa Claus is the personification of the very veridical altruistic behavior of giving at Christmas time; Satan is the personification of the very veridical phenomenon of human evil.  In this sense, Santa Claus and Satan very “weakly” exist, or superstitiously exist — exist as psychological “crutches” to “handle” not-so-simple observations in the real world.  Santa Claus and Satan, as superstitious personifications, enjoy in our heads the ontological label of resonant non-veridically based, as the desire to give and human evil are both very real.  But God could be seen as the superstitious personification of everything and anything, the ultimate “imaginary friend,”  or “super-friend,” if you please.  And as a looped non-veridically based entity, God could also be an “all answer” friend, the “answer” to any and all unanswerable questions.  (Recall the analogy of the ultimate heat sink — actually, functioning like an imaginary “black hole” in our head.)  It is but a short step to God being “the” answer to all we see, to being the origin and Creator of the universe, as well as our super-friend.  This is exactly what theists do; they pray to God one moment and are speechless with pious awe the next as they stare into a telescope at the clear night sky.   What a trick we do in our heads — God is not only “with us,” he/she/it is simultaneously somehow controlling the entire universe!  At one extreme God seems close to being the same as the universe (pantheism) and at the other God seems to be the perfect “person” we wish we could be (wishful narcissism).  Effortlessly swinging back and forth between these theological extremes, we don’t have to think; we only need one answer — God.

[The only way God could be added to Figure 2 in Perception Is Everything, [Jan., 2016] would be the word “God” in dashed format; there would be no world-wide consensus on any dashed object that would represent “God.”]

Thoughts applied to this “whatever and everything” looping non-veridical entity form theology, which varies and correlates with the particular culture of the brains producing the thoughts.  “Looped” is another way of saying “faith-based,” so it is easy to see that theology is a “sitting duck” destined to become toxic due to faith-based epistemology as described in Sorting Out the Apostle Paul, [April, 2012], Jesus — A Keeper, [Sept. 2015], Perception is Everything, [Jan., 2016], and Perception Theory (Perception is Everything) — Three Applications, [Feb., 2016].

Now to sentences 1), 2), and 3). 1) “God helps me understand the universe better.”  Definitely not, as “the” answer to every question is no answer at all.  There is no definition, comparison, or contrasting possible with God.  Even most theistic scientists agree here.

2) “Wars over God are sometimes justified.”  Apparently so, as the history of Europe and the Middle East (not to mention events today in the Middle East) attests.  However, this may be the response only for today’s theists.  Today’s modern atheists would definitely say “no.”  For lack of certainty, agnostics could not justify any “holy war.”

3) “I have a personal relationship with God.”  Theists say “You ‘bet-cha’!”  Atheists say “Hell, no!”  Agnostics say “Who knows?”  The looped non-veridically based existence of God placed into 3) may very well render 3) non-applicable or nonsensical.

So, for God, the three responses, given as theism/atheism/agnosticism “triads,” are “No!,” “Yes/No/No,” and “Yes/No/?”  (Or, to correlate with the other three sets of responses, “Absurd,” “True/Absurd,” and “True/Absurd.”)  An astounding assortment of ambiguity, to say the least.  Ontology shows us, then, that God does not exist like a rock or a dog; nor does God exist like freedom.  God exists only in our heads; we have made he/she/it up, and he/she/it is so purely epiphenomenal that he/she/it becoming even weakly veridical (becoming resonant) seems impossible, even oxymoronic.

 

We can construct the following table of ontological results of this “adventure” for convenience:

CATEGORY OF EXISTENCE                       EXAMPLE          1), 2), 3) RESPONSES

Strongly Veridically-based                         Rock                   True, Absurd, Absurd

Quickened & Strong Veridically-based   Dog                      True, Absurd, True

Resonating Non-Veridically based          Freedom           True, True/Absurd, True

Looped Non-Veridically based               God      Absurd, True/Absurd, True/Absurd

Clearly, there are two main divisions of categories — the first two are veridical and the last two are non-veridical.  This is to be expected from perception theory with its assumption of “balance” between the objective and the subjective.  The veridically-based categories of existence indicate learning about the universe and avoiding war, while the non-veridically based indicate no definite pattern except being ambiguous on war and personal relationship.  Correlation between the two “veridicals” is strong, and correlation between the two “non-veridicals” is non-existent, or, at best, really weak.  Reliability, not surprisingly, seems to lie with the universe outside us, not with that within our heads — with the two “veridicals” and with the non-veridical that resonates with the real world.  Nor is it surprising that if you want to know about the universe, you should direct the non-veridical toward the veridical in your head (Perception is Everything, [Jan., 2016]).  And, war is clearly a function of our heads, not of the universe.  In my opinion, war may also find more favor with theists than with atheists or agnostics (perhaps I’ve not met enough Quakers).
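
[As a purely illustrative aside, the table above and the three-sentence “test” can be put into a few lines of Python; this is a minimal sketch of my own, and the names TEST_SENTENCES and CATEGORIES are hypothetical labels, not terms of perception theory:

# Encode the four categories of existence, an example of each, and the
# responses argued for above to the three "test" sentences.
TEST_SENTENCES = [
    "{} helps me to understand the universe better.",
    "Wars over {} are sometimes justified.",
    "I have a personal relationship with {}.",
]

CATEGORIES = {
    "Strongly Veridically-based": ("a rock", ["True", "Absurd", "Absurd"]),
    "Quickened & Strong Veridically-based": ("a dog", ["True", "Absurd", "True"]),
    "Resonating Non-Veridically based": ("freedom", ["True", "True/Absurd", "True"]),
    "Looped Non-Veridically based": ("God", ["Absurd", "True/Absurd", "True/Absurd"]),
}

for category, (example, responses) in CATEGORIES.items():
    print(category)
    for sentence, response in zip(TEST_SENTENCES, responses):
        # Fill in the blank with the example and pair it with its response.
        print("  " + sentence.format(example) + " -> " + response)

Running the sketch simply prints each filled-in sentence next to the response defended in the text above; it adds nothing to the argument, but it does make the pattern of the table easy to see at a glance.]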

The astute reader of perception theory might have noticed I’ve interchangeably used, pretty much throughout, the terms “mind” and “brain,” as if they are essentially synonymous.  They can be distinguished, but for the purposes of perception theory they obviously go together.  For completeness, let me mention their distinction:  perception theory is compatible with the idea that “mind” is an epiphenomenal by-product of the physiological complexity of the brain, mostly the complexity of those “johnny-come-latelys” of the brain, the frontal lobes; the “mind” is an incidental effect of the complex brain, which originally evolved for survival of the species.  We needed to be cleverer than the animals competing for our resources and/or trying to eat us, so with the addition of animal protein from dead animals, our brains enlarged enough, on the average, to be just that — cleverer.  Human birth canals did not enlarge enough to “keep up,” so big-brained babies had to be born less mature than the babies of our primate cousins, chimps and gorillas.  This gave Homo sapiens a “long childhood,” and child rearing to physical independence became a necessary part of developing human culture, contributing to the advancement of the “nuclear family” and necessarily cooperative groups, usually of extended kinship.  The imaginations of our “new” big brains had a long time to exercise in this long childhood — so much so that, in my opinion, imaginary concepts created from veridical perceptions led to a self-concept of “that which imagines,” or, the mind.  Our brains did not evolve “intentionally” to form a mind; they just happened to be complex enough to form a mind.

The astute reader also no doubt noticed that I described the looped non-veridically based concept of God in our heads as being epiphenomenal, a clearly unintentional by-product of brain complexity — a product of our mind.  Perhaps I should have used the descriptor “epiphenomenal” throughout the presentation of perception theory with all non-veridical existence, both resonating and looped.  Our ideas and concepts exist as epiphenomenal products of our epiphenomenal mind.

As I began this “ontological adventure” of comparing the existence of a rock, a dog, freedom, and God as suggested by perception theory, I could see that the adventure had to end talking about theists, atheists, and agnostics.  Frankly, I did not at first see exactly where the adventure would leave me, a “perception theorist,” or “perceptionist,” in relation to these three groups of thinkers.  Would I come down agreeing with one of the groups, or two?  To my surprise, perception theory both agrees and disagrees with all three.  God exists all right, which makes the theists glad but the atheists furious (agnostics would not like this certainty of God’s existence), but God exists confined in our heads as, again, “epiphenomenalism run amuck” — a dashed word on the perception screen of our mind — as a Grand Answerer, or super-friend so super we don’t have to struggle with where we and the universe came from, as God is the answer to that as well; he/she/it is not only the Grand Answerer and Grand super-friend, he/she/it is also the Grand Creator.  God is all we need in one Grand Epiphenomenal Package, saving us from having to mentally struggle, think, and/or worry.  God only being in our heads infuriates the theists and delights the atheists (and again is too certain for agnostics).

Perception theory, then, in a way, makes the clashes, conflicts, debates, and ill feelings among theists, atheists, and agnostics seem rather silly.  The differences among them are interesting, but not worth fighting over.  I take my cue from Arian Foster, the NFL running back formerly with the Houston Texans and the only NFL player I know of with the courage to “come out” in favor of freethinking amidst a locker room and an overall profession teeming with theism:  Arian says it is better to have friendly, respectful dialogue about religious beliefs than to try to convert each other.  He is, in addition to being a free agent as of this writing, in my book a perfect candidate for being called a perceptionist.

 

Finally, I want to establish that despite a lot of correlations with perception theory in Richard Dawkins’ The God Delusion (2006, Houghton Mifflin Harcourt, New York, NY, ISBN 978-0-618-91824-9 (pbk.) or 0-618-91824-8 (pbk.)), I had developed perception theory before I read this book, and this book was written about a decade before my perception theory.  I am delighted at these independent correlations, as I’ve met Dr. Richard Dawkins personally and spent a few hours with him one-on-one, during which we did NOT discuss our religious positions.  I consider him a friend of casual acquaintance, but it is possible he has no recollection of meeting me.  I met him years ago as part of the cast of a BBC film featuring Richard that was part of the debunking of creationist fossilized “mantrack” claims along the Paluxy River near my home in Texas; my role was the “intrepid amateur paleontologist (with son),” among many amateur and professional scientists, who were showing evidence these claims had no scientific merit whatsoever. (See Creationism and Intelligent Design — On the Road to Extinction, [July, 2012])  I recommend all Dawkins’ books to the readers of perception theory.  The God Delusion presents the case for atheism very well for theists, atheists, and agnostics; I can only hope my presentation of the case for perception theory does something similar for all three groups.  I agree with Arian Foster: I hope in the future to have meaningful, respectful, and friendly dialogue among all three groups, during which I’d love to renew my acquaintance with Richard Dawkins and start one with Arian Foster.

[Incidentally, the BBC film done along the Paluxy River, entitled “God, Darwin, and the Dinosaurs,” was so “controversial” in the U.S., it was never aired on TV’s “NOVA” PBS scientific series.  It was, however, shown in Britain (I think) and Canada.  I got to see it only because a Canadian friend of mine mailed me a VCR videotape copy he recorded off his TV!  I can only hope that public scientific sensibilities in the U.S. are now less “medieval” than then.]

RJH

 

Perception Theory (Perception is Everything) — Three Applications

In the presentation of a theory of human existence, Perception is Everything [Jan., 2016], it was suggested the theory could be applied to almost every aspect of human experience.  The model paints the picture of the objective/subjective duality of human existence as the interactive dual flow (or flux) of real-world, empirical, and veridical data bombarding our senses and of imaginative, conceptual, and non-veridical data generated by our mind, all encased within the organ we call the brain.  The two sides of the duality need not be at odds, and both sides are necessary; the objective and the subjective are in a symbiotic relationship that has evolved out of this necessity; what and who we are simultaneously exist because of this symbiosis that dwells in the head of every human individual.  No two humans are alike because no two symbioses in two brains are alike.

This post is to briefly demonstrate how the perception model of Perception is Everything [Jan., 2016] can be used to contribute insights into I. Development of Self-Consciousness in a Human Infant, II. Education, and III. The Origin of Politics.

 

I. Development of Self-Consciousness in a Human Infant – That the human mind has the ability to develop a concept of “self,” as opposed to “others,” is commonly seen as fundamentally human.  It might not be unique to our species, however, as we cannot perceive as do individuals of other species.  Often pet owners are convinced their dog or cat behaves as if it is aware of its own individuality.  But that might be just too much anthropomorphism cast toward Rover or Garfield by the loving owners.  So fundamental is our self-consciousness, most views would assert its development must commence just after birth, and my perception theory is no exception.

The human baby is born with its “nature” genetically dealt by the parents and altered by the “nurture” of the quality of its gestation within the mother’s womb (or within the “test tube” early on or within the artificial womb of the future).  The world display screen in the head of the baby (Perception is Everything [Jan., 2016]) has to be primitive at birth, limited to whatever could bombard it veridically and non-veridically while in the womb.  (Can a baby sense empirical data? Can a baby dream?  Are reflex movements of the fetus, which the mother can feel before birth, recorded in the memory of the fetus?)  Regardless of any answers to these questions, perception theory would describe the first moments after the cutting of the umbilical cord as the beginning of a “piece of star-stuff contemplating star-stuff all around it” (Perception is Everything [Jan., 2016]).  The event causing the baby to take its first breath begins the lifelong empirical veridical flux entering one “side” of the baby’s world display screen, triggering an imaginative non-veridical flux from the other “side” of the screen.  The dual flux has begun; the baby is “alive” as an individual, independent of the symbiosis with its mother’s body; its life as a distinct person has begun.

The unique “long childhood” of Homo sapiens (due to the size-of-the-birth-canal/size-of-the-baby’s-skull-after-9-months’-gestation consideration), the longest “childhood” of any species before the offspring can “make it on its own” — a childhood necessarily elongated, else we would not be here as a species today — assures the world display screen is so primitive that the first few days, weeks, and months of each of us are never remembered as our memory develops on the non-veridical side of the screen.  It takes a while for memory generated from the empirical veridical flux to be able to create a counter flow of imaginative non-veridical flux back to the screen. Perception is Everything [Jan., 2016] indicates the dual flow is necessary for the screen to become “busy” enough to be noticed by the “mind’s eye,” that within us that “observes” the screen.  No doubt all of us first had our screens filled by perceptions of faces of caretakers (usually dominated by our mother’s face) and sensations of sound, touch, smell, and taste as our bodies adapted to the cycles of eating, eliminating, and sleeping.  During waking hours in which we were doing none of these, we began to focus on the inputs of our senses.  These inputs are the indicators we inevitably process non-veridically into an awareness of the inputs themselves; just as inevitably we at some point become aware of a “perceiver,” an observer of these inputs; we have an idea that “something” is perceiving, that this “something” relates to our caretaker(s) (whose face(s) we always feel good seeing), and that this “something” is us.  In each individual, the development of a subjective “I” is normally “there” in the head within a few months (the exact time interval probably differing for each individual); a distinction between “me” and “not-me” begins.  This distinction is self-consciousness in-the-making, or “proto-self-consciousness.”

That distinction between “me” and “not-me” is vital and fundamental for each piece of star-stuff beginning to contemplate his or her “fellow” star-stuff — contemplation that is constantly painting an increasingly complex world display screen inside his or her head.  Early on, anything that “disappears” when eyes are closed is “not-me;” anything that is hungry, that likes things in a hole below the eyes to quench that hunger, that experiences discomfort periodically way below the eyes, and that feels tactile sensations from different locales in the immediate vicinity (through the skin covering all the body as well as the “hole below,” the mouth) is “me.”  Eventually, “me” is refined further to include those strange appendages that can be moved at will (early volition) and put into the hunger hole below the eyes, two of which are easy to put in (hands and fingers) and two of which are harder to put in (feet and toes).  That face that seems to exist to make “me” feel better and even happy turns out to be part of “not-me,” and it becomes apparent that much of “not-me” does not necessarily make “me” feel better, but is interesting nonetheless.  Reality is being sorted out in the young brain into that which is sorted and that which sorts, the latter of which is the “mind’s eye,” self-consciousness.

In time, “me” can move at will, and that which can move thus is the “housing” and boundary limiting “me.”  As soon as the faces “me” can recognize are perceived to represent other “me’s,” the distinction between “me” and “you” begins, soon followed by “me,” “you,” and “them.”  Some “you’s” and “them’s” don’t look like other “you’s” and “them’s,” such as household pets.  Still other “you’s” and “them’s” don’t move on their own like “me, soon to be ‘I’” does, such as dolls and stuffed animals.  “You’s” and “them’s” separate into two categories — “alive” and “not-alive.”  As quantity becomes a more developed concept, it soon becomes apparent that outside “me” there are more “not-alives” than “alives;” “not-alives” soon are called “things,” and “alives” take on unique identities as “me” learns to recognize and later speak names.  Things are also non-veridically given names, and the genetic ability to quickly learn language “kicks in,” as well as the genetic ability to count and learn math.  In a few months’ time, existence for “me” has become both complex and fixating to its mind/brain, and is growing at an increasing rate (accelerated growth).  The name non-veridically given to “me” is the subjective “I” or the objective “myself” — both of which are understood to be self-consciousness.

This clearly is an approach similar to a psychology of infants, which might deal eventually with the development of the ego and the id.  This approach using perception theory allows a seamless tracing of the development of the human mind back before birth, employing a more objective approach to talking about subjectivity than that possessed by some other psychological approaches; it is an approach based upon evolutionary psychology.  In addition, it is clear that the emergence of self-consciousness according to perception theory demands a singular definition of the “self” or of “I” or of “myself,” in order to avoid the problems of schizophrenia and its multiple personalities.  Perhaps the widespread phenomenon of children making up “imaginary friends” is an evolved coping mechanism in the individual child’s imagination in order to avoid schizophrenia; an imaginary friend is not the same as the self-consciousness producing such a “friend.”  Just like the individual brain, self-consciousness is singularly unique, in ontological resonance with the brain.

 

II.  Education – Perception theory is compatible with the idea of what education should be.  Education is not a business turning students into future consumers; education is not a sports team turning students into participants; education is not training to turn students into operators of everything from computer keyboards to spaceship control panels.  Instead, education is but the development of students’ minds (1. Education Reform — Wrong Models! [May, 2013], 2. Education Reform — The Right Model [May, 2013], 3. Education Reform — How We Get the Teachers We Need [May, 2013], & Top Ten List for Teachers of HS Students Preparing for College or University (Not a Ranking) — A List for Their Students, Too! [Dec., 2014]).  The word “but” here is somewhat misleading, as it indicates that education might be simple.  However, education is so complex that as yet we have no science of education (#1 on the “List” in Top Ten List for Teachers of HS Students Preparing for College or University (Not a Ranking) — A List for Their Students, Too! [Dec., 2014]).  Perception theory indicates why education is so complex as to defy definition and “sorting out.”  Defining education is like the brain trying to define its own development, or, like a piece of star-stuff trying to self-analyze and contemplate itself instead of the universe outside itself.  At this writing, I am inclined to say that a more definitive sorting out of what education is and how it is accomplished inside individual brains is not impossible in the way that an individual seeing his/her own brain activity is impossible, or in the way that another person seeing my subjective world display screen in my head is impossible (the “subjective trap”) (Perception is Everything [Jan., 2016]).

Following this optimistic inclination, education is seen as developing in individual brain/minds a continuous and stable dual flow of veridical flux and non-veridical flux upon the individual’s world display screen (Perception is Everything [Jan. 2016]).  A “balance” of this dual flow in Perception is Everything [Jan., 2016] is seen as a desired “mid-point” of a spectrum of sanity, the two ends of which denote extreme cases of veridical insanity and non-veridical insanity.  Therefore, the goal of education is to make the probability of becoming unbalanced away from this mid-point in either direction as small as possible; in other words, education attempts, ideally, to make the concentration and focusing of the non-veridical upon the veridical in the student’s mind as strong as possible.  The non-veridical vigor of “figuring out” the veridical from “out there” outside the brain is matched by the vigor of the empirical bombardment of that same veridical daily data.  Making this focus a life-long habit, making this focus a comfortable, “natural,” and “fun” thing for the non-veridical mind to do for all time, is another way to state this goal of education.  Defining education in this manner seems compatible and resonant with the way our mind/brain seems to be constructed (with the necessary duality of the objective and the subjective); our mind/brains seem evolved to be comfortable with being at the mid-point without struggling to get or stay there; self-educated individuals are those fortunate enough to have discovered this comfort mostly on their own; graduates of educational institutions who become life-long scholars have been guided by teachers and other “educators” to develop this “comfort zone” in their heads.  Education, in this sense, is seen as behaving compatibly with the structure of the brain/mind that has assured our survival as a species over our evolution as a species.  In order to successfully, comfortably, and delightfully spend our individual spans of time in accordance with the evolution of our mind/brains, we must live a mental life of balance of the two fluxes; education, properly defined and thought upon in individual mind/brains, assures this balance, and therefore assures lives of success, comfort, and delight.  He/she who is so educated uses his/her head “in step” with the evolution of that head.

We evolved not to be religious, political, or artistic; we evolved to be in awe of the universe, not to be in awe of the gods, our leaders, or our creations.  We evolved not to be godly, patriotic, or impressive; we evolved to survive so that our progeny can also survive.  Religion, politics, and the arts are products of our cultural evolution, invented by our non-veridical minds to cope with surviving in our historical past.  In my opinion these aspects of human culture do not assure the balance of the two fluxes that maximizes the probability of our survival.  Only focusing upon the universe of which we are a part will maximize that probability — thinking scientifically and “speaking” mathematically, in other words.  Education, therefore, is properly defined as developing the scientifically focused mind/brain; that is, developing skills of observation, pattern recognition, mathematical expression, skepticism, imagination, and rational thinking.  But it is not an education in a vacuum without the ethical aspects of religion, the social lessons of political science and history, and the imaginative exercises of the arts.  In this manner religious studies, social studies, and the fine arts (not to mention vocational education) all can be seen as ancillary, participatory, and helpful in keeping the balance of the two fluxes, as they all strengthen the mind/brain to observe, recognize, think, and imagine (i.e. they exercise and maintain the “health” of the non-veridical).  I personally think non-scientific studies can make scientific studies even more effective in the mind/brain than scientific studies without them; non-scientific studies are excellent exercises in developing imagination, expression, senses of humor, and insight, attributes as important in doing science as in doing non-science.  The “well-rounded” scholar appreciates the role both the objective and the subjective play in the benefit of culture better than the “specialist” scholar, though both types of scholars should understand that the focus of all study, scientific or not, should be upon the veridical, the universe “out there.”  Not everyone can develop their talents, interests, and skills in the areas of science, math, engineering, and technology, but those who do not can focus their talents, interests, and skills toward developing some aspect of humanity-in-the-universe — toward exploring the limitless ramifications of star-stuff in self-contemplation.

Therefore, education, Pre-K through graduate school, needs a new vertical coordination or alignment of all curricula.  ALL curricula should be taught in a self-critical manner, as science courses are taught (or should be taught if they are not).  An excellent example of what this means was the list of philosophy courses I took in undergraduate school and graduate school.  Virtually all the philosophy courses I took or audited were taught in a presentation of X, of good things about X, and of bad things about X sequence.  In other words, all courses, regardless of level, should be taught as being fallible, not dogmatic, and subject to criticism.  A concept of reliable knowledge, not absolute truth, should be developed in every individual mind/brain so that reliability is proportional to verification when tested against the “real world,” the origin of the veridical flux upon our world display screen; what “checks out” according to a consensus of widely-accepted facts and theories is seen as more reliable than something that is supported by no such consensus.  Hence, the philosophy of education should be the universal fallibility of human knowledge; even the statement of universal fallibility should be considered fallible.  Material of all curricula should be presented as for consideration, not as authoritative; schools are not to be practitioners of dogma or propagators of propaganda.  No change should occur in the incentive to learn the material if it is all considered questionable, as material continues often to be learned in order to pass each and every course through traditional educational assessment (tests, exams, quizzes, etc.).  And one does not get diplomas (and all the rights and privileges that come with them) unless one passes his/her courses.  Certainly the best incentive to learn material, with no consideration of its fallibility other than it’s all fallible, is the reward of knowing for its own sake; for some students, the fortunate ones, the more one knows, the more one wants to know; just the knowing is its own reward.  Would that a higher percentage of present and future students felt that way about what they were learning in the classroom!

The “mantra” of education in presenting all-fallible curricula is embodied in the statement of the students and for the students.  Institutions of learning exist to develop the minds of students; socialization and extracurricular development of students are secondary or even tertiary compared to the academic development of students, as important as these secondary and tertiary effects obviously are.  As soon as students are in the upper years of secondary schooling the phrase by the students should be added to the other two prepositional phrases; in other words, by the time students graduate from secondary schools, they should have first-hand experience with self-teaching and tutoring, and with self-administration through student government and leadership in other student organizations.  Teachers, administrators, coaches, sponsors, and other school personnel who do not do what they do for the sake of students’ minds are in the wrong personal line of work.

Educational goals of schools should be the facilitation of individual student discovery of likes, dislikes, strengths, weaknesses, tastes, and tendencies.  Whatever diploma a student clutches should be understood as marking the completion of a successful regimen of realistic self-analysis; to graduate at some level should mean each student knows himself/herself in a level-appropriate sense; at each level each student should be simultaneously comfortable with and motivated by a realistic view of who and what he/she is.  Education should strive to have student bodies free of “big-heads,” bullies, “wall-flowers,” and “wimps.”  Part of the non-academic, social responsibility of schools should be help for students who, at any level, struggle, for whatever reason, in reaching a realistic, comfortable, and inspiring self-assessment of themselves.  Schools are not only places where you learn stuff about reality outside the self, they are places where you learn about yourself.  Students who know a lot “outside and inside” themselves are students demonstrating that the two fluxes upon the world display screen in their heads are in some sense balanced. (1. Education Reform — Wrong Models! [May, 2013], 2. Education Reform — The Right Model [May, 2013], 3. Education Reform — How We Get the Teachers We Need [May, 2013], & Top Ten List for Teachers of HS Students Preparing for College or University (Not a Ranking) — A List for Their Students, Too! [Dec., 2014])

Consequently, the only time education should be seen as guaranteeing equality is at the beginning, at the “start-line” the first day in grade K.  Education is in the “business” of individual development, not group development; there is no common “social” mind or consciousness — there is only agreement among individual brain/minds.  Phrases like “no child left behind” have resulted in overall mediocrity, rather than overall improvement.  Obviously, no group of graduates at any level can be at the same level of academic achievement, as each brain has gained knowledge in its own, unique way; some graduates emerge more knowledgeable, more talented, and more skilled than others; diverse educational results emerge from the diversity of our brain/minds; education must be a spectrum of results because of the spectrum of our existence, our ontology, of countless brain/minds.  Education, therefore, should be seen as the guardian of perpetual equal opportunity from day 1 to death, not the champion of equal results anywhere along the way.

[Incidentally, one of the consequences of “re-centering” or “re-focusing” the philosophy, the goals, and the practices of education because of perception theory may be a surprising one.  One aspect of a scientific curriculum, compared to, say, an average “humanities” curriculum, is that in science, original sources are normally not used, unless it is a history and philosophy of science course (Is history/philosophy of science a humanities course?).  I am ending a 40-year career of teaching physics, mostly the first-year course of algebra-based physics for high school juniors and seniors, and, therefore, ending a 40-year career introducing students to the understanding and application of Isaac Newton’s three laws of motion and Newtonian gravitational theory.  Never once did I read to my physics students, nor did I ever assign my physics students to read, a single passage from Philosophiae Naturalis Principia Mathematica, Newton’s introduction to the world of these theories.  Imagine studying Hamlet but never reading Shakespeare’s original version or some close revision of the original!

The reason for this comparison above is easy to see (but not easy for me to put in few words):  science polices its own content; if nature does not verify some idea or theory, that idea or theory is thrown out and replaced by something different that does a better job of explaining how nature works.  At any moment in historical time, the positions throughout science are expected to be the best we collectively know at that moment.  Interpretations and alternative views outside the present “best-we-know” consensus are the right and privilege of anyone who thinks about science, but until those interpretations and views start making better explanations of nature than the consensus, they are ignored (and, speaking as a scientist, laughed at).

Though many of the humanities are somewhat more “scientific” than in the past — for instance, history being more and more seen as a forensic science striving to recreate the most reasonable scenes of history — they are by definition focused on the non-veridical rather than the veridical.  They are justified in education, again, because they aid and “sharpen” the non-veridical to deal with the veridical with more insight than we have done in the past.  The problems we face in the future are better handled not only with knowledge and application of science, math, engineering, and technology but also with knowledge of what we think about, of what we imagine, of the good and bad decisions we have made collectively and individually in the past, and of the myriad of ways we can express ourselves, especially express ourselves about the veridical “real” world.  Since the original sources of these “humanities” studies are seen as being as applicable today as they were when written, since they, unlike Newton, were not describing reality, but only telling often imaginative, indemonstrable, and unverifiable stories about human behavior to which humans today can still relate, the original authors’ versions are usually preferred over modern “re-hashes” of the original story-telling.  The interest in the humanities lies in relating to the non-veridical side of the human brain/mind, while the interest in the sciences lies in the world reflecting the same thing being said about it; Newton’s laws of motion are “cool” not because of the personality and times of Isaac, but because they appear to most people today “true;” Hamlet’s soliloquies are “cool” not because they help us understand the world around us, but because they help us understand and deal with our non-veridical selves, which makes their creator, Shakespeare, also “cool;” the laws of motion, not Newton, are relevant today, but Shakespeare’s play is relevant today because in its original form it still leads to a myriad of possibly useful interpretations.  What leads to veridical “truth” is independent of its human source; what leads to non-veridical “stories” is irrevocably labeled by its originator.

To finally state my bracketed point on altered education as begged above the opening bracket: science, math, and engineering curricula should be expanded to include important historical details of scientific ideas, covering both the expulsion of the bad ideas of the past and the presentation of the good ideas of the present.  Including the reasons the expunged ideas are not part of the curriculum today would be the “self-critical” part of science courses.  Science teachers would be reluctant to add anything to the curriculum because of lack of time, true enough, but the clever science teacher can find the few seconds needed by being more anecdotal in their lessons, which would require them to be more knowledgeable of the history and philosophy of science.  Hence, all the curricula in the education suggested by perception theory would be similar — cast in the universal mold of presenting X, the good things about X, and the bad things about X.]

 

III.  The Origin of Politics (The “Toxic Twin”) – Perception is Everything [Jan., 2016] makes dealing with human politics straightforward, in that politics not only originated, in all likelihood, just as religion and its attendant theology originated, but has also developed along the same lines as theology, so similarly that politics could be considered the “toxic twin” of theology, in that it can turn as toxic (dangerous) to humanity as theology can. (Citizens! (I) Call For the Destruction of the Political Professional Class [Nov., 2012], Citizens! (II) The Redistribution of Wealth [Jan., 2013], Citizens! (III) Call for Election Reform [Jan., 2013], The United States of America — A Christian Nation? [June, 2012], An Expose of American Conservatism — Part 1 [Dec., 2012], An Expose of American Conservatism – Part 2 [Dec., 2012], An Expose of American Conservatism — Part 3 [Dec., 2012], Sorting Out Jesus [July, 2015], At Last, a Probable Jesus [Sept., 2015], & Jesus — A Keeper [Sept., 2015]) In order for us to survive in our hunter-gatherer past, leaders and organizers were apparently needed as much as shamans, or proto-priests; someone or a group of someones (leader, chief, council, elders, etc.) had to decide what would be the best next thing for the collective group to do (usually regarding the procuring of food for the group’s next eating session or regarding threats to the group from predators, storms, or enemy groups over the next hill); just as someone was approached to answer the then unanswerable questions, like where storms come from and why so-and-so had to die, leaders of the group were looked to for solving the group’s practical and social problems.  In other words, politics evolved out of necessity, just like religion.  Our non-veridical capabilities produced politics to meet real needs, just as they produced religion to meet real needs.

But, just as theology can go toxic, so can politics and politics’ attendant economic theory.  Voltaire’s statement that those who can make you believe in absurdities can make you commit atrocities applies to political and economic ideology just as it does to gods and god stories.  Anything based purely upon non-veridical imagination is subject to application of Voltaire’s statement.  However, I think politics has an “out” that theology does not.  Theology is epistemologically trapped, in that one god, several gods, or any god story cannot be shown to be truer (better in describing reality) than another god, several other gods, or another god story.  Politics is not so trapped, in my opinion, as it does not have to be “attached at the hip” to religion, as has been demonstrated in human history since the 18th century.  Politics can be shown to be “better” or “worse” than its previous version by comparing the political and social outcome of “before” with “after.”  No political solution solves all human problems, if for no other reasons than that such problems continually evolve in a matter of weeks or less and that no political installment can anticipate the problems it will encounter, even when it has solved the problems of the “before.” Nonetheless, I think one can argue that the fledgling United States of America created by the outcome of the American Revolution and the birth of the U.S. Constitution was better than the colonial regime established in the 13 colonies under the reign of George III.  The same can be said about the independent nations that emerged peacefully from being commonwealths of the British Empire, like India, Canada, and Australia, though the USA, India, Canada, and Australia were not and are not perfect or free from “birth pangs.”

What are the political attributes that are “better” than what was “before?”  Many of the references cited just above point out a number of them, a list I would not claim to be complete or sufficient.  Overall, however, the history of Western and Eastern Civilization has painfully demonstrated, at the cost of the spilled blood of millions (Thirty Years’ War, Napoleonic Wars, World War I, World War II, etc.), that theocracies and monarchies are “right out.”  [Here I am applying the philosophy that history is not so much a parade of great individuals, but, rather, is more aptly seen as a parade of great ideas — a parade of non-veridical products much better than other such products.]  Direct democracies work only for small populations, so a representative form of government, a republic, works for the larger populations of the modern world.  Clearly, secular autocracies and dictatorships are also “right out.”  Class structure of privilege and groundless entitlement still rears its ugly head even in representative republican governments in the form of rule-by-the-few of power (oligarchies) and/or wealth (plutocracies).  To prevent oligarchies and plutocracies, elected representative government officials should be limited in how long they can serve so that they cannot become a political professional class (limited terms of office); in other words, politicians should be paid in such a way that they cannot make a profit from holding office.

[Almost the exact same things can be said of government work staffs and other non-elected officials — the bureaucrats of “big government.”  Terms of service should be on a staggered schedule of limitations so that some “experience” is always present among both the elected and their staffs; bureaucrats should be paid in such a way that they cannot become a professional class of “bean-counters” at taxpayer expense; public service should be kept based upon timely representation, and civil service should be kept based upon a system of timely merit; politicians are elected by voters, and bureaucrats are selected by civil service testing — both groups subject to inevitable replacement.]

This, in turn, calls for severe restrictions on lobbying of elected officials of all types (making lobbying a crime?).  Preventing oligarchies and plutocracies of any “flavor” can only be effective if the overall political philosophy applied is a liberal one (“liberal” meaning the opportunity to achieve wealth, power, and influence while simultaneously working so that others around you, all over the globe, can achieve the same, all without unjust expense to someone else’s wealth, power, and influence).  The philosophy of such a liberal posture I call “liberalist,” meaning that freedom, equality, and brotherhood (the liberté, égalité, and fraternité of the French Revolution) are all three held constantly at equal strength.  When one or two of the three are diminished in order to relatively boost the other two or one, then things like the atrocities of the French Terror, the atrocities of fascism, the atrocities of communism, or the atrocities of unregulated capitalism result.

[The word “equality” in political philosophy as used above must be distinguished from the “equality” issue of education in II. above.  When the US Declaration of Independence speaks of “all men are created equal,” that does not mean equal in knowledge, talents, and skills; rather it means a shared, universal entitlement to basic human rights, such as, in the Declaration’s words, “life, liberty, and the pursuit of happiness.”  We all have equal rights, not equal educational results; equal rights do not mean equal brain/minds — something the Terror tragically and horribly did not grasp; equal rights to education do not mean equal knowledge, talents, and skills for graduates — something too many “educators” tragically do not grasp.  Perception theory would suggest political equality is different from educational equality; the word “equality” must be understood in its context if the appropriate adjective is not used with the noun “equality.”  The difference is crucial; political equality is crucial to the healthy social organization of the species, while educational equality (equal results, not equal opportunity) is tragic and harmful to the individual brain/minds of the species.  Awareness of this difference, or always making this semantic distinction, should avoid unnecessary confusion.]

Certain Western European countries, such as the Scandinavian countries, have shown the future of political systems toward which all nations should strive in accordance with liberal, liberalist views.  If anything is needed by the population at large, then a socialist program is called for to deal with all fairly — such as social security, free public education through the university level, postal service, public transportation, universal single-payer health care, public safety, state security, and “fair-share” taxation of all who earn and/or own.  No one is allowed to achieve personal gain through regulated capitalism or through leadership in any of these socialist programs except upon merit, meaning his/her gain (in wealth, power, and/or influence) is not at the unjust loss of someone else and is based solely upon the successful individual’s talents, skills, and knowledge; competition in capitalism and in program leadership is both necessary and in need of limitations. It is OK to “lose” in the game of capitalism, as long as one loses “fair and square;” every business success and every business failure must be laid at the feet of the entrepreneur.  The political system with its social programs is merely the crucible of both individual success and individual failure, a crucible continually monitored and regulated so as to assure perpetual and equal opportunity for all.  Regulation of this crucible is achieved by the electors of political leadership and program leadership — regulation keeping the programs, like capitalism, perpetually merit-based, fair, and just.  This is a system of “checks and balances” toward which every political system should strive.

The foregoing is not a description of some “pie-in-the-sky” Utopia; it is a description of what history has painfully taught us as “the way” of avoiding a theology-like toxicity for politics.  Politics is not doomed to be theology’s “toxic twin,” but it will be so doomed if the bloody lessons of its past are not heeded.  In my opinion, it really is not complicated: it is better to liberally trade, tolerate, and befriend than to conservatively exploit, distrust, and demonize.  Politically speaking, we need to securely develop a xenophilia to replace our prehistoric and insecure xenophobia.  This “xeno-development” is one of the great lessons taught by the modern world over the last 300 years, and it is begged by perception theory.

RJH

 

Perception Is Everything

Recently a model of human perception has occurred to me. Perception is like that “screen” of appearance before us in our waking hours that is turned off when we are asleep. Yet, it appears it does not really turn off during slumber, since we remember dreams we had before we awoke. The moments just before we “nod off” or just as we awake seem to be times when perception is “half-way” turned on. The “fuzziness” of this “half-way switch” is clearly apparent on those mornings we awake and momentarily do not know exactly where we slept.

 

Say I am sitting in an enclosed room with a large card painted uniformly with a bright red color. Focusing upon only my visual sensation, suppressing the fact that I am also sensing the tactile signals of sitting in a chair with my feet on the floor as well as peripherally seeing “in the corner of my eye” the walls and other features of the room, I am only visually observing the color “red,” all for simplicity. Light from the card enters my eyes and is photo-electrically and electro-chemically processed into visual signals down my optic nerve to the parts of my brain responsible for my vision. The result of this process is the perception of the color “red” on the “screen” of my perception. If I were to describe this perception to myself I would simply imagine the word “red” in my head (or the word “red” in some other language if my “normal” spoken language were not English); were I to describe this perception to someone else in the room, say, a friend standing behind me, I would say, “I am seeing the color red,” again in the appropriate language.

Yet, if my friend could somehow see into my head and observe my brain as I claimed to be seeing red, that person would not experience my sensation or perception of “red.” He/she would see, perhaps with the help of medical instrumentation, biochemical reactions and signals on and in my brain cells. Presumably when I perceive red at a different moment in time later on, the observer of my brain would see the same pattern of chemical reactions and bio-electrical signals.

 
On the “screen” of my perception, I do NOT see the biochemistry of my brain responsible for my perception of red; were I to observe inside the head of my friend in the room while he/she was also focusing on the red card, I would NOT see his/her “screen” of perception, but only the biochemical and bio-electrical activity of his/her brain. It is IMPOSSIBLE to experience (to perceive) both the subjective perception of red and observe the biochemistry responsible for the same subjective perception within the same person. We can hook up electrodes from our own head to a monitor which we observe at the same time we look at red, but we would be seeing only another representation of the biochemistry forming our perception, not the biochemistry itself, as well as perceiving the red perception. I call this impossibility “the subjective trap.”

 
And yet, my friend and I make sense of each of our very individual impossibilities, of each of our very personal subjective traps, by behaving as if the other perceives red subjectively exactly the same, and as if the biochemical patterns in our respective brains are exactly the same. We are ASSUMING these subjective and biochemical correlations are the same, but we could never show this to be the case; we cannot prove our individual perceptions in our own head are the same perceptions in other heads; we cannot ever know that we perceive the same things that others around us perceive, even if focusing upon the exact same observation. The very weak justification of this assumption is that we call our parallel perceptions, in this scenario, “red.” But this is merely the learning of linguistic labels. What if I had been raised in complete isolation and had been told that the card was “green?” I would say “green” when describing the card while my friend, raised “normally,” would say “red.” (Note I’m stipulating neither of us is color blind.) Such is the nature of the subjective trap.

 
[If one or both of us in the room were color-blind, comparison of visual perceptions in the context of our subjective traps would be meaningless — nothing to compare or assume. In this scenario, another sensation both of us could equally perceive, like touching the surface of a piece of carpet or rubbing the fur of a cute puppy in the room with us, would be substituted for seeing the color red.]

 
The subjective trap suggests the dichotomy of “objective” and “subjective.” What we perceive “objectively” and what we perceive “subjectively” do not seem to overlap (though they seem related and linked), leading to a separation of the two adjectives in our culture, a separation which has a checkered history. Using crude stereotypes, the sciences claim objectivity is good while subjectivity is suspect, while the liberal arts (humanities) claim subjectivity is good while objectivity is ignorable. Even schools, colleges, and universities are physically laid out with the science (including mathematics and engineering) buildings on one end of the campus and the liberal arts (including social studies and psychology) buildings on the other. This is the “set-up” for the “two cultures’” “war of words.” I remember, as an undergraduate physics major, debating an undergraduate political science major as we walked across campus over which has had the greater impact upon civilization, science or politics. We soon came to an impasse, an impasse that possibly could be blamed, in retrospect over the years, on the subjective trap. Ideas about the world outside us seemed at odds with ideas about our self-perception; where we see ourselves seemed very different from who we see ourselves to be; what we are is different from who we are.

Yet, despite being a physics major and coming down “hard” on the “science side” of the argument, I understood where the “subjective side” was coming from, as I was in the midst of attaining, in addition to my math minor, minors in philosophy and English; I was a physics major who really “dug” my course in existentialism. It was as if I “naturally” never accepted the “two cultures” divide; it was as if I somehow “knew” both the objective and the subjective had to co-exist to adequately describe human experience, to define the sequence of perception that defines a human’s lifespan. And, in this sense, if one’s lifespan can be seen as a spectrum of perception from birth to death of that individual, then, to that individual, perception IS everything.

How can the impossibility of the subjective trap be modeled? How can objectivity and subjectivity be seen as a symbiotic, rather than as an antagonistic, relationship within the human brain? Attempted answers to these questions constitute recent occurrences inside my brain.

 

Figure 1 is a schematic model of perception seen objectively – a schematic of the human brain and its interaction with sensory data, both from the world “outside” and from the mind “inside.” The center of the model is the “world display screen,” the result of a two-way flow of data, empirical (or “real world” or veridical) data from the left and subjective (or “imaginative” or non-veridical) data from the right. (Excellent analogies to the veridical/non-veridical definitions are the real image/virtual image definitions in optics; real images are those formed by actual rays of light and virtual images are those of appearance, only indirectly formed by light rays due to the way the human brain geometrically interprets signals from the optic nerves.) [For an extensive definition of veridical and non-veridical, see At Last, A Probable Jesus [August, 2015]] Entering the screen from the left is the result of empirical data processed by the body’s sense organs and nervous system, and entering the screen from the right is the result of imaginative concepts, subjective interpretations, and ideas processed by the brain. The “screen” or world display is perception emerging to the “mind’s eye” (shown on the right “inside the brain”) created by the interaction of this two-way flow.

 
Figure 1 is how others would view my brain functioning to produce my perception; Figure 1 is how I would view the brains of others functioning to produce their perceptions. This figure helps define the subjective trap in that I cannot see my own brain as it perceives; all I can “see” is my world display screen. Nor can I see the world display screens of others; I can only view the brains of others (short of opening up their heads) as some schematic model like Figure 1. In fact, Figure 1 is a schematic representation of what I would see if I were to peer inside the skull of someone else. (Obviously, it is grossly schematic, bearing no resemblance to brain, nervous system, and sense organ physiology. Perhaps many far more proficient in neuro-brain function than I, and surely such individuals in the future, can and will correlate the terms on the right side of Figure 1 with actual parts of the brain.)

 
Outside data, collectively labeled “INPUT” on the far left of Figure 1, bombards all the body’s senses — sight, sound, smell and taste, heat, and touch. Data that stimulates the senses is labeled “PERCEPTIVE” and either triggers the autonomic nervous system to the muscles for immediate reaction (sticking your fingers into a flame), requiring no processing or thinking, or goes on to be processed as possible veridical data for the world display. However, note that some inputs for processing “bounce off” and never reach the world display; if we processed the entirety of our data input, our brains would “overload,” using up all brain function for storage and having none for consideration of the data “let in.” This overloading could be considered a model for so-called “idiot savants” who perceive and remember so much more than the “average” person (“perfect memories”), yet have subnormal abilities for rational thought and consideration. Just how some data is ignored and some is processed is not yet understood, but I would guess that it is a process that differs in every developing brain, resulting in no two brains, even those of twins, accepting and rejecting data EXACTLY alike. What is for sure is that we have evolved “selective” data perception over hundreds of thousands of years that has assured our survival as a species.
The accepted, processed data that enters our world display in the center of Figure 1 as veridical data from the outside world makes up the “picture” we “see” on our “screen” at any given moment, a picture dominated by the visual images of the objects we have before us, near and far, but also supplemented by sound, smell, tactile information from our skin, etc. (This subjective “picture” is illustrated in Figure 2.) The “pixels” of our screen, if you please, enter the subjective world of our brain shown on the right of Figure 1 in four categories – memory loops, ideas, self-perception, and concepts – as shown by the double-headed, broad, and straight arrows penetrating the boundary of the world display with the four categories. The four categories “mix and grind” this newly-entered data with previous data in all four categories (shown by crossed and looped broad, double-headed arrows) to produce imagined and/or reasoned results back upon the same world display as the moment’s “picture” – non-veridical data moving from the four categories back into the display (thus, the “double-headedness” of the arrows). Thus can we imagine things before us that are not really there at the moment; we can, for instance, imagine a Platonic “perfect circle” (non-veridical) not really there upon a page of circles actually “out there” drawn upon a geometry textbook’s page (veridical) at which we are staring. In fact, the Platonic “perfect circle” is an example of a “type” or “algorithmic” or symbolic representation for ALL circles created by our subjective imagination so we do not have to “keep up” with all the individual circles we have seen in our lifetime. Algorithms and symbols represent the avoidance of brain overload.

 
From some considered input into our four categories of the brain come “commands” to the muscles and nervous system to create OUTPUT and FEEDBACK into the world outside us in addition to the autonomic nerve commands mentioned above, like the command to turn the page of the geometry text at which we are looking. Through reactive and reflexive actions, bodily communication (e.g. talking), and environmental manipulation (like using tools), resulting from these feedback outputs into the real world (shown at bottom left of Figure 1), we act and behave just as if there had been an autonomic reaction, only this time the action or behavior is the result of “thinking” or “consideration.” (The curved arrow labeled “Considered” leading to the muscles in Figure 1.)
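To make the flow just described easier to follow, here is a minimal, purely illustrative sketch in Python of the traffic Figure 1 depicts: filtered empirical input arriving from the left, the four non-veridical categories projecting imagined results back onto the same screen, and a considered command going out to the muscles. Every name in it (WorldDisplay, veridical_flux, and so on) is a hypothetical stand-in of my own for a label in the schematic, not a claim about actual brain physiology.

```python
# A toy, purely illustrative sketch of the Figure 1 data flow (hypothetical names).
# It mirrors only the arrows in the schematic:
# INPUT -> selective filtering -> world display screen <-> four categories -> considered OUTPUT.
import random

class WorldDisplay:
    """A stand-in for the 'world display screen' upon which both fluxes land."""

    def __init__(self):
        self.picture = []  # what the "mind's eye" sees at this moment
        # The four categories on the right side of Figure 1
        self.categories = {
            "memory loops": [],
            "ideas": [],
            "self-perception": [],
            "concepts": [],
        }

    def veridical_flux(self, sensed_data):
        """Empirical input from the left: some data is 'let in,' the rest bounces off."""
        accepted = [datum for datum in sensed_data if random.random() < 0.5]
        for datum in accepted:
            self.picture.append(("veridical", datum))
            self.categories["memory loops"].append(datum)  # stored for later "mixing"
        return accepted

    def non_veridical_flux(self):
        """Imagined results from the right: each category projects back onto the screen."""
        for name, store in self.categories.items():
            if store:
                self.picture.append(("non-veridical", f"{name} image of {store[-1]}"))

    def considered_output(self):
        """A considered command to the muscles, as opposed to an autonomic reflex."""
        if not self.picture:
            return "do nothing"
        kind, content = self.picture[-1]
        return f"act upon the {kind} perception of {content}"

screen = WorldDisplay()
screen.veridical_flux(["red card", "chair pressure", "room walls"])
screen.non_veridical_flux()
print(screen.picture)
print(screen.considered_output())
```

The only point of the sketch is the shape of the flow: the same screen receives from both directions, and any input not “let in” never reaches it, which is the model’s guard against overload.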

 

Note how Figure 1 places epistemological and existential terms like CONSCIOUSNESS, Imagination, Knowing, Intention & Free Will, and Reason on the schematic, along with areas of the philosophy of epistemology, like Empiricism, Rationalism, and Existentialism (at the top of Figure 1). These placements are my own philosophical interpretations and are subject to change and placement alteration indicated by a consensus of professional and amateur philosophers, in conjunction with consensus from psychologists and brain physiologists, world-wide.
Figure 2 is a schematic of the “screen” of subjective perception that confronts us at every moment we see, hear, smell, taste, and/or touch. Figure 2 is again crudely schematic (like Figure 1), in this case devoid of the richness of the signals of our senses processed and displayed to our “mind’s eye.” Broad dashed arrows at the four corners of the figure represent the input to the screen from the four categories on the right of Figure 1 – memory loops, ideas, self-perception, and concepts. Solid illustrated objects on Figure 2 represent processed, veridical, and empirical results flowing to the screen from the left in Figure 1, and dashed illustrated objects on Figure 2 represent subjective, non-veridical, type, and algorithmic results flowing to the screen from the right in Figure 1. Thus Figure 2 defines the screen of our perception as a result of the simultaneous flow of both the veridical and the non-veridical making up every waking moment.


Figure 1 — A Model of the Objectivity of Perception

 

(Mathematical equations cannot be printed in dashed format, so the solid equations and words, like History, FUTURE, Faith, and PRESENT, represent both veridical and non-veridical forms; note I was able to represent the veridical and non-veridical forms of single numbers, like “8” and certain symbols, like X, equals, and does not equal.) Thus, the solid lightning bolt, for example, represents an actual observed bolt in a thunderstorm and the dashed lightning bolt represents the “idea” of all lightning bolts observed in the past.

 

The “subjective trap” previously introduced above is defined and represented by the rule that nothing of Figure 1 can be seen on Figure 2, and vice-versa. In my “show-and-tell” presentation of this perception model encapsulated in both figures, I present the figures standing on end at right angles to each other, so that one figure’s area does not project upon the area of the other – two sheets slit half-height so that one sheet slides into the other. Again, a) Figure 2 represents my own individual subjective screen of perception no one else can see or experience; b) Figure 1 represents the only way I can describe someone else allegedly perceiving as I do. I cannot prove a) and b) are true, nor can anyone else. I can only state with reasonable certainty that both someone else and I BEHAVE as if a) and b) are true. In other words, thanks to the common cultural experience of the same language, my non-color-blind friend and I in the room observing the red-painted card agree the card “is red.” To doubt our agreement that it is red would stretch both our limits of credulity into absurdity.

 
The model described above and schematically illustrated in Figures 1 and 2 can be seen as one way of describing the ontology of human beings, of describing human existence. Looking at Figure 1, anything to the left of the world display screen is the only way we know anything outside our brain exists and anything to the right of the world display screen is the only way we know we as “I’s” exist in a Cartesian sense; anything to the right is what we call our “mind,” and we assume we think with our mind; in the words of Descartes, “I think, therefore I am.” We see our mind as part of the universe being “bombarded” from the left, so we think of ourselves as part of the universe. Modern science has over the centuries given us some incredible ontological insights, such as all physical existence is made up of atoms and molecules and elementary particles; we can objectively or “scientifically” describe our existence, but we do so, as we describe anything else, with our subjective mind; we, as self-conscious beings, describe the veridical in the only way we possibly can – non-veridically. Thus, the model suggests the incredible statement made by scientists and philosophers of science lately. Recalling that atoms are created in the interior of stars (“cooked,” if you please, by nuclear fusion inside stars of various sizes and temperatures) that have long since “died” and spewed out their atoms in


Figure 2 — A Model of the Subjectivity of Perception (The “Screen”)

 

contribution to the formation of our own solar system around 4.6 billion Earth years ago, and recalling our bodies, including our brains, are made of molecules made from the atoms of dead and gone stars, the statement “We are ‘star-stuff’ in self-contemplation” makes, simultaneously, objective and subjective, or scientific and artistic, “spiritual sense.”

We can veridically “take in,” “observe,” “experience,” or “contemplate” anything from the vast universe outside our body as well as the vast universe inside our body outside our brain while at the same time we can imagine non-veridically limitless ways of “making sense” of all this veridical data by filing it, storing it, mixing it, and thinking about it, all within our brain. We are limitless minds making up part of a limitless universe.

 

As if that were not enough, each of us, as a veridical/non-veridical “package of perception,” is unique. Every human has a unique Figure 1 and a unique Figure 2. Our existence rests upon the common human genome of our species, the genetic “blueprint” that specifies the details of our biological existence. Yet, every individual’s genome is different from every other (even if only by 0.1%, or by a factor of 0.001), just considering that mutations even for identical twins make their two “blueprints” slightly different once the two organisms exist as separate embryos in the womb. Moreover, how we behave, and, therefore, how we respond non-veridically to the veridical data we receive individually, even from the same environment shared by others, is shaped by the unique series of experiences each of us has had in our past. Hence, each person is a unique individual genome subjected to unique environmental experiences, the exact copy of which cannot possibly statistically exist.

 

The world display screen of an individual in any given moment has never been perceived before, nor will it ever be perceived again, as in the next moment the screen is modified by the dual flux of Figure 1, the veridical flux from the left and the non-veridical flux from the right. The life of an individual is a series of receiving this ever-changing dual flux and thinking or acting in the real world upon the basis of this dual flux; it is a series of two-way perceptions. The life of an individual is observed by another individual as a series of perceived behaviors assumed, but never proven, to be generated in the same way as those of the observer. All in the span of a human life is perception; to an individual human being, perception has to be everything.

 

This model suggests to me the absurdity of having objectivity and subjectivity irreconcilably separate; it suggests, rather, that they are inseparable; they go together like, in the words of the song, “horse and carriage” or “love and marriage.” The blending of objective data and imaginative concepts in our brain makes our perception, our conscious “everything,” or existence as a self-conscious being, if you please, possible. What we are is the veridical of our screen of perception; who we are is the non-veridical of the screen. In other words, the scientist is as potentially subjective as the poet, and the poet is as potentially objective as the scientist; they differ only in the emphases on the contents of their respective screens of perception. For the “two sides” of campuses of higher learning to be at “war” over the minds of mankind is absurd – as absurd as the impasse the political science major and I reached in conversation so many years ago.

 
If the above were all the model and its two figures did, its conjuring would have been well worth it, I think, but the above is just the tip of the iceberg of how the model can be applied to human experience. Knowing how prone we are to hyperbole when talking about our “brain children,” I nonetheless feel compelled to suggest this model of perception can be intriguingly applied to almost any concept or idea the human brain can produce – in the sense of alternatively defining the concept using “both worlds,” both the objective and the subjective, instead of using one much more than the other. In other words, we can define with this model almost anything more “humanly” than before; we can define and understand almost anything with “more” of ourselves than we have done in the past.

 

Take the concept of the human “soul” for example. It seems to me possible that cultures that use the concept of soul, whether in a sacred or secular sense, whether in the context of religion or psychology, are close to using the concept of the “mind’s eye” illustrated in Figure 1 of the model. The “mind’s eye” is the subjective “I,” the subjective observer of the screen, the “see-er,” the “smell-er,” the “taste-er,” the “hear-er,” the “touch-er,” the “feel-er” of perception; the soul is the active perceiver of subjective human experience. The soul defines self-consciousness; it is synonymous with the ego. This view is consistent with the soul being defined as the essence of being alive, as being that which “leaves” the body upon death. Objectively, we would say that death marks the ceasing of processing veridical data; subjectively, we would say that death marks the ceasing of producing non-veridical data and the closing of the “mind’s eye.”

 

Yet the soul is a product of the same physiology as the pre-conscious “body” of our evolutionary ancestors. In other words, the soul “stands upon the shoulders” of the id, our collection of instincts hewn over millions of years. So, in addition, we would objectively say that death also marks the ceasing of “following” our instincts physically and mentally; our unique, individual genome stops defining our biological limitations and potentialities. The elements of our body, including our brain, eventually blend to join the elements of our environment. Objectively, we would say death marks our ceasing to exist as a living being. The concept of the soul allows death to be seen as the “exiting” or “leaving” of that necessary to be called “alive.”

 
So, the concept of the soul could be discussed as the same as or similar to the concept of the ego, and issues such as when a developing human fetus (or proto-baby) develops or “receives” a soul/ego, which in turn has everything to do with the issue of abortion, can be discussed without necessarily coming to impasses. (See my The ‘A’ Word – Don’t Get Angry, Calm Down, and Let Us Talk, [April, 2013] and my The ‘A’ Word Revisited (Because of Gov. Rick Perry of Texas), or A Word on Bad Eggs [July, 2013]) I said “could be,” not “will be,” discussed without possibly coming to impasses. Impasses between the objective and subjective seem more the norm than the exception, unfortunately; the “two cultures war” appears ingrained. Why?

 
Earlier, I mentioned casually the answer the model provides to this “Why?”. The scientist/engineer and the artist/poet differ in their emphases of either the veridical flux to the world display screen or the non-veridical flux to the same world display screen of their individual brains. By “emphasis” I merely mean the assignment of more importance by the individual to one flux direction or the other in his/her head. At this point, one is reminded of the “left-brain, right-brain” dichotomy dominating brain/mind modeling since the phenomenon of the bicameral mind became widely accepted. The perception model being presented here incorporates on the non-veridical side of the perception screen both analytical (left) brain activity and emotional (right) brain activity in flux to the screen from the right side of Figure 1. Just as my use of left/right in Figure 1 is not like the use of left/right in bicameral mind/brain modeling, this model of perception is not directly analogous to bicameral modeling. What the perception model suggests, in my opinion, is that the analytical/emotional chasm of the human brain is not as unbridgeable as the “left-brain-right-brain” view might suggest.

More specifically, the perception model suggests that the “normal” or “sane” person keeps the two fluxes to the world display screen in his/her head “in balance,” always one flux mitigating and blending with the other. It is possible “insanity” might be the domination of one flux over the other so great that the dominated flux is rendered relatively ineffective. If the veridical flux is completely dominant, the person’s mind is in perpetual overload with empirical data, impotent to sort or otherwise deal with the one-way bombardment on his/her world display screen; such a person would presumably be desperate to “turn off” the bombardment; such a person would be driven to insanity by sensation. If the non-veridical flux is completely dominant, the person’s mind is in a perpetual dream of self-induced fantasy, sensing with all senses that which is NOT “out there;” such a person would be driven to insanity by hallucination. In this view, the infamous “acid trips” of the 1960’s induced by hallucinogenic drugs such as LSD could be seen as self-induced temporary periods of time in which the non-veridical flux “got the upper hand” over the veridical flux.

This discussion of “flux balance” explains why dreams are depicted in Figure 1 as “hovering” just outside the world display screen. The perception model suggests dreams are the brain’s way of keeping the two fluxes in balance, keeping us as “sane” as possible. In fact, the need to keep the fluxes in balance, seen as the need to dream, may explain why we and other creatures with large brains apparently need to sleep. We need “time outs” from the empirical data influx (not to mention “time outs” just to rest the body’s muscular system and other systems) to give dreaming the chance to balance out the empirical with the fanciful on the stage of the world display. Dreams are the mixtures of the veridical and non-veridical not needed to be stored or acted upon in order to prevent overload from the fluxes of the previous day (or night, if we are “night owls”); they play out without being perceived in our sleeping unconsciousness (except for the dreams we “remember” just before we awaken), like files in computer systems sentenced to the “trash bin” or “recycle bin” marked for deletion. Dreams can be seen as a sort of “reset” procedure that prepares the world display screen for the upcoming day’s (or night’s) two-way flux flow that defines our being awake and conscious.

This model might possibly suggest new ways of defining a “scientific, analytical mind” (“left brain”) and comparing that with an “artistic, emotional mind” (“right brain”). Each could be seen as a slight imbalance (emphasis on “slight” to remain “sane”) of one flux over the other, or, better, as two possible cases of one flux mitigating the other slightly more. To think generally “scientifically,” therefore, would be when the non-veridical flux blends “head-on” upon the world display screen with the veridical flux and produces new non-veridical data that focuses primarily upon the world external to the brain; the goal of this type of non-veridical focus is to create cause/effect explanations, to problem-solve, to recognize patterns, and to create non-veridically rational hypotheses, or, as I would say, “proto-theories,” or scientific theories in-the-making. Thus is knowledge about the world outside our brain increased. To think generally “artistically,” on the other hand, would be when the non-veridical flux takes on the veridical flux upon the world display screen as ancillary only, useful in focusing upon the “world” inside the brain; the goal of this type of non-veridical focus is to create new ways of dealing with likes, dislikes, and emotions, to evoke “feelings” from morbid to euphoric, and to modify and form tastes, from fanciful thinking to dealing emotionally with the external world in irrational ways. Thus is knowledge about what we imagine and about what appears revealed to us inside our brain increased.

With these two new definitions, it is easy to see that we have evolved as a species capable of being simultaneously both scientific and artistic, both “left-brain” and “right-brain;” as I said earlier, the scientist is as potentially subjective as the poet, and the poet is as potentially objective as the scientist. We do ourselves a disservice when we believe we have to be one or the other; ontologically, we are both. Applying the rule of evolutionary psychology that any defining characteristic we possess as a species and pass on to our progeny was probably necessary, today and/or in our past, to our survival (or, at minimum, was “neutral” in contributing to our survival), the fact we are necessarily a scientific/artistic creature was in all likelihood a major reason we evolved beyond our ancestral Homo erectus and “triumphed” over our evolutionary cousins like the Neanderthals. When we describe in our midst a “gifted scientist” or a “gifted artist,” we are describing a person who, in their individual, unique existence, purposely developed, probably by following their tastes (likes and dislikes), one of the two potentialities over the other. The possibility that an individual can be gifted in both ways is very clear. (My most memorable example of a “both-way” gifted person was when I, as a graduate student, looked into the orchestra pit at a production of Handel’s Messiah and saw in the first chair of the violin section one of my nuclear physics professors.) Successful people in certain vocations, in my opinion, do better because of strong development of both their “scientific” and “artistic” potentialities; those in business and in service positions need the ability to deal simultaneously and successfully with problem solving and with the emotions of colleagues and clientele. Finding one’s “niche” in life and in one’s culture is a matter of taste, depending on whether the individual feels more comfortable and satisfied “leaning” one way or another, or being “well-rounded” in both ways.

Regardless of the results of individual tastes in individual circumstances, the “scientist” being at odds with the “artist” and vice-versa is always unnecessary and ludicrous; the results of one are no better or worse than those of another, as long as those results come from the individual’s volition (not imposed upon the individual by others).

 

From the 1960’s “acid rock, hard rock” song by Jefferson Airplane, Somebody to Love:

When the truth is found to be……lies!
And all the joy within you…..dies!
Don’t you want somebody to love?
Don’t you need somebody to love?
Wouldn’t you love somebody to love?
You better find somebody to love!

These lyrics, belted out by front woman Grace Slick, will serve as the introduction to two of the most interesting and most controversial applications of this perception theory. The first part about truth, joy, and lies I’ll designate as GS1, for “Grace Slick Point 1” and the second part about somebody to love I’ll designate as GS2.

Going in reverse order, GS2 to me deals with that fundamental phenomenon without which our cerebral species or any such species could not have existed – falling in love and becoming parents, or, biologically speaking, pair bonding. The universal human theme of erotic love is the basis of so much of culture’s story-telling, literature, poetry, and romantic subjects of all genres. Hardwired into our mammalian genome is the urge, at the onset of puberty, to pair-bond with another of our species and engage, upon mutual consent, in sexual activity. If the pair is made of two different genders, such activity might fulfill the genome’s “real” intent of this often very complex and convoluted bonding – procreation of offspring; procreation keeps the genes “going;” it is easily seen as a scientific form of “immortality;” we live on in the form of our children, and in our children’s children, and so on. Even human altruism seems to emerge biologically from the urge to propagate the genes we share with our kin.

Falling in love, or pair bonding, is highly irrational, and, therefore, a very non-veridical phenomenon; love is blind. When one is in love, the shortcomings of the beloved are ignored, because their veridical signals are probably blocked non-veridically by the “smitten;” when one is in love, and when others bring up any shortcomings of the beloved, they are denied by the “smitten,” often in defiance of veridical evidence. If this were not so, if pair bonding were a rational enterprise, far fewer pair bonds would occur, perhaps threatening the perpetuation of the species into another generation. [This irrationality of procreation was no better portrayed than in an episode of the first Star Trek TV series back in the 1960’s, wherein the half human-half alien (Vulcan) Enterprise First Science Officer Spock (played by Leonard Nimoy) horrifically went apparently berserk and crazy in order to get himself back to his home planet so he could find a mate (to the point of hijacking the starship Enterprise). I think it was the only actual moment of Spock’s life on the series in which he was irrational (in which he behaved as we do – fully human).]

GS1 is to me another way of introducing our religiosity, of asking why we are as a species religious. This question jump-started me on my “long and winding road,” as I called it – a personal Christian religious journey in five titles, written in the order they need to be read: 1) Sorting Out the Apostle Paul [April, 2012], 2) Sorting Out Constantine I the Great and His Momma [Feb., 2015], 3) Sorting Out Jesus [July, 2015], 4) At Last, a Probable Jesus [August, 2015], and 5) Jesus – A Keeper [Sept., 2015]. Universal religiosity (which I take as an interpretation of GS1) is here suggested as being like the universality of the urge to procreate, though not nearly as ancient as GS2. As modern humans emerged and became self-conscious, they had to socially bond into small bands of hunter-gatherers to survive and protect themselves and their children, and part of the glue holding these bands together was not only pair-bonding and its attendant primitive culture, but the development of un-evidenced beliefs – beliefs in gods and god stories – to answer the then unanswerable, like “What is lightning?” and “How will we survive the next attack from predators or the enemy over the next hill?” In other words, our non-veridical faculties in our brain dealt with the “great mysteries” of life and death by making up gods and god stories to provide assurance, unity, fear, and desperation sufficient to make survival of the group more probable. Often the gods took the shape of long-dead ancestors who “appeared” to individuals in dreams (At Last, a Probable Jesus [August, 2015]). Not that there are “religious genes” like there are “procreate genes,” but, rather, our ancestors survived partly because the genes they passed on to us tended to make them cooperative for the good of the group bound by a set of accepted beliefs – gods and god stories; that is, bound by “religion.”

The “lies” part of GS1 has to do with the epistemological toxicity of theology (the intellectual organization of the gods and god stories) – religious beliefs are faith-based, not evidence-based, a theme developed throughout the five parts of my “long and winding road.” On p. 149 of Jerry A. Coyne’s Faith vs. Fact: Why Science and Religion Are Incompatible (ISBN 978-0-670-02653-1), the author characterizes this toxicity as a “metaphysical add-on….a supplement demanded not by evidence but by the emotional needs of the faithful.” No one theology can be shown to be truer than any other theology; all theologies assume things unnecessary and un-evidenced; yet all theologies declare themselves “true.” As my personal journey indicates, all theologies are exposed by this common epistemological toxicity, yet it is an exposé made possible only since the Enlightenment of Western Europe and the development of forensic history in the form of, in the case of Christianity, higher Biblical criticism. This exposé, in my experience, can keep your “joy” from dying because of “lies,” referring back to GS1.

Both GS1 and GS2 demonstrate the incredible influence of the non-veridical capabilities of the human brain. A beloved one can appear on the world display screen, can be perceived, as “the one” in the real world “out there,” and a god or the lesson of a god story can appear on the world display screen, can be perceived, as actually existing or as being actually manifest in the real world “out there.”

Putting GS1 in more direct terms of the perception model represented by Figures 1 and 2, non-veridical self-consciousness desires the comfort of understandable cause and effect as it develops from infancy into adulthood; in our brains we “need” answers — sometimes any answers will do; and the answers do not necessarily have to have veridical verification. Combining the social pressure of the group for conformity and cooperation, for the common survival and well-being of the group, with this individual need for answers, the “mind,” the non-veridical, epiphenomenal companion of our complex brain, creates a personified “cause” of the mysterious and a personified “answerer” to our nagging questions about life and death in general and in particular; we create a god or gods paralleling the created god or gods in the heads of those around us who came before us (if we are not the first of the group to so create). We experience non-veridically the god or gods of our own making through dreams, hallucinations, and other visions, all seen as revelations or visitations; these visions can be as “real” as the real objects “out there” that we sense veridically. (See At Last, a Probable Jesus [August, 2015] for examples of non-veridical visions, including some of my own.) Stories made up about the gods, often created to further explain the mysteries of our existence and of our experiences personally and collectively, combine with the god or gods to form theology. Not all of theology is toxic; but its propensity to become lethally dangerous to those who created it, when it is developed in large populations into what today are called the world’s “great religions,” and fueled by a clergy of some sort into a kind of “mass hysteria” (Crusades, jihads, ethnic “cleansings,” etc.), makes practicing theology analogous to playing with fire. As I pointed out in Jesus – A Keeper [Sept., 2015], epistemologically toxic theology is dangerously flawed. Just as we have veridically created the potential of destroying ourselves by learning how to make nuclear weapons of mass destruction, we have non-veridically created reasons for one group to try and kill off another group by learning how to make theologies of mass destruction; these theologies are based upon the “authority” of the gods we have non-veridically created and non-veridically “interpreted” or “listened to.” It is good to remember Voltaire’s words, or a paraphrase thereof: “Those who can make you believe absurdities can make you commit atrocities.”

Also remember, the condemnation of toxic theology is not the condemnation of the non-veridical; a balance of the veridical flux and the non-veridical flux was absolutely necessary in the past and absolutely necessary today for our survival as individuals, and, therefore, as a species. Toxic theology, like fantasy, is the non-veridical focused upon the non-veridical – the imagination spawning even more images without checking with the veridical from the “real world out there.” Without reference to the veridical, the non-veridical has little or no accountability toward being reliable and “true.” All forms of theology, including the toxic kind, and all forms of fantasy, therefore, have no accountability toward reality “out there” outside our brains. Harmony with the universe of which we are a part is possible only when the non-veridical focuses upon referencing the veridical, referencing the information coming through our senses from the world “out there.” This is the definition of “balance” of the two fluxes to our world display screens in our heads.

Comparing this balanced flux concept with the unbalanced one dominated by the non-veridical (remember the unbalanced flux dominated by the veridical is brain overload leading to some form of insanity), it is easy to see why biologist Richard Dawkins sees religiosity as a kind of mental disease spread like a mental virus through the social pressures of one’s sacred setting and through evangelism. Immersing one’s non-veridical efforts into theology is in my opinion this model’s way of defining Dawkins’ “religiosity.” In the sense that such immersion can often lead to toxic theology, it is easy to see the mind “sickened” by the non-veridical toxins. Whether Dawkins describes it as a mental disease, or I as an imbalance of flux dominated by the non-veridical, religiosity or toxic theology is bad for our species, and, if the ethical is defined as that which is good for our species, then toxic theology is unethical, or, even, evil.

To say that the gods and god stories, which certainly include the Judeo-Christian God and the Islamic Allah, are all imaginative, non-veridical products of the human mind/brain is not necessarily atheistic in meaning, although I can understand that many a reader would respond with “atheist!” Atheism, as developed originally in ancient Greece and further developed after the European Enlightenment in both Europe and America, can be seen as still another form of theology, though a godless one, potentially as toxic as any other toxic theology. Atheism pushing no god or gods can be as fundamentalist as any religion pushing a god or gods, complete with its dogma without evidence, creeds without justification, evangelism without consideration of the evangelized, and intolerance of those who disagree; atheism can be but another religion. Atheism in the United States has in my opinion been particularly guilty in this regard. Therefore, I prefer to describe the conclusions about religion spawned by this perception model as some form of agnosticism; non-veridical products of the brain’s imagination might be at their origin religious-like (lacking in veridical evidence, or dream-like, or revelatory, or hallucinatory) but should never be seen as credible (called epistemologically “true”) and worthy of one’s faith, belief, and tastes until they are “weighed” against the veridical information coming into the world display screen; and when they can be seen by the individual as credible, then I would ask why call them “religious” at all, rather than “objective,” “scientific,” “moral,” “good,” or “common sense.” I suggest this because of the horrendous toxicity with which religion in general and religions in particular are historically shackled.

We do not have to yield to the death of GS1 (When the truth is found to be lies, and all the joy within you dies!); GS2 (Love is all you need, to quote the Beatles instead of Grace Slick) can prevent that, even if our irrational love is not returned. In other words, we do not need the gods and god stories; what we need is the Golden Rule (Jesus – A Keeper [Sept., 2015]). This is my non-veridical “take” on the incredible non-veridical capabilities encapsulated in GS1 and GS2.

Western culture has historically entangled theology and ethics (there is no better case in point than the fact that about half of the Ten Commandments have to do with God and the other half with our relationships with each other). This entanglement makes the condemnation of theology suggested by this perception model of human ontology an uncomfortable consideration for many. Disentanglement would relieve this mental discomfort. Christianity is a good example of entangled theology and ethics, and I have suggested in Jesus – A Keeper [Sept., 2015] how to disentangle the two and avoid the “dark side” of Christian theology and theology in general.

Ethics, centered around the Golden Rule, or the Principle of Reciprocity, is clearly a product of non-veridical activity, but ethics, unlike theology and fantasy, is balanced with the veridical, in that our ethical behavior is measured through veridical feedback from others like us "out there." We became ethical beings much as we became religious beings – by responding to human needs. Coyne's book Faith Versus Fact: Why Science and Religion Are Incompatible points out that in addition to our genetic tendency (our "nature") to behave altruistically, recognize taboos, favor our kin, condemn forms of violence like murder and rape, favor the Golden Rule, and develop the idea of fairness, we have culturally developed (our "nurture") moral values such as group loyalty, bravery, respect, recognition of property rights, and other moral sentiments we define as "recognizing right from wrong." Other values culturally developed and often not considered "moral" but considered at least "good" are friendship and a sense of humor, both of which also seem present in other mammalian species, suggesting they are more genetic (nature) than cultural (nurture). Other cultural values (mentioned, in fact, in the letters of the "Apostle" Paul) are faith, hope, and charity, but none of these three need have anything to do with the gods and god stories, as Paul would have us believe. Still others are love of learning, generosity (individual charity), philanthropy (social charity), artistic expression in an ever-increasing number of forms, long childhoods filled with play, volunteerism, respect for others, loyalty, trust, research, individual work ethic, individual responsibility, and courtesy. The reader can doubtless add to this list.

Behaving as suggested by these ideas and values (non-veridical products) produces veridical feedback from those around us that renders these ideas accountable and measurable (it is good to do X, or it is bad to do X). What is good and what is bad are veridically verified, so that moral consensus in most of the groups of our species evolves into rules, laws, and sophisticated jurisprudence (e.g. the Code of Hammurabi and the latter half of the Ten Commandments). The group becomes a society that is stable, self-protecting, self-propagating, and a responsible steward of the environment upon which the existence of the group depends; the group has used its nature to nurture a human ethical set of rules that answers the call of our genes and grows beyond this call through cultural evolution. The irony of this scenario of the origin of ethics is that humans non-veridically mixed in gods and god stories (perhaps necessarily, to get people to respond out of fear and respect for authority for survival's sake), and thereby risked infection of human ethics by toxic theology. Today there is no need of such mixing; in fact, the future of human culture may well hinge upon our ability to separate, once and for all, ethics from theology.

A final example of applying the perception model illustrated by Figures 1 and 2 for this writing is the definition of mathematics. Mathematics is clearly a non-veridical, imaginative product of the human brain/mind; this is why all the equations in Figure 2 need a "dashed" version in addition to the "solid," as I was able to do for single numbers like "8." But why is math the language of science? Why is something so imaginative so empirically veridical? In other words, why does math describe how the world works, or, why does the world behave mathematically?

Math is the quintessential example of non-veridical ideas rigidly fixed by logic and consistent patterns; math cannot deviate from its own set of rules. What "fixes" the rules is their applicability to the veridical data bombarding the world display screen from the "real" world "out there." If math did not have its utility in the real world (from counting livestock at the end of the day to predicting how the next generation of computers can be designed), it would be a silly game lodged within the memory loops of the brain only. But the brain is part of the star-stuff contemplating all the other star-stuff, including itself; it makes cosmological "sense" that star-stuff can communicate with itself, and the language of that communication is math. Mathematics is an evolutionary product of the evolutionary complexity of the human brain; it is the ultimate non-veridical focus upon the veridical. Mathematics is the "poster child" of the balance of the two fluxes upon the world display screen of every human brain/mind. No wonder the philosopher Spinoza is said to have had a "religious, emotional" experience gazing at a mathematical equation on paper! No wonder we should teach little children numbers at least as early as (or earlier than) we teach them the alphabet of their native culture!
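To make this balance concrete with a worked example of my own (an illustration only, not one of the equations in Figure 2), consider the familiar textbook formula for a falling stone. The formula is a purely non-veridical construction of the mind, yet it is "fixed" by the veridical measurement it predicts:

\[
d = \tfrac{1}{2} g t^{2}, \qquad d \approx \tfrac{1}{2}\,(9.8\ \mathrm{m/s^{2}})\,(2\ \mathrm{s})^{2} \approx 19.6\ \mathrm{m}
\]

The stone neither knows nor uses the equation; the agreement between the imagined 19.6 meters and the drop measured "out there" is precisely the balance of the two fluxes described above.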

Further applications of the perception model suggest themselves. Politics, economics, education, and early individual human development are but four.

I understand the philosophical problem that a theory which explains everything might very well explain nothing. But this perception model is an ontological theory, which necessarily must explain some form of existence, which, in turn, entails "everything." I think the problem is avoided by imagining some aspect of human nature and culture the model cannot explain. For instance, my simplistic explanation of insanity as a flux imbalance may be woefully inadequate for those who study extreme forms of human psychosis. Artists who see their imaginations as more veridically driven than I have suggested might find the model in need of much "tuning," if not abandonment. I have found the model personally useful in piecing together basic, separate parts of human experience into a much more coherent and logically unified puzzle. To find a harmony between the objective and the subjective of human existence is to me very objective (intellectually satisfying) and subjective (simultaneously comforting and exciting). The problem of explaining nothing is non-existent if other such harmonies can be conjured by others. Part of my mental comfort comes from developing an appreciation for, rather than a frustration with, the "subjective trap," the idea introduced at the beginning.

RJH