B.I.T. (Beta Test) Part 1: Memory

This is an excerpt from an attempt to expand an idea I started with this short story.  I’d love as much feedback as possible.  Enjoy!

I think I was part of the last generation that didn’t grow up with BIT. I mean, it was around, but nobody really understood how to use it. It wasn’t everywhere like it is now. I’ve seen chipped kids who couldn’t have been older than ten. That’s just crazy. You don’t even know how to use your brain yet. But I guess it’s just progress. I mean, I’m glad I have my chip. I think BIT is probably mankind’s greatest achievement, but I’m glad I had my brain to myself for as long as I did.  Take nostalgia, for example.  With post-chip memory, it’ll be a thing of the past.  All those little annoyances of an experience that fade away as you get older will still be there in vivid detail, so no nostalgia bias.

See, one of the first mainstream chip features was memory upgrades.  Problem was, everyone did them differently.  The first ones all tried basic SDB, sensory data backup, which is basically just recording the standard sense-data to the cloud.  But then to recall the memory, you had to redirect the neural pathway from the hippocampal formation to the memory database in the cloud, which everyone soon found out was a bad idea.  With kids it was okay; their brains are still pretty plastic and open to rewiring.  But in adults, this is one of the strongest neural pathways in the brain.  So strong it would actually override the chip’s rewiring, which consumers saw as paying for shit that didn’t work.  So then they tried storing the sense-data in the hippocampus, but that got tricky too, because the brain already stores the memories itself, so they had to find a way to attach the sense-data to the memory the brain created.  Problem is, the way memory works, the brain only stores the information from a memory that it feels is important.  Over time, whatever information you actually use when you recall the event, that’s what the brain reinforces.  So if you don’t use all the sense-data stored for the memory – and who does? – the brain naturally dismisses it.  Which, again, just comes off as faulty programming.  So then someone said, ‘Well, why are we trying to store information where the brain stores memories? Why don’t we just store it where the brain naturally stores information?’  Which, yeah, is obvious once you know it, but no one had thought of it yet.  So what we did was send the sense-data to the parahippocampal cortices, which is where the brain stores semantic memory (facts, information, data).  Then all it took was strengthening the neural connection that already existed between the parahippocampal cortices, where the information now sits, and the hippocampus itself, where the memory is stored.
Just fire a few hundred neurons through it at startup and the brain automatically reinforces it.  Then it stays strong on its own, because it’s actually useful information to have when recalling memories.

So what resulted was two different sets of memories, pre-chip and post-chip, though technically it should be pre- and post-SDB, but no one says that.  The pre-chip memories are still vague and fuzzy, like natural memories, but the post-chip ones are fully detailed, vivid.  You can almost relive the moment, in a weird way.  What’s really weird is when you recall your pre-chip memories, the recollection gets stored by the SDB processors, so you end up with a post-chip memory of recalling a pre-chip memory.  This really made people realize how faulty pre-chip memory was, because they could look at every time they had recalled a memory and see that it changed each time.  And the post-chip hippocampus tries to assimilate all of these different versions into one memory, but it can’t.  They’re too different, and they contradict each other.

Anyway, the younger you get the chip, the fewer of those memories you have.  And kids are getting the chip younger and younger.  Pretty soon they’ll just be putting it in babies, and no one will even remember what natural memories were like.  I’m just glad I was born when I was.  I like my pre-chip memories.  Somehow they feel more real, even though they’re actually less accurate.  I think there’s something kind of special about that process.  Maybe it’s just the thought of it going extinct that makes it feel that way, but I don’t know.  Maybe we’re supposed to forget certain things.  Maybe the past is supposed to look different every time you remember it.  I mean, our brains could have adapted to keep memories exactly as they happened, but they didn’t.  I’m not saying there’s a reason, like intelligent design or anything like that; I’m just saying the brain saw some reason to do it this way.  But I guess we know better than our brains now, don’t we?

Where Is My Mind?


As scientific research expands its reach and grasp at an exponential rate, the role of the philosopher has become somewhat marginalized.  In ancient times, it seemed almost a prerequisite for scientists to also take part in philosophy, hence greats like Aristotle and Pythagoras.  But now, as science becomes more complicated and all-enveloping, the scientists of today hardly have time to sit back and process the information they are discovering.  As the scientists spend long nights crunching numbers, it has become the role of the philosopher to put the information that science discovers into context for the laymen, those of us unwilling or unable to do the number-crunching.

One long-standing problem of philosophy is that of consciousness.  Since the dawn of philosophy, thinkers have tried to find the right place for consciousness in our logical picture of the world, and have had nothing but trouble doing so.  The majority of our logical reasoning is about the material world, which appears to behave more or less by logical principles.  But when it comes to placing consciousness, philosophers have more often than not steered away from materialism and placed it in the realm of the metaphysical.  And as logical people have continuously done away with the metaphysical, we have tried harder and harder to pull consciousness out of that realm and into our logical picture of the world, still to no avail.  The philosopher most often cited on these matters is Descartes, who championed the concept of dualism.  Cartesian dualism asserts that the only thing we can know exists is our own consciousness, yet that consciousness cannot be located in the physical world.  So we are left with both the empirical view that nothing but our consciousness is certain, and the materialist view that our consciousness doesn’t exist.  But dualism is a hard pill to swallow for many.  It flies in the face of our need for everything to fit into a logical picture.  This has caused many people to dismiss consciousness as a by-product of brain function, the end result of data analysis.

Enter neuroscience: a complex and quickly growing field of biology that attempts to map the events that occur in our brains under given circumstances.  The more we map out the processes of the brain, the more advocates of a metaphysical mind have had to strip down the definition of consciousness.  Things like memory, emotion, and even some abstract thinking now fall under the category of what can be explained through materialistic neuroscience, leading advocates of a physical consciousness to theorize that one day all of consciousness will be defined by physical processes of the brain.  This becomes the fulcrum of the debate: the materialists claiming that just because we haven’t found a physical explanation for consciousness yet doesn’t mean one doesn’t exist, and the metaphysicalists(?) stating that the true nature of consciousness evades physical science.

One contemporary philosopher who has championed this debate is David Chalmers.  Chalmers has done a fantastic job of defining where we can draw the line between physical and metaphysical consciousness.  He has dubbed these two categories the ‘easy problem’ and the ‘hard problem’ of consciousness.  According to Chalmers, the easy problem of consciousness covers the entire process of data analysis, while the hard problem has to do with subjective experience.  While materialists claim that subjective experience is the end result of data analysis, Chalmers believes there is a fundamental difference.  This difference is something he calls ‘qualia.’  Qualia are the subjective experiences of sense-data.  For example, as your eyes take in a certain wavelength of light, your brain processes that wavelength (perhaps incorrectly, as discussed in the Limits of Language post) as the color red.  But the physical data of the wavelength has no inherent correlation to your definition of ‘red’ in your mind.  One simple thought experiment that helps grasp this concept is to imagine a person whose color spectrum is somehow inverted.  This person would see red as violet, and vice versa, and, like the negative of a photograph, all the other colors would follow suit.  This person would grow up learning to call what we define as red ‘violet,’ and so on.  There would be no way to tell that this person’s color spectrum is inverted, because there is no way to observe his subjective experience.

While Chalmers has given us a terrific vocabulary for this debate, I think there is an easier way to understand the difference between our brains and our minds, and that is the struggle between the two.  We humans have always waged a battle with our brains.  We know full well that our brains can play tricks on us.  Our data-analysis processes can lead us to false information, and we can be fully aware of it.  For example, when we watch a magician or look at an optical illusion, we are willingly participating in a presentation of the fallibility of our minds.  We are fully conscious of the fact that our data analysis is feeding us false information.  We’ve entered into a reciprocal process of data analysis where we let our sense-data deceive us, yet use our knowledge of that deception to put the illusion into its proper context, so that we don’t think the magician is some kind of demon.  This knowledge of the fallibility of our brain functions permeates the rest of our lives as well.  As psychological theories have entered the common vocabulary, the contemporary person may be well aware of his or her own psychological idiosyncrasies, and behave accordingly.  The common phrase ‘the first step to recovery is admitting you have a problem’ is a perfect example.  Recognizing a failing of the mind, and defining it as such, allows our consciousness to take that knowledge into account when making a decision, and to choose whether to act on that failing or rise above it.  For example, if we recognize the times we let our emotions govern our decisions, then the next time it happens we can choose to ignore our emotions and govern ourselves by our reasoning.

Now none of this unequivocally proves the metaphysical mind, but it is a rather interesting notion that we can be ‘conscious’ of the fallibility of our data-analysis processes.  The question this raises is whether this is simply another level of data analysis, or whether knowledge of these failings is evidence of a transcendent consciousness.

The Limits of Language

“The limits of my language are the limits of my world” – Ludwig Wittgenstein

Remember when you were learning the colors as a child? Of course you don’t; you were much too young. But imagine trying to teach a child colors. What would you do? You would probably make a little flashcard for each of the seven colors of the rainbow and show them to the child over and over again. And that would probably work. But take a look at the color spectrum below.

Can you tell me where the red ends and the orange begins? Drawing a definite line on that spectrum becomes pretty difficult. And there are literally an infinite number of wavelengths along that spectrum that we whittle down into seven categories. So why do we do this?

A recent study in developmental psychology shows that infants are more sensitive to subtle changes in hue than full-grown adults. This is because they have not yet been taught to sort colors into the seven categories we have words for. More specifically, infants process colors using the right hemisphere of the brain, the one responsible for creativity and imagination, while adults process them using the left, the hemisphere more concerned with analyzing language and data. This means that learning the names of colors actually re-routes our perception of them to the other side of the brain, and causes us to become effectively “color-blind” to subtle differences in hue that infants can perceive much more easily.

This phenomenon is called linguistic relativity, and to fully understand it, let’s try another thought experiment. Imagine that instead of a human baby, you were dealing with a computer baby. This baby thinks like a computer; it retains information instantly and processes data quantitatively as opposed to qualitatively. How would you teach it the colors? You would have to draw lines on that spectrum and define each color as every wavelength between two of those lines. But that’s not how we want our computers to process colors. We want them to be able to access every wavelength along the spectrum separately, so we give them a system of numbers that correspond to certain amounts of each primary color. But if this is a more accurate way to process color data, why do we humans use names like ‘fuchsia’ or ‘magenta’ to signify slight changes in hue along the spectrum? Why do we use qualitative language to define a quantitative world?
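The “computer baby” thought experiment can be sketched in a few lines of code. This is just an illustration of the two representations described above; the cutoff wavelengths are rough approximations I’ve chosen, not standard values.

```python
# Two ways to represent color, per the "computer baby" thought experiment.

# The human way: draw lines on the spectrum and give each band a name.
# These cutoff wavelengths (in nanometers) are rough approximations.
COLOR_BANDS = [
    (380, "violet"), (425, "indigo"), (450, "blue"), (495, "green"),
    (570, "yellow"), (590, "orange"), (620, "red"), (750, None),
]

def name_color(wavelength_nm):
    """Whittle an infinite spectrum down to seven words."""
    for (lo, name), (hi, _) in zip(COLOR_BANDS, COLOR_BANDS[1:]):
        if lo <= wavelength_nm < hi:
            return name
    return None  # outside the visible range

# The computer way: keep every value distinct as numbers, e.g. an RGB
# triple with one number per primary-color channel.
pure_red = (255, 0, 0)
slightly_different_red = (254, 0, 0)  # distinct to a computer, "red" to us

print(name_color(600))                     # → orange
print(pure_red == slightly_different_red)  # → False
```

Every wavelength from 590 to 620 nm collapses into the single word ‘orange,’ while the two RGB triples stay forever distinct: that’s the whole difference between the qualitative and quantitative schemes.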

The answer is that we do not think quantitatively. To even understand a numerical system, we have to assign characters to each quantity, the same way we assign characters to represent the sounds our mouths make, which in turn represent objects around us or even abstract concepts. We need a language in order to understand anything.

So how did this come about? The obvious assumption is that language develops naturally out of a necessity to communicate with each other. Many animals make different sounds that mean different things in order to communicate, but can we really consider this ‘language’ in the same way we use ours? These animals cannot use their language to discuss abstract thoughts or work out solutions to problems, and I think it’s safe to assume that their simple language does not affect their perception the way ours does. So if language came out of a necessity to describe the world around us, at what point did it start informing our perception of that world? The change from how we process colors as infants to how we process them as adults may be a good indication.

Our brain has a fantastic way of sending signals so that they cross as much of the brain as possible. The visual data you take in through your eyes gets processed at the very back of your brain, and the signals actually criss-cross, with your left visual field being processed by the right hemisphere and vice versa. This, along with the shift of color processing from one side of the brain to the other as you develop, is a good example of how the brain works. It likes to send information to all corners of the brain, so that those separate parts can all collaborate on processing the information and informing your perception of it. Thus we think using associations. When we see something, we associate it with other things we’ve seen, heard, felt, learned, or experienced. This is why symbols and language are the basis of our understanding of the world.

But why do our brains work like this? Wouldn’t we be more efficient and productive if we processed information the way a computer does? Why would we evolve in a way that is counter-productive to our survival? It may be that there is something more essential to our survival in drawing associations between vastly different things than in simply processing the data in front of us logically. If we start to look at how we perceive the world, the limits of our perception become clearer and clearer. Take the visual spectrum as an example again. Humans see the spectrum as starting with red and ending with violet. These are the lower and upper limits of what we interpret as visible light. However, the spectrum continues far past those limits. We know that infrared and ultraviolet light can be seen by animals such as insects and snakes (and Graboids), but we cannot see those wavelengths ourselves without special instruments.

Our bodies interpret infrared light as heat, and ultraviolet light actually damages our eyes. But the spectrum goes on from there. We use microwaves to heat our food and radio waves to send signals across vast distances. In the other direction, we use X-rays to see through our own skin, and gamma rays can mutate our bodies in disgusting ways. So we interact with different wavelengths of the same energy in vastly different ways. Thus, the data we receive from the outside world is always perceived qualitatively, i.e. as different effects on our bodies, though in a purely quantitative sense they are simply numbers on a scale. So it would be counter-intuitive for us to simply perceive the world as quantitative data.
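The point that one quantitative scale carries many qualitative experiences can be sketched as a simple lookup. The band boundaries below are my own order-of-magnitude approximations, not standard definitions:

```python
# One continuous quantitative scale (wavelength in meters), many
# qualitative experiences.  Band boundaries are rough, order-of-magnitude
# approximations chosen for illustration.
EM_BANDS = [  # (upper wavelength bound in meters, how we interact with it)
    (1e-11, "gamma rays: mutate our cells"),
    (1e-8,  "X-rays: see through skin"),
    (4e-7,  "ultraviolet: damages our eyes"),
    (7e-7,  "visible light: the colors we see"),
    (1e-3,  "infrared: felt as heat"),
    (1e-1,  "microwaves: heat our food"),
    (1e5,   "radio waves: carry signals across distances"),
]

def experience(wavelength_m):
    """Map a point on the quantitative scale to its qualitative effect."""
    for upper, description in EM_BANDS:
        if wavelength_m < upper:
            return description
    return "beyond radio"

print(experience(5.5e-7))  # green light → "visible light: the colors we see"
print(experience(3.0e-2))  # ~3 cm → "microwaves: heat our food"
```

To the lookup table, heat, color, and sunburn are neighbors on one number line; it’s only our bodies that split the line into such wildly different experiences.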

And so we have invented a new language called science, by which we attempt to understand the quantitative nature of the world.  Science allows us to broaden our perspective, because it breaks the barrier between what can be put into words and what can’t.  Those aspects of our universe that can’t be described with words are put into numbers and formulas that allow us to interpret data in a way we can understand.  There is a misconception that science is the ‘language of the universe,’ but this is not so.  The universe does not come with an inherent way for us to understand it; science is merely our attempt to build one.  If science were the language of the universe, we wouldn’t encounter the discrepancies we do when we try to merge our scientific laws together.  Einstein was convinced, as many others still are, that one day we will find a Unified Field Theory, essentially one scientific theory that successfully describes all of the fundamental forces and elementary particles.  However, the more we search for this theory, the more we find that scientific fields do not merge easily.  Physics works differently at the astronomical level than it does at our level, and differently again at the subatomic level.  As we delved deeper into quantum mechanics, it became more and more obvious that our scientific laws don’t apply at every level of the universe.  Subatomic particles are known to pop in and out of existence, exist in two different places at one time, and behave as both waves and particles.  This has led some theorists to believe that there is no unified field theory, that each field of science only applies within its own domain.  This idea makes sense if we think of science as a language.  Just as you couldn’t expect to speak English and have someone who only speaks Spanish understand you, you can’t expect to apply quantum-level science to everyday life.  The two languages are not only different, they are mutually exclusive.
So the more we develop our scientific language, the more we will understand of the universe, but we will never be able to fully comprehend its vast intricacies.

So if our understanding of the world is based on our language, how can we begin to understand the inexplicable, that which cannot be put into words? Wittgenstein (this blog’s honoree) had a simple answer for this. He said, “Whereof one cannot speak, thereof one must remain silent.” Meaning quite simply that if you can’t logically talk about it, don’t. Any attempt to do so is an exercise in futility. This may seem like an easy way to brush off the question, but if we put it into context with the rest of Wittgenstein’s life, it may give us some more insight into what he meant. Wittgenstein devoted his life to logic and analytical philosophy. He was convinced that logic by its nature could solve all problems of philosophy. Wittgenstein’s biggest achievement was his Tractatus Logico-Philosophicus, which he wrote in the trenches of the First World War. The Tractatus is a short, enigmatic puzzle box of a book that reads more like an instruction manual than a book on philosophy. His philosophical statements are put simply, with no explanation or examples of what they mean, because frankly Wittgenstein didn’t give a shit whether anyone understood it, even his best friend and mentor Bertrand Russell. He starts off by saying that, “The world is the totality of facts, not of things.” With no examples, allegories, or any other helpful tools to decipher the meaning of this, we must simply read on and hope that it will all make sense soon. He goes on to break the world down into what is and what is not “the case,” meaning the facts that make up the world are either true or not true. With these two ideas put together, we can start to see how Wittgenstein saw the world.
Instead of seeing a red ball on a table and saying, “There is a red ball on a table,” Wittgenstein would have us say, “The fact that a ball exists is true, the fact that a table exists is true, the fact that the ball is red is true, and the fact that the red ball is on the table is true.” This view of the world as being made up of facts instead of objects, once extrapolated, is a beautiful way of merging the world of abstractions and the material world into one world of logical thoughts that is entirely dependent on our thinking them and putting them into words. And yet when we come to the end of the Tractatus, Wittgenstein starts to contradict himself. He says, “The correct method in philosophy would really be the following: to say nothing except what can be said, i.e. propositions of natural science—i.e. something that has nothing to do with philosophy.” This almost seems like a self-abasing joke at the end of his masterpiece. After a long discussion about philosophy, he comes to the conclusion that the only things that can be talked about logically are things that have nothing to do with philosophy at all. He confirms this with his statement, “My propositions are elucidatory in this way: he who understands me finally recognizes them as senseless.”  This new view of philosophy as being ultimately futile came to characterize his later work.  He described all of philosophy as mere “language games” which, when played, may help us understand the world, but in and of themselves are meaningless.
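Wittgenstein’s picture of a world made of facts rather than objects can even be toyed with in code. This is a hypothetical sketch of my own, not anything from the Tractatus: the world is modeled as nothing but a set of propositions that are the case.

```python
# "The world is the totality of facts, not of things": model the world
# not as objects, but as the set of propositions that are the case.
world = {
    "a ball exists",
    "a table exists",
    "the ball is red",
    "the ball is on the table",
}

def is_the_case(proposition):
    """Every proposition is either the case or not the case."""
    return proposition in world

print(is_the_case("the ball is red"))     # → True
print(is_the_case("the ball is violet"))  # → False
```

Notice that there is no ball object anywhere in this model, only statements that are or are not the case, which is exactly the red-ball-on-a-table example rephrased.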

And so it is the task of a person seeking truth not to let language limit his perception, but rather to enhance it.  This means fully understanding the scope and purpose of language, but also realizing its limits and its effect on our comprehension of the world around us.  Let us never forget that things are not limited by their definitions, but rather that our perception of them is; and that while we may never fully understand the universe, we may better connect to it by dismissing our definitions of it.