B.I.T. (Beta Test) Part 1: Memory

This is an excerpt from an attempt to expand an idea I started with this short story.  I’d love as much feedback as possible.  Enjoy!

I think I was part of the last generation that didn’t grow up with BIT. I mean, it was around, but nobody really understood how to use it. It wasn’t everywhere like it is now. I’ve seen kids with chips who couldn’t have been older than 10. That’s just crazy. You don’t even know how to use your brain yet. But I guess it’s just progress. I mean, I’m glad I have my chip. I think that BIT is probably mankind’s greatest achievement, but I’m glad I had my brain to myself for as long as I did.  Take nostalgia, for example.  With post-chip memory it’ll be a thing of the past.  All those little annoyances of an experience that fade away as you get older, they’ll be there in vivid detail, so no nostalgia bias.

See, one of the first mainstream chip features was memory upgrades.  Problem was, everyone did them differently.  The first ones all tried basic SDB, sensory data backup, which is basically just recording the standard sense-data to the cloud.  But then to recall the memory, you had to redirect the neural pathway from the hippocampal formation to the memory database in the cloud, which everyone soon found out was a bad idea.  With kids it was OK, their brains are still pretty plastic and open to rewiring.  But in adults, this is one of the strongest neural pathways in the brain.  So strong it would actually override the chip’s rewiring, which consumers saw as paying for shit that didn’t work.

So then they tried storing the sense-data in the hippocampus, but that got tricky too, because the brain already stores the memories itself, so they had to find a way to attach the sense-data to the memory the brain created.  Problem is, the way memory works, the brain only really stores the information from a memory that it feels is important.  Over time, the more you recall the event, the information you actually use from it each time is what the brain reinforces.  So if you don’t use all the sense-data stored for the memory – and who does? – the brain naturally dismisses it.  Which, again, just comes off as faulty programming.

So then someone said, ‘Well, why are we trying to store information where the brain stores memories?  Why don’t we just store it where the brain naturally stores information?’  Which, yeah, is obvious once you hear it, but no one had thought of it yet.  So what we did was send the sense-data to the parahippocampal cortices, which is where the brain stores semantic memory (facts, information, data).  Then all it took was strengthening the neural connection that already existed between the parahippocampal cortices, where the information now sits, and the hippocampus, where the memory itself is stored.  Just fire a few hundred neurons through it at startup and the brain automatically reinforces it.  From there it stays strong on its own, because it’s actually useful information to have when recalling memories.

So what resulted was two different sets of memories, pre-chip and post-chip, though technically it should be pre- and post-SDB, but no one says that.  The pre-chip memories are still vague and fuzzy, the natural way, but the post-chip ones are fully detailed, vivid.  You can almost relive the moment, in a weird way.  What’s really weird is when you recall your pre-chip memories, the recollection gets stored by the SDB processors, so you have a post-chip memory of recalling a pre-chip memory.  This really made people realize how faulty pre-chip memory was, because they could look at every time they recalled a memory and see that it changed each time.  The post-chip hippocampus tries to assimilate all of these different details into one memory, but it can’t.  They’re too different, and they contradict each other.

Anyway, the younger you get the chip, the fewer of those memories you have.  And kids are getting the chip younger and younger.  Pretty soon they’ll just be putting it in babies, and no one will even remember what natural memories were like.  I’m just glad I was born when I was.  I like my pre-chip memories.  Somehow they feel more real, even though they’re actually less accurate.  I think there’s something kind of special about that process.  Maybe it’s just the thought of it going extinct that makes it feel that way, but I don’t know.  Maybe we’re supposed to forget certain things.  Maybe the past is supposed to look different every time you remember it.  I mean, our brains could have adapted to keep memories exactly as they happened, but they didn’t.  I’m not saying there’s a reason, like intelligent design or anything like that, I’m just saying the brain saw some reason to do it this way.  But I guess we know better than our brains now, don’t we?


Where Is My Mind?


As scientific research expands its reach and grasp exponentially, the role of the philosopher has become somewhat marginalized.  In ancient times, it seemed almost a prerequisite for scientists to also take part in philosophy, hence greats like Aristotle and Pythagoras.  But now, as science becomes much more complicated and all-enveloping, the scientists of today hardly have time to sit back and process the information they are discovering.  While the scientists spend long nights crunching numbers, it has become the role of the philosopher to put the information science discovers into context for the laymen, those of us unwilling or unable to do the number-crunching.

One long-standing problem of philosophy is that of consciousness.  Since the dawn of philosophy, thinkers have tried to find the right place to put consciousness in our logical picture of the world, and have had nothing but trouble doing so.  The majority of our logical reasoning is about the material world, which appears to behave more or less by logical principles.  But when it comes to placing consciousness, philosophers have more often than not steered away from materialism and placed consciousness in the realm of the metaphysical.  And as logical people have continuously done away with the metaphysical, we have tried harder and harder to pull consciousness out of that realm and into our logical picture of the world, still to no avail.  The philosopher most often cited on these matters is Descartes, who championed the concept of dualism.  Cartesian dualism asserts that the only thing we can know exists is our own consciousness, yet that consciousness cannot be said to exist in the physical world.  So we are left with both the empirical view that nothing but our consciousness exists and the materialist view that our consciousness doesn’t exist.  But dualism is a hard pill to swallow for many.  It flies in the face of our need for everything to fit into a logical picture.  This has caused many people to dismiss consciousness as a by-product of brain function, the end result of data analysis.

Enter neuroscience: a complex and quickly-growing field that attempts to map the events that occur in our brains under certain circumstances.  The more we map out the processes of the brain, the more advocates of a metaphysical mind have had to strip down the definition of consciousness.  Memories, emotions, and even some abstract thinking have now fallen under the category of what can be explained through materialistic neuroscience, leading advocates of a physical consciousness to theorize that one day all of consciousness will be defined by physical processes of the brain.  This becomes the fulcrum of the debate: the materialists claiming that just because we haven’t found a physical explanation for consciousness yet doesn’t mean one doesn’t exist, and the metaphysicalists, for lack of a better word, claiming that the true definition of consciousness evades physical science.

One contemporary philosopher who has championed this debate is David Chalmers.  Chalmers has done a fantastic job of defining where we can draw the line between physical and metaphysical consciousness.  He has dubbed these two categories the ‘easy problem’ and the ‘hard problem’ of consciousness.  According to Chalmers, the easy problem of consciousness covers the entire process of data analysis, while the hard problem has to do with subjective experience.  While materialists claim that subjective experience is the end result of data analysis, Chalmers believes there is a fundamental difference.  This difference is something he calls ‘qualia,’ the subjective qualities of sense-data.  For example, as your eyes take in a certain wavelength of light, your brain processes that wavelength (perhaps incorrectly, as discussed in the Limits of Language post) as the color red.  But the physical data of the wavelength has no correlation to your definition of ‘red’ in your mind.  One simple thought experiment for grasping this concept is to imagine a person whose color spectrum is somehow inverted.  This person would see red as violet, and vice versa, and, like the negative of a photograph, all the other colors would follow suit.  This person would grow up learning to call what we define as red ‘violet,’ and so on.  There would be no way to tell that this person’s color spectrum is inverted, because there is no way to observe his subjective experience.

While Chalmers has given us a terrific vocabulary for this debate, I think there is an easier way to understand the difference between our brains and our minds, and that is the struggle between the two.  We humans have eternally waged a battle with our brains.  We know full well that our brains can play tricks on us.  Our data-analysis processes can lead us to false information, yet we can be fully aware of it.  For example, when we watch a magician or look at an optical illusion, we are willingly participating in a presentation of the fallacy of our minds.  We are fully conscious of the fact that our data analysis is feeding us false information.  We’ve entered into a reciprocal process of data analysis where we let our sense-data deceive us, yet use the knowledge of that deception to put the illusion into its proper context, so that we don’t think the magician is some kind of demon.  This knowledge of the fallacies of our brain functions permeates the rest of our lives as well.  As psychological theories have entered the common vocabulary, the contemporary person may be well aware of his or her own psychological idiosyncrasies and behave accordingly.  The common phrase ‘the first step to recovery is admitting you have a problem’ is a perfect example.  The knowledge of a fallacy of the mind, and the definition of it as such, allows our consciousness to take that knowledge into account when making a decision, and to choose whether to act on that fallacy or rise above it.  For example, if we recognize the times we let our emotions govern our decisions, then the next time it happens we can choose to ignore our emotions and govern ourselves by our reasoning.

Now, none of this unequivocally proves the metaphysical mind, but it is a rather interesting notion that we can be ‘conscious’ of the fallacies of our data-analysis processes.  The question raised here is whether this is simply another level of data analysis, or whether the knowledge of these fallacies is evidence of a transcendent consciousness.