Further discoveries on the biological use of quantum computing

September 13, 2014

Just a little more research turned up the following as well:

Nanotubular structures similar to post-synaptic neurons (persistent and dynamic) have been discovered in human immune T-cells as well as in plant cells. Researchers are calling them plant synapses and T-cell or immune synapses.
How does the body recognize the thousands, even hundreds of thousands, of viruses it must be able to attack?  How does it decide what to do in response to an attack?  Not a trivial activity.   It would be consistent with this theory that these junctions are a site for memory and decision making using quantum computing, similar to brain neurons.
As a programmer I have always been amazed at how much programming would be required to “operate” a human body.  The human body is not just like a robot.  A robot is programmed to do things only if everything goes well.  If something fails on a robot, it has no mechanism to repair itself, to discover environments that are dangerous, or to learn to adapt to its environment.   A computer may be book smart but it is not able to deal with any physical attack on itself.
A human body has to persist to be able to pass on genetic material.  It has to be able to sustain all kinds of environmental conditions, all kinds of attacks, damage, different situations which lead to too much of this or too little of that, situations which need different responses.  As a programmer I have imagined the equivalent millions or billions of lines of code, the “intelligence,” that would be needed for the monitoring system, the recognition, the response system.  If you understand software and how complex it is to specify all these behaviors, it is mind-boggling.
It is much worse than that, because this code must be changeable.  It’s not as if you could discover the rules to keeping a body working, how to repair this or that, write the program, and be done.  No, you have to have a system that not only learns or knows the millions of scenarios and how to respond but also has to learn new ones as they come up.  Evolution by itself seems too clumsy.     It’s not surprising we would find pattern recognition and decision matrices like those at post-synaptic neurons in other parts of the body, such as the immune system.
Life has been present on the earth somewhere in the range of a billion years, within an order of magnitude.  The average lifetime of a creature could be a year, but to give evolution a chance let’s say that evolution could act every day.   That’s roughly 300 billion opportunities for evolution to make a decision and kill something or not.   Of course, for evolution to work there has to be more than one member of a species killed, there has to be new progeny, and that progeny has to face the threats and survive.  If nature takes these decisions very carefully, that gives nature maybe a few billion decision points.   This may seem like a lot, but consider the complexity being proposed for evolution to figure out.  I am not at all a creationist, but let’s face it: numerically it doesn’t seem reasonable that the complexity of life we see around us could evolve in this number of cycles.   There were periods in this billion years when vast amounts of life were wiped out and evolution wasn’t operating very rapidly.    It’s just not possible that everything is explained as evolution.   A quantum recognition system would make a much more rapid evolution possible.  I am not sure how a pattern recognition system could be linked with DNA and evolution, but I am certain there must be some synergy between them that enables a faster pace of evolution than can be obtained from the mechanism of killing off or enhancing reproduction on a binary basis.
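As a back-of-the-envelope check on that decision budget (a sketch only, using the round numbers from the paragraph above):

    # Evolution's "decision budget": ~1 billion years of life, at most
    # one selection event per day (the generous assumption made above).
    years_of_life = 1e9
    days_per_year = 365
    decision_points = years_of_life * days_per_year
    print(f"{decision_points:.2e}")  # ~3.7e+11, i.e. a few hundred billion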
It’s obvious to me that studying these nanotube structures in plants, animals, neurons, and immune systems is a very important line of research.  Understanding exactly how nature figured out how to build nano-quantum computers, to store patterns, to recognize patterns, and to decide how to direct activity based on decoherence will take our understanding of nature and the world around us leaps and bounds forward.  It will also give us an idea of how to build and advance quantum computing at a fantastic rate, much faster than our clumsy approach of trying to understand basic physics and building them from first principles.  Nature has obviously figured out how to build these cheaply and to leverage quantum computing at the billions-of-processors level.
In the book Hyperion, the author had the idea that the computers used humans as computing vessels to offload some of their processing.  The idea seemed stupid when I read it.  Now I see that it was prescient.  If the human body/brain has trillions of quantum computers, leveraging them would be a quick and easy way to get lots of computation done.  However, presumably doing so would damage the normal function of those computers.  It’s not just a matter of stealing cycles from the mainframe.  Utilizing those synaptic junctions would also steal them permanently from the human body.
This leads to a different idea for a scifi book:
What if someone started leveraging human brains, i.e. obtained from babies or otherwise grown, to build massive quantum computers for nefarious purposes?
That seems like it could be an interesting basis for a compelling book.  :)

The human brain is undoubtedly a billion quantum computers

September 13, 2014

http://www.sciencedirect.com/science/article/pii/S1571064513001188

Roger Penrose said more than 10 years ago that this was how it worked.  I had spoken and thought about this before I knew Penrose had thought of it, but Penrose obviously put meat on the bone.
It’s been obvious to me for some time that the brain is a quantum computer.  For me the reasons are the following:
1) After 30+ years of trying to build learning machines we are basically still not even at the chicken level of intelligence or learning.
2) After 40-50 years we have not identified what brain EEG patterns represent, how the brain stores memories, how it pattern-matches against memories, or how it learns things, beyond the gross idea that neuron firings somehow seem to be potentiated at synaptic junctions.
3) It is clear that even with billions of neurons there is no clear way the brain could store the vast amount of sensory information, or even a summarized conceptualization of all the information it gets, in the neuron system that has been proposed.   It just doesn’t seem quantifiably possible.
We have no idea how memories are stored in the brain.  I mean the actual memories or concepts or whatever.  The proposed model of synaptic potentiation is a crude way decisions could be made, but it doesn’t describe how the actual memories are stored, i.e. the actual sensory inputs which must be matched.  Over a lifetime of vast inputs, millions of sensory inputs per second, there is an unbelievably large storage capacity and recognition capability, but we’ve never found the hard drive.  (A rough count of the implied capacity appears after this list.)  How do you have a computer without the hard drive?  How do you say you understand anything about a computer if you don’t know where the hard drive is?
4) The pattern-matching mechanism that is clearly needed has never been located, nor has anyone described how it could work.  One author I read who studied the brain said it was just a pattern matching machine.  I agree that 90% of what the brain does seems to be explained and understood as doing this.  However, nobody has been able to show how the brain could possibly do it.
5) When we think, it happens in leaps; we don’t think linearly.  Our brains seem to operate more like the decoherence of a wavefunction collapsing to an idea than like some logical process of a, b, c.
6) Studies have shown that anesthesia, which suppresses consciousness, works by suppressing microtubules in the brain.
7) It is obvious to me that animals and even cells exhibit neural capabilities that are beyond our ability to explain.  Individual cells are able to learn and react to stimuli.  If you look at animal behavior it is clearly creative and not automaton-like.  They are not stupid machines.  They have feelings and in many cases react intelligently.  They can be trained.
8) We have discovered that plants take advantage of quantum effects at high temperatures to improve the efficiency of capturing photons from the sun to power photosynthesis.  Using molecules called chromophores, they are able to put the excitation into superposition to transport the energy of single photons.
9) More recently, a study showed that migratory birds subjected to a magnetic field too small to move a single iron molecule were completely unable to fly along their paths.  When the magnetic field is turned off they can navigate again.
10) Quantum properties were identified in microtubules of the brain this year.
11) I have had dreams where the complexity of the entire dream is as if it had been planned far in advance, requiring active planning ahead of the events in the dream, even though they seem to happen spontaneously within it.
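To put a rough number on the storage claim in point 3 (a sketch only; the inputs-per-second figure is the one assumed there, and the 80-year lifetime is my own round number):

    # Rough lifetime sensory-input count, using the figure assumed above.
    inputs_per_second = 1e6                      # "millions of sensory inputs/second"
    seconds_in_lifetime = 80 * 365 * 24 * 3600   # ~80-year lifetime
    total_inputs = inputs_per_second * seconds_in_lifetime
    print(f"{total_inputs:.1e}")                 # ~2.5e+15 events: petabytes even at 1 byte each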
All of these things have led me, as of this year, to the conclusion that the operation of the brain as a quantum computing machine has, to my mind, been proved.
This is amazing because the brain consists of billions of neurons operating at room temperature and sometimes well above, in a wet environment theoretically very difficult for quantum effects to be sustained in for any period of time.  But it now appears clear that evolution has been able to use quantum computing all along, since the time of plants.  It is therefore hard to understand how people could resist the obvious idea that nature has evolved to use molecules, atoms, and DNA to fabricate quantum machines that leverage quantum effects.
The article is very detailed in how Penrose proposes this works.  It is still missing key pieces, but it moves the ball forward dramatically.
It is now clear that brain waves and the patterns we see are results of the decoherence, and Penrose is even able to calculate some frequencies that match results we have observed.
This explains, for instance, the improvement in intelligence observed from meditation.  Meditation, or calming the brain, seems to slow the beating of the brain and bring more coherence.  In quantum physics terms this means the neurons (or microtubule components) are able to stay in a state of superposition longer and are thus able to make more powerful pattern matches and optimizations.    This is logical.  I had never understood why slowing or quieting the brain could possibly increase intelligence.  Now we understand: anything that lengthens the time of superposition lets more of the quantum magic happen.  More states can be searched.
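As a purely illustrative sketch of that last point (every number here is invented for illustration): a register of n qubits spans 2^n basis states, and the longer coherence lasts, the more quantum operations fit inside the coherence window.

    # Illustrative numbers only: longer superposition = more operations, more search.
    coherence_time = 1e-3      # assumed superposition lifetime, seconds
    op_time = 1e-6             # assumed time per quantum operation, seconds
    ops_in_window = coherence_time / op_time   # 1,000 operations fit in the window
    n_qubits = 20
    basis_states = 2 ** n_qubits               # ~1e6 states held in superposition
    print(ops_in_window, basis_states)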
So many people have attacked Roger Penrose, who in my opinion is one of the undisputed geniuses of our time.  He proved, together with Hawking, the singularity theorems behind black holes, contributed spin networks to mathematics, and made other amazing mathematical discoveries.  Spin networks are the basis for a new unification theory that quantizes space as well as the forces.   This seems to me the most likely way a grand unified theory will be created.  He has contributed Penrose tilings to art, physics, and mathematics, among many other things.  He created the Penrose diagrams important for representing events in spacetime.
This guy is probably smarter than Einstein, and he thinks it’s proved.  I think it’s proved as well.  It has to be.  However improbable something seems, if it is the only way it could be done, it must be the way it works.  The brain was unexplainable before.  This brings the possibility of understanding how the brain could possibly do what it does.
Consider that for $10M you can buy a D-Wave computer with 500 qubits.  The human brain consists of billions of neurons, each with who knows how many qubits, but probably way, way more than 500.  We are saying the human brain has quantum computing capability at least a trillion times that of the D-Wave, which recently was shown to be 5,000 times faster than networks of thousands of traditional computers at pattern recognition.
The brain is truly an awesome machine, and we can now see we are a long way from building a machine capable of competing with it.   That’s reassuring: we humans aren’t likely to be replaced by machines anytime soon.  I worried about that in the early days of AI.  This discovery makes me feel that at least it will be a long time before we humans are made disposable by the machines.

Physics thoughts … more

September 8, 2014

Continuous math is a problem as it doesn’t seem it could possibly correspond to reality

I still have an enormous problem with the fact that 16 or 32 “parameters” exist in physics to describe our reality, each of which appears to be fine-tuned.

I suspect that in determining the probability of life as we know it existing there are far more coincidences and incredible things that had to happen.  The fact is, the more we learn about life and the universe, the more unlikely our existence seems.

The idea of gravitational collapse is interesting

What about “knowing and not-knowing theory”?

How does the existence (or not) of a working quantum computer affect things?

Is there a theory of infinite complexity?

How do computability, completeness, and levels of infinity fit into all this?

How does mathematics relate to reality?

Is it formulaic mathematics or algorithmic mathematics that maybe describes a universe?

Could quantum physics be an algorithm not a formula?

What is time?  A dimension?  An algorithmic step?

Is there an experiment to figure out if we live in an algebraic or algorithmic universe?

Zeno’s paradox complements this

http://www.bottomlayer.com/bottom/argument/Argument4.html

http://www.bottomlayer.com/index.html

space and time must be quantized

renormalization related to trying to treat spacetime as continuous

how can spin and other quantized quantities change with only certain values

non-locality?

Does the ability to represent distance based on gauge mean distance is irrelevant?  Is locality an issue if you assume scale is irrelevant?

Genetics thoughts

September 8, 2014

Genetics bothers me for several reasons:

How many chemicals does the body process that are raw vs are produced by the body?

How does the body/cell regulate the quantities of various chemicals to maintain homeostasis?

How does the body / cell decide which genes to activate in which quantity when?

What are the subsegments of genes that form the machines and how many of these are there?  Do the machines do different things?  Wildly different things?

How do multi-cellular functions and interactions occur and how are those programmed?

What is the number of possible input variations and ways in which the body has to react?

How does the brain interact with the basic body processes?

What things are programmed into the brain?

How many different organs, cell types, different configurations of basic building parts are there?

—-

If you add these all up, it seems to me the amount of information that has to be encoded in the genetic code is impossibly large.

Can we figure out how many genes vs switches and controls there are?

Are some of the genes producing protein machines that can act in complex ways, i.e. measuring something and activating or doing something else?

Can we figure out a numeric way to quantify the complexity of the body’s operation, or the amount of programming required to run or build a human body?

What is the purpose of the microtubules?

Are there microtubules in the DNA?

How could memory be stored in a quantum universe?

How could pattern matching be done by a quantum system?


This is an amazing creature that rebuilds its DNA and merges with other organisms:

http://www.eurekalert.org/pub_releases/2014-09/pu-ioo090814.php


How to start on the problem of AI

September 8, 2014

A minimal AI system:
It must be of a certain size to perceive a domain of a certain size:  a 40-bit system can’t understand a billion bits.  There must be some information limit on the size of brain capacity needed to understand something of a given complexity.

There must be a robust set of inputs that provide lots of data to the system

There must be a robust ability to interact with the environment so the system can cause action and then see results to validate generalizations made

The system requires a powerful pattern matching scheme

The system requires a powerful generalization mechanism and the ability to correct bad generalizations and unlearn them

The system requires that generalizations be made close to the source and then processed automatically, so that only the generalizations are passed up, or generalizations with exceptions are passed up (this is like x but differs in y and z)

The system requires a lot of memory of sequences of generalizations and specific data that can be recalled

The system requires the ability to link generalizations with other generalizations and other inputs, which can in turn combine to form further generalizations

The system requires a motivation to do anything and to self-correct

The system may require active teaching, because it may require lessons planned in advance and ways of testing whether it is producing good results, to facilitate higher learning.  (A toy loop tying these requirements together is sketched below.)
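Here is a minimal sketch of such a loop: inputs, a revisable generalization, action on an environment, consequences, and a motivation to self-correct. It is purely illustrative; every name is hypothetical and the “generalization” is a deliberately trivial stub.

    # Toy sketch of the minimal system above. All names are hypothetical;
    # the lone "generalization" is a single revisable threshold.
    import random

    class ToyEnvironment:
        """Stands in for the robust input and feedback channels required above."""
        def observe(self):
            return random.random()                        # one noisy input
        def respond(self, action):
            return 1.0 if action == "approach" else -1.0  # consequence signal

    class MinimalLearner:
        def __init__(self):
            self.memory = []       # recallable sequence of (input, outcome) pairs
            self.threshold = 0.5   # the generalization, open to being unlearned

        def act(self, x):
            return "approach" if x > self.threshold else "avoid"

        def learn(self, x, reward):
            self.memory.append((x, reward))
            if reward < 0:         # motivation: self-correct a bad generalization
                self.threshold += 0.01 * (x - self.threshold)

    env, agent = ToyEnvironment(), MinimalLearner()
    for _ in range(1000):          # interact, observe consequences, self-correct
        x = env.observe()
        agent.learn(x, env.respond(agent.act(x)))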

The problems to try to explain:

dreams that seem planned in advance

how ideas are formed in dreams to solve problems in real life

how “aha” moments happen

How can the brain learn to do things like complex physical activities, where various inputs are integrated and processed in milliseconds, in coordination with finely tuned physical movement, in small fractions of a second?

Does the body itself learn without the brain?

What kinds of metrics can be applied to understand the scope of the calculation and the pattern matching required to do an activity x?


The problem of Artificial Intelligence is much harder than people realize

September 8, 2014

This is the same mistake that was made in early AI research.   You, like the early pioneers in the field, mistook simple programming algorithms that make a computer look smart for actual human-type intelligence.  Marvin Minsky laid bare this falsity in the early ’80s and the field collapsed for decades.

A human does not examine the cloud (his brain) for sequences of data and produce a result as a computation.    You are mistaking being able to produce fast computers and smart algorithms for actual learning.  A human is a general purpose learning machine.  It starts with nothing and comes to understand everything around it through the input from its senses.  It forms the questions and deduces generalities across a wide spectrum of input sources.  These patterns the brain recognizes go up dozens and dozens of levels of conceptualization that so far are beyond any obvious algorithmic understanding.  We have never been able to decipher the precise process of learning.  We simulate learning by rote algorithmic processes in computers, with the limitation that they are always based on “OUR” preconception of the world and how the process should work.  No mechanism is understood that could do this generally.

Numerous scientists and computer geniuses have tried to build neural network systems, but so far as I am aware they are good at things like recognizing the borders of a box or detecting patterns in data that we are looking for them to recognize.  No generalized framework exists by which that pattern recognition could grow into something persistent, growing, and multi-level, stretching across multiple senses and multiple categories of learning at the same time, and then generalizing from there.

Maybe it is a scale problem.  Maybe it is just a matter of running some neural network program across billions of simulated neurons and letting the thing run for years.  Also, we would have to give it a robust input source with millions of data points every second to process.  Maybe then such a neural system would “learn” and show the kind of growth that we see with humans.  But part of the problem is that you can’t just be an observer in the world.  It is not clear that learning can occur by simply observing.  Interaction with consequences seems to be a critical part of learning; even in humans we get lots of badly learned concepts and wrong generalizations.  Similarly, there must be a feedback system in any neural system: the neural net must be able to control things that affect its input, so it can see the effect of its output, and it must have a motivation for such action.    I think it is possible that no learning machine can be built that doesn’t have, at a minimum, a learning matrix, a set of robust inputs, a set of robust outputs, and a motivation system.   It is also possible there must be some “teacher” element for intelligence to go beyond a certain level by itself.  A teacher serves the purpose of guiding development by setting up scenarios and providing feedback where the environment might not provide it.

Another thing has troubled me about the whole issue of learning, cognition, and intelligence.  It is not clear how “aha” moments happen.  Sometimes the brain makes “leaps” of cognition where it pieces together an unbelievable number of past inputs and directs itself to find the “answer” by “thinking,” and the process somehow has moments where “ideas” pop into the brain.  These ideas are beyond our understanding.  It requires a consciousness in which there is directed thought that perceives the context and, without thinking consciously about things, somehow produces an answer out of apparently nothing.  Sometimes this happens in a dream where conscious thought appears absent.  Yet possibilities are enumerated and eliminated without consciously doing so.  This could be a form of pattern recognition, but it happens without conscious thought.
Another example of this is in dreams.  I sometimes find that dreams demonstrate dramatic examples of having been pre-planned.  Things happen earlier in the dream that later turn out to be essential to later events, in a way that would have taken a lot of thinking to plan in advance.  I have sometimes written computer code seeming to know in advance how many lines a certain amount of code would take, which would have required a substantial amount of advance thinking I was not consciously capable of.  This happens with proofs and numerous thinking exercises where the brain seems to operate below conscious thought, producing the result without obvious computation.  One could say this is simply the machinery of our pattern matching algorithms, but if so, the sophistication and complexity of it is staggering.  It is hard to imagine how to replicate this with any algorithmic process.

What you have described above is learning where the domain of learning is known in advance.  We set the algorithm of how to operate in advance.  The human computer can take seemingly any form of input, conceptual or physical, and process it to produce new conceptual models.  It could be a game, or trying to understand the universe, probing abstract math, or designing complex computer systems.  We are so far from having computers able to even begin on problems like this.   I think you trivialize the brain.

As an example of how little we know: after 50 years of searching we still have not found the basic mechanism of memory.  We have several possible places memory could be stored.  We have several ways it could be done.  The fact that we cannot even locate where the data is stored after 50 years is perplexing and surprising.   We have a LONG way to go.  I realize our ability to do this is growing exponentially, but our progress is zilch in the face of the amazing growth of our knowledge of nature, our tools, and our understanding.  We have tried to build “smart” machines for 50 years, but the best we can do is smarter programs which know how to process information and algorithms faster.   Our algorithms are faster and better, but our approach to the basic problem of learning is completely wrong.  The ways we teach machines to recognize patterns in faces, speech, etc. are totally different from the way humans apparently do it, and while these systems are frequently good, when they make mistakes the mistakes are awful and stupid.  Humans rarely make those mistakes.

The funny thing is I was very depressed early in life thinking we would build smart computers.  Although it was my passion to want to do it, the thought of smart computers scared me a little and made me worry about a lot of big questions.  The lack of progress allowed me to forget those negative thoughts.  It’s become apparent this is WAY harder than we thought 40 years ago.


http://www.digitaltrends.com/cool-tech/this-is-your-brain-on-silicon/

http://www.artbrain.org/perception-selection-and-the-brain/


Global Warming Debate

July 2, 2014

Whatever the result of all the study of the numbers today, the way in which calculations are done, and what errors were made, the thing you are all missing is that the fundamental ongoing issue is data quality! Assuming we debate and eventually conclude what the correct methodologies for handling the data are in terms of computing averages, etc., the fact remains that every day, as new data are entered and things change (however those changes may come about, for whatever reasons), if you are depending on those numbers for serious work you need tools to ensure data quality.

What does that mean? It means that NOAA and other reporting agencies should add new statistics and tools when they report their data. They should tell us things like the following (a sketch of how a few of these could be computed appears after the list):

a) number of infilled data points and changes in infilled data points
b) percentage of infilled vs real data
c) changes in averages because of infilling
d) areas where adjustments have resulted in significant changes
e) areas where there are a significant number of anomalous readings
f) measures of the number of anomalous readings reported
g) correlation of news stories to reported results in specific regions
h) the average size of corrections and direction
i) the number of various kinds of adjustments, and a comparison of these numbers with previous periods.
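As a sketch of how a few of these statistics might be computed from a table of station readings (the column names here are hypothetical, not NOAA’s; assume each record carries its raw value, its adjusted value, and an infill flag):

    # Hypothetical sketch: a few of the statistics above, from a CSV of
    # readings with columns: raw_value, adjusted_value, infilled (0/1).
    import csv

    def data_quality_summary(path):
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))
        n = len(rows)
        n_infilled = sum(1 for r in rows if r["infilled"] == "1")     # (a), (b)
        adjustments = [float(r["adjusted_value"]) - float(r["raw_value"])
                       for r in rows if r["raw_value"]]
        return {
            "pct_infilled": 100.0 * n_infilled / n,
            "mean_adjustment": sum(adjustments) / len(adjustments),   # (h)
            "n_adjusted": sum(1 for a in adjustments if a != 0.0),    # (i)
        }

Publishing the same summary every reporting period would make any drift in the adjustments directly visible.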

What I am saying has to do with the constant doubt that plagues me and others that the data is being manipulated, purposely or accidentally, too frequently. We need to know this, but the agency itself NEEDS to know this, because how can they be certain of their results without such data? They could be fooling themselves. There could be a mole in the organization futzing with data or doing mischief. Even if they don’t believe there is anything wrong and everything is perfect, they should do this because outside folks who doubt them continue to be suspicious of their data.

This is standard procedure in the financial industry, where data means money. If we see a number that jumps by a higher percentage than expected, we have automated and manual ways of checking. We will check news stories to see if the data makes sense. We can cross-correlate data with other data to see if it makes sense. Maybe this data is not worth billions of dollars, but if these agencies want to look clean and put some semblance of transparency into this so they can be removed from the debate (which I hope they want), then they should institute data quality procedures like I’ve described.

Further, of course, we need a full vetting of all the methods they use for adjusting data, so that everyone understands the methods and parameters used and can analyze and debate the efficacy of those methods. The data quality statistics can then confirm those methods appear to be applied correctly. Then the debate can move on from all of this constant doubt.

As someone has pointed out, if the amount of adjustment is large, either in magnitude or in number of adjustments, that reduces confidence in the data. Calculated data CANNOT improve the quality of the data or its accuracy. If the amount of raw data declines, then the certainty declines, all else being equal. The point is that knowing the amount and number of adjustments helps to define the certainty of the results. If 30% of the data is calculated, that is a serious problem. If the magnitude of the adjustments is on the order of magnitude of the total variation, that is a problem. We also need to understand the accuracy of the adjustments we are making. We need continuing statistical validation (not just once, but continuing proof over time that our adjustments make sense and are accurate).

In academia we have people to validate papers, and rigor is applied, to an extent, to a particular static paper for some time. However, when you are in business applying something repeatedly, where data is coming in continuously and we have to depend on things working, we have learned that what works and seems good in academia may be insufficient. I have seen egregious errors by these agencies over the years. I don’t think they can take many more hits to their credibility.

The Brain and our state of understanding NOT

July 2, 2014
http://www.technologyreview.com/featuredstory/528131/cracking-the-brains-codes/
Here is an article telling us the state of the art of understanding the brain.  Some people are waiting for the singularity, the point at which the brain can be read out and stored in digital form so that we can store and replicate individual human minds.   From this paper it is clear we are far from that day.
The paper is trying to be optimistic, but I think the big distinction that needs to be made is between the brain’s “contents and cognition” versus observation of the brain’s action on the external world and its input signals.  The former is completely obscure, whereas we have some ability to observe the latter.   We shouldn’t be so stupid as to think that because we can see a signal from the brain going down a nervous pathway we understand anything more about how that signal was created, i.e. the cognitive process in between. What we have today, observed using electrical methods, are indicators of what kind of information the brain is receiving from our senses, and ideas about what the brain does electrically that seems to cause muscle firing and movement of the human body.  We also seem to be able to recognize some other blunt, possibly secondary, effects in the brain that seem to be indicators of the end result of the brain’s activity.   The basic point is that the actual operation of the brain, to store, to create higher level concepts, to recall exactly, to correlate information, how the brain has a consciousness, a seeming direction of thought and sense of identity, the ability to process information at unbelievable speeds in some cases (for instance an athlete performing very complex actions with the body and senses in tandem with incredible precision), all of these things seem completely impenetrable.
In the article the author talks about how individual neurons seem to be responsible for recognizing whole people.   Given that the brain’s input is a series of spikes, how is it possible a single neuron gets the information to encode something as specific and general as a particular Hollywood star, for instance?  How would a single neuron have enough complexity to record or detect things like this?
The article mentions over 1000 neuron types, and that, for instance, some types of neurons may be able to detect lines or movement of lines, vertical or horizontal.  This may be possible, but the existence of a thousand or more neuron types, and of any preprogramming in the brain, is a terribly complex matter.   I have always been staggered by the low number of genes in the human genetic code.  I now understand genes are simply blueprints for building nano-machines.   Fragments of a gene code up components of these machines, so the machines are composed of common building blocks, repeating patterns of DNA that construct levers and detection mechanisms.  A gene is simply a factory for making a particular type of machine.  There is separate coding in the billions of other DNA fragments that directs how many machines to make, when to make them, etc.   Therefore what was once considered junk DNA is now considered the most crucial DNA, because it is where the actual instructions reside for operating the factories and the machines that run the body.
There is an information problem I don’t think anyone has really thought through: the sheer complexity of all the chemicals and machines and different cell types, the processes to operate cells and each organ or group of cells, and the instructions to manage the interoperation of this giant machine must be extremely complex code.  This code is not like a fixed computer program which is unchangeable and breaks with the slightest unconsidered input.  This machine has adaptive capabilities which it can use to repair itself, to handle scenarios it has seen in the past (possibly the far past, from thousands or millions of years ago, that some previous DNA had to deal with), and to bring together forces to combat detected attacks.  It has the ability to constantly change and to take on augmentations.   We might think each cell operates autonomously, but we know that in fact the system has more global capabilities: far removed systems can be triggered and action taken, and even thoughts in the brain and mood can affect how the factories and the machines themselves work.  There is an awful lot of complexity in any machine composed of so many components, and so many different components.  I can’t even imagine how many different situations the body deals with on a daily basis to maintain homeostasis, to keep everything operating and repaired, to grow muscle, to create copies of cells, copies of humans.
The point is this is an incredibly complex program.  As big as the DNA is, there is a question in my mind whether the coding could POSSIBLY be sufficient to represent the complexity of this machine, let alone trying to understand what the coding mechanism is.  Knowing how many lines of code are needed for the simplest program, it is disturbing to think how much more complex a human body is, consisting of so many different cell types, so many chemicals, so many parts of the puzzle, and all the unbelievable complexity of writing a program to reliably manage it, let alone the repair mechanisms, adaptation mechanisms, and retaining knowledge of past experiences and dealing with those situations when they arise again, i.e. learning.  It is mind boggling, and no computer program is imaginable that could do all this.   We have never imagined, let alone written, anything remotely as complex.
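One way to make this concern concrete is to count the genome’s raw information capacity, a standard back-of-the-envelope (the 3.2 billion base-pair figure is the commonly cited size of the human genome):

    # Raw information capacity of the human genome, back-of-the-envelope.
    base_pairs = 3.2e9            # commonly cited human genome size
    bits_per_base = 2             # 4 possible bases = 2 bits each
    megabytes = base_pairs * bits_per_base / 8 / 1e6
    print(f"{megabytes:.0f} MB")  # ~800 MB: tiny next to the complexity described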
Over and over we are faced with the problem that we don’t understand the basic language of the genetic code or the brain’s coding of information and concepts, the process by which instructions are issued and action taken, how decisions are made, or how processes work beyond the simplest observable action.  It is hard to believe this is all done with chemical reactions, because we know the operation of these chemical reactions is slow compared to the speed needed to operate the machine, to react.    Electrical signals seem so far to be insufficient to explain the brain’s function.   We have studied these electrical patterns and not discerned their meaning.  Possibly there is no meaning, and we are observing a secondary effect, not the primary effect that is going on.
I have no answers.  I am not calling for mystical answers, i.e. god, etc.  I am simply pointing out the pathetic state of our understanding.  We are like little worms crawling around this stuff with so little understanding, but thinking we have a clue.   I am reminded how it is clear to me that animals recognize and communicate somehow.  Studies have shown they can tell each other things.  They work together in some cases.  Yet we have never been able to understand the language of animals.  If these animals are so dumb and we are so smart, shouldn’t we be able to figure out what a penguin is saying to another penguin, or a dog to a dog?   There is clearly something more going on here than we grasp, some more detailed language at play, of which we don’t have a clue and can’t see the regularity.
Maybe we are closer than I think, and if we have one simple break in understanding, one little thing about how the brain or the junk code in the DNA really works, we will suddenly have a path to complete understanding.  Maybe we are close.  It is just so bloody hard to see how this all works.  If the brain were a completely organized thing of some small number of cells organized in a repeating pattern, we could understand how a limited number of instructions could be used to construct a brain and how it could operate with a simple set of procedures.  I remember a book I read which outlined the basic structure of the cortex.  There is a regularity to the cortex: it is composed of 6 layers of cells with some repeating structure.  I could see how such a structure could perhaps become a general learning machine, but the more we learn about things like the eye and all these different neurons, the more complex it turns out to be.  The cortex itself is more complex than that simple explanation; there is a lot of variation in regions of the cortex.  How could the programming in the genetic structure be so detailed as to lay out how to build a cortex and brain (forget the rest of the body)?  Shouldn’t we see massive amounts of DNA related to this?  Yet 80% of the DNA of a fly is similar to a human’s.  The brain is composed of lots of regions, more than simply the cortex, with 1000 cell types and many processes going on at the same time, electrical and chemical.  We have recently discovered microtubules that increase the complexity of interaction between neurons.
Maybe instead of thinking of ourselves as near the ending point of learning about nature, and of how smart we are, we should all take a humble pill and realize we really have no clue.   This came upon me when realizing where we are with our physics knowledge.
We should think of ourselves as really at a Cro-Magnon state.  We have this rudimentary ability to understand things based on our observation of macro phenomena.  We are like early observers of the human body who talked about humours and had no idea what the organs did.  We are blind to so much of what’s probably really going on, which is why it seems so hard to understand how it could possibly operate.   That is exciting, to think there is so much to learn ahead of us, and depressing to think I will probably not be around to see it unfold.


Thoughts about Physics and the nature of everything

April 13, 2014

I’ve read 2 books recently on the topic of life, the universe, and everything.  One is called Biocentrism by Robert Lanza and the other is called Our Mathematical Universe by Max Tegmark.  Both are flawed, but both made me think more deeply about the problem, and while I can’t offer any new physics I have made some observations.

The Collapse of the Classical View

In the early 20th century the fundamental change that essentially de-virginized us (excuse the analogy, but it is actually appropriate) occurred with the unbelievable result of the double slit experiment.  This fundamental, inexplicable result, which has baffled scientists to this day, holds the complete collapse of the classical deterministic view of the world.   I’m not the only one to say so.   Several physicists have called this the fundamental experiment exposing the quantum weirdness that turned physics from a scholarly, straightforward pursuit of linear reasoning into a mind-bending one, with ever more bizarre experiments and results producing ever more bizarre and unbelievable theories.

Please note: I am not criticizing physicists here for doing all this.  I have no better explanation than they do for what we are seeing.  But the fact is that this experiment revealed that nature is far more complex and baffling than we ever imagined, and we have had to construct ever more bizarre theories to explain what we see as we do experiment after experiment.

Scientists with a straight face will try to tell you the world consists of the following facts:

1) 94% of the universe is composed of dark matter and dark energy, which we don’t actually have any understanding of.   We don’t know what these things are and have never seen them, yet they are pervasive, filling space all around us.   We need dark matter fundamentally because calculation after calculation has shown that galaxies would fly apart without the addition of 5 times as much matter as all the visible matter.   Somehow, invisible and all around us, giving this necessary boost to keep galaxies from flying apart, is “invisible” matter amounting to 5 times more than what we see.   Okay, so there are ghosts flying around us all the time, but don’t worry: since it doesn’t interact with us, it is there, trust us.

2) However, dark matter is still a minority of the energy in the universe.  According to our new understanding we need dark energy because, simultaneously with the huge amount of attractive dark matter that keeps our galaxies together, there is a repulsive energy that is pushing the galaxies and everything in the universe apart.   If we do not accept dark matter and dark energy, we have no other plausible theory, or even conceptual theory, that could account for the observed behavior of galaxies and the undeniable fact that the universe is, to our surprise and mystification, actually flying apart.

The data behind these observations is essentially indisputable.  It has now been observed in countless experiments that the universe is expanding faster and faster, and observations of galaxies clearly show the existence of matter we cannot account for that somehow appears hidden from us.  Other theories that have tried to explain these phenomena any other way have so far not worked.

3) That in the first 10^-30th of a second of the universe’s existence, a force called inflation caused an expansion of the universe by a factor in excess of 10^100 in size, in less than 10^-6 seconds.   After this unbelievably sudden explosion of the universe into existence, the expansion stopped.   Various theories purport to explain this inflation, but the fact that inflation occurred is bizarre.   It seems so contrived and convenient that this explosion happened to enable our universe.

4) That this inflation is so big that large parts of our universe today are beyond what we can ever see.   The universe is so large now that we assume from calculations that identical copies of the earth must exist, with human beings on them like you and me, every 10^…. so many light years in all directions, and therefore there are virtually an infinite number of copies of you and me living lives.  This is called the level 1 multiverse.

5) Quantum mechanics tells us that the most likely explanation for the bizarre results observed is that the world lives in superposition with other worlds, a virtually infinite number of parallel worlds in which all possible outcomes of all possible quantum states exist.   This is the level 3 multiverse (in Tegmark’s numbering), in which there are an infinite number of copies of you and me living all possible combinations of lives.

6) There are 32 constants that physicists have found that cannot be tied to any other quantity by necessity: things such as the ratio of the mass of the electron to the proton, the speed of light, the ratio of dark matter to regular matter, the Planck constant, the strength of the strong force, and so on.  These constants appear to be randomly selected.  In fact, one book says a statistical analysis has been made and they appear, to a significant degree, perfectly random.  However, they are not arbitrary.  These constants turn out to be incredibly brittle.  The slightest change in any one of them would make life as we know it in our universe impossible.  Maybe life is possible with the constants slightly modified, or even largely modified, but we know that with a change of less than 1 in a million in the strong force constant we would not have solar systems like we see today composed of carbon and heavy atoms; stars would fail to produce these materials in their explosions.   If the ratio of dark matter to regular matter were changed by even the smallest amount, the entire universe would have imploded, or exploded outward in such a way that no solar systems could have formed, or the universe would have lasted practically no time at all.   We have 3 dimensions of physical space and one dimension of time.  We’ve known for centuries that you cannot form stable orbits in anything other than 3 dimensions; any other dimensionality would result in no planets, no orbits, no stars.   Each of these constants appears to have been tuned to produce the universe we are in, and yet we have no explanation for why these constants are what they are.   The chance these constants would arise at random to be what they are is < 1 in 10^500 according to one book.   So the fact that these constants are selected the way they are points to an almost irrefutable conclusion.   Either there is an explanation for why these constants are as they are, because some law in fact forces them to be what they are, or there must be at least 10^500 universes with all these possibilities existing in them, and we are simply lucky to be in the one where humans can live and think.

7) We are to believe that quantum strangeness is so bizarre that doing things in the future appears to make things you do now different.  While this is confusing, the result is one of these bizarre-beyond-bizarre ideas.  Since entangled particles have to obey certain properties depending on what we know about their behavior, we can only live in universes where these things work out, so that what we do in the past corresponds to what we do in the future.  This entanglement forces us to be unable to do, or see, some things that we should be able to, because of things we do or don’t do in the future.   This doesn’t violate causality.  It is simply that some possible sequences of actions that we think should be doable aren’t.  Universes exist in which we do only a select combination of actions, but not all actions are possible independently.

8) Because we have no good reason to believe that the laws of physics are unique, we probably have to accept the notion of a fourth level of multiverse, in which all possible consistent laws of physics are realized.  This is of course the biggest multiverse of all.

Let me recount the truly staggering state of the results we have found from experiments:

1) Galaxies should be flying apart, so we need something called dark matter which we haven’t seen

2) The universe is expanding very fast, and the only explanation we have is something called dark energy, which itself is more than 10 times the energy of all visible matter in the universe and also hasn’t been seen.

3) We discovered that a massive inflation occurred in the early universe, where bizarrely the universe expanded by more than 10^100 in < 10^-10 seconds and then stopped.

4) The universe is so large now that it is virtually infinite in size, not limited to the 13 billion light years across we thought just a few years ago.  This gives us the level 1 multiverse.

5) The bizarreness of quantum physics forces us to a worldview in which infinitely many universes exist in superposition at any time, with all possible quantum states elaborated from all possible previous quantum states, ad infinitum.

6) That 32 constants have been precisely picked that result in the universe being the way it is; these constants are all extremely brittle, and our universe would collapse or be inhospitable to life as we know it if any of them were changed by even a very, very small amount.  The probability that these constants would arise at random appears to be 1 in 10^500, which means essentially either there must be a god or there are infinite universes with other, inhospitable constant values in them.

7) That causality in time is more complicated than it would appear, and some combinations of actions at different times and places that we think should be doable aren’t.

8) That there are probably an infinite number of universes with different physical laws possible.

I will conclude this blog here, and continue later with where I’ve gone with some of these things and the other strangenesses and bizarre things we are asked to believe.

Bigdata and Privacy

March 28, 2014

I believe we need new laws to deal with the accuracy of information being held about people and the duration for which that data can be held.   For instance, no company should keep information about you for more than 3 years without your explicit permission, not permission buried in a 20-page “legal disclosure” but a separate acknowledgement that you are okay with someone keeping data longer than that.  If you are under 21 the limit should be 1 year by law.   Any data kept after 3 years (1 year if under 21) must be kept in such a way that you can dispute it and find out who has such data by consulting a central registry.  Disputes of the data should be resolved to the benefit of the consumer unless the holder of such data wants to fight and prove the legitimacy of the data.
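As a sketch of how the proposed rule might be encoded (the 3-year and 1-year durations are the ones proposed above; everything else, names included, is hypothetical):

    # Hypothetical encoding of the retention rule proposed above.
    from datetime import date, timedelta

    def must_purge(collected_on, subject_age, separate_consent=False, today=None):
        """True if a record has outlived the proposed retention limit."""
        today = today or date.today()
        limit_years = 1 if subject_age < 21 else 3
        expired = today - collected_on > timedelta(days=365 * limit_years)
        return expired and not separate_consent  # consent must be explicit and separate

    # Example: a 3.5-year-old record about an adult, with no separate consent.
    print(must_purge(date(2010, 9, 1), 30, today=date(2014, 3, 28)))  # True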

Every company I talk to is accumulating vast information about you and me. While I am a big fan of, and excited about, using bigdata to provide better service and smarter, more intelligent services, I also worry that it is an invasion of privacy, or that improper or inaccurate data will cause people problems.  My company WSO2 is trying to build secure solutions and bigdata solutions to enable companies to be intelligent.  It is an awesome responsibility to have personal information about people.   It’s not just a legal responsibility but a personal and societal responsibility to make sure that everyone is treated fairly by the systems we build.

As an Open Source company WSO2 has an obligation to promote transparency and responsible use of data.   We provide our source code to everything we do.  There are no “enterprise licenses.”  I believe our advocacy of open source is also a statement about transparency.   Please let me know if you think my personal ideas about privacy above are reasonable and sound.   I feel very passionately that while a new cyber world is being built, that world shouldn’t be something we fear or are hurt by.   The goal of all this new technology is to make life better.  We must find a way to build this new world so that we and our children want to live in it, and so that it is compassionate and fair.

