Just a little more research turned up the following as well:
Continuous math is a problem as it doesn’t seem it could possibly correspond to reality
I still have an enormous problem with the fact that 16 or 32 “parameters” exist in physics to describe our reality, each of which appears to be fine-tuned.
I suspect that in determining the probability of life as we know it there are far more coincidences and incredible things that had to happen. The fact is, the more we learn about life and the universe, the more unlikely our existence seems.
The idea of gravitational collapse is interesting
What about “knowing and not-knowing theory”?
How does the existence (or not) of a working quantum computer affect things?
Is there a theory of infinite complexity?
How do computability, completeness, and levels of infinity fit into all this?
How does mathematics relate to reality?
Is it formulaic mathematics or algorithmic mathematics that maybe describes a universe?
Could quantum physics be an algorithm not a formula?
What is time? A dimension? An algorithmic step?
Is there an experiment to figure out if we live in an algebraic or algorithmic universe?
Zeno's paradox complements this
space and time must be quantized
renormalization related to trying to treat spacetime as continuous
how can spin and other quantized quantities change if they can take only certain values?
Does the ability to represent distance based on gauge mean distance is irrelevant? Is locality an issue if you assume scale is irrelevant?
Genetics bothers me for several reasons:
How many chemicals does the body process that are taken in raw versus produced by the body itself?
How does the body/cell regulate the quantities of various chemicals to maintain homeostasis?
How does the body/cell decide which genes to activate, in what quantity, and when?
What are the subsegments of genes that form the machines and how many of these are there? Do the machines do different things? Wildly different things?
How do multi-cellular functions and interactions occur and how are those programmed?
What is the number of possible input variations and the number of ways in which the body has to react?
How does the brain interact with the basic body processes?
What things are programmed into the brain?
How many different organs, cell types, different configurations of basic building parts are there?
If you add these all up, the amount of information that has to be encoded in the genetic code seems impossible to me (a rough capacity estimate is sketched below).
Can we figure out how many genes vs switches and controls there are?
Are some of the genes producing protein machines that can act in complex ways, i.e. measuring something and activating or doing something else?
Can we figure out a numeric way to quantify the complexity of the body's operation, or the amount of programming required to run or build a human body?
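One crude numeric starting point for that last question is the raw storage capacity of the genome itself. This is only a back-of-envelope sketch, not a biological claim:

```python
# Rough back-of-envelope estimate; the numbers are approximate and say nothing
# about how the information is actually organized or used.
base_pairs = 3.2e9        # approximate length of the human genome
bits = base_pairs * 2     # 4 possible bases -> 2 bits per base pair
megabytes = bits / 8 / 1e6
print(f"~{bits:.1e} bits, roughly {megabytes:.0f} MB of raw capacity")
```

Only a small fraction of that capacity is protein-coding; how much of the rest acts as the “switches and controls” asked about above is exactly the open question.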
What is the purpose of the microtubules?
Are there microtubules in the DNA?
How could memory be stored in a quantum universe?
How could pattern matching be done by a quantum system?
this is an amazing creature that rebuilds its DNA and merges with other organisms
A minimal AI system (a toy code sketch follows this list):
It must be of a certain size to perceive a domain of a certain size: a 40-bit system can’t understand a billion bits. There must be some information limit relating the size of a brain's capacity to the complexity it can understand.
There must be a robust set of inputs that provide lots of data to the system
There must be a robust ability to interact with the environment so the system can cause action and then see results to validate generalizations made
The system requires a powerful pattern matching scheme
The system requires a powerful generalization mechanism and the ability to correct bad generalizations and unlearn them
The system requires that generalizations be made close to the source and then processed automatically so that only the generalizations pass up or that the generalizations with exceptions are passed up (this is like x but differs in y and z)
The system requires a lot of memory of sequences of generalizations and specific data that can be recalled
The system requires the ability to link generalizations with other generalizations and other inputs, and to combine these links to form still further generalizations
The system requires a motivation to do anything and to self-correct
The system may require active teaching: lessons planned in advance and ways of testing whether it is producing good results, to facilitate higher learning
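To make this list concrete, here is a toy sketch in Python. It is purely illustrative: the environment, feature names, and learning rule are my own hypothetical choices, not a claim about how brains or any real AI system work. It wires together a stream of inputs, a crude pattern matcher, generalizations that strengthen with reward and are unlearned when punished, an episodic memory, and a motivation signal derived from acting on the environment and seeing the consequences.

```python
import random
from collections import defaultdict

class ToyEnvironment:
    """Hidden rule the agent must discover: act "up" when the input is even."""
    def observe(self):
        self.state = random.randint(0, 99)          # a steady stream of raw input
        return self.state
    def feedback(self, action):
        correct = "up" if self.state % 2 == 0 else "down"
        return 1.0 if action == correct else -1.0   # consequences of acting

class ToyAgent:
    def __init__(self):
        self.generalizations = defaultdict(float)   # (feature, action) -> confidence
        self.memory = []                            # episodic record of what happened

    def features(self, state):
        # crude "pattern matching": project the raw input onto simple features
        return [("parity", state % 2), ("band", state // 25)]

    def act(self, state):
        # follow whichever action current generalizations favor; explore sometimes
        if random.random() < 0.1:
            return random.choice(["up", "down"])
        up = sum(self.generalizations[(f, "up")] for f in self.features(state))
        down = sum(self.generalizations[(f, "down")] for f in self.features(state))
        return "up" if up >= down else "down"

    def learn(self, state, action, reward):
        # motivation: reward strengthens a generalization, punishment unlearns it
        for f in self.features(state):
            self.generalizations[(f, action)] += 0.1 * reward
        self.memory.append((state, action, reward))

env, agent = ToyEnvironment(), ToyAgent()
for _ in range(2000):
    s = env.observe()
    a = agent.act(s)
    agent.learn(s, a, env.feedback(a))

recent = [r for (_, _, r) in agent.memory[-200:]]
print("recent success rate:", sum(1 for r in recent if r > 0) / len(recent))
```

Notice that even this toy only learns a rule inside a domain chosen in advance, which is exactly the limitation discussed further below.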
The problems to try to explain:
dreams that seem planned in advance
how ideas are formed in dreams to solve problems in real life
how “aha” moments happen
How can the brain learn to do things like complex physical activities that take small fractions of a second, integrating various inputs in milliseconds in coordination with finely tuned physical movement?
Does the body itself learn without the brain?
What kinds of metrics can be applied to understand the scope of the calculation and the pattern matching required to do an activity x?
This is the same mistake that was made in early AI research. You, like the early pioneers in the field, mistook simple programming algorithms that made a computer look smart for actual human-type intelligence. Marvin Minsky laid bare this fallacy in the early 80s and the field collapsed for decades.
A human does not examine the cloud (his brain) for sequences of data and produce a result as a computation. You are mistaking being able to produce fast computers and smart algorithms for actual learning. A human is a general purpose learning machine. It starts with nothing and comes to understand everything around it through the input from its senses. It forms the questions and deduces generalities across a wide spectrum of input sources. The patterns the brain recognizes go up dozens and dozens of levels of conceptualization that so far are beyond any obvious algorithmic understanding. We have never been able to decipher the precise process of learning. We simulate learning by rote algorithmic processes in computers, which have the limitation that they are always based on “OUR” preconception of the world and of how the process should work. No mechanism is understood that could do this generally.
Numerous scientists and computer geniuses have tried to build neural network systems, but so far as I am aware they are good at doing things like recognizing the borders of a box or detecting patterns in data that we are looking for them to recognize; no generalized framework exists by which that pattern recognition somehow grows into something that is persistent, growing, and multi-level, stretching across multiple senses and multiple categories of learning at the same time and then generalizing from there.
Maybe it is a scale problem. Maybe it is just a matter of running some neural network program across billions of simulated neurons and letting the thing run for years. We would also have to give it a robust input source with millions of data points every second to process. Maybe then such a neural system would “learn” and show the kind of growth that we see with humans, but part of the problem is that you can’t just be an observer in the world. It is not clear that learning can occur by simply observing. Interaction with consequences seems to be a critical part of learning; even in humans we get lots of badly learned concepts and wrong generalizations. Similarly, there must be a feedback system in any neural system: the neural net must be able to control things that affect its input so it can see the effect of its output, and it must have a motivation for such action. I think it is possible that no learning machine can be built that doesn’t have, at a minimum, a learning matrix, a set of robust inputs, a set of robust outputs, and a motivation system. It is also possible there must be some “teacher” element for intelligence to go beyond a certain level. A teacher serves the purpose of guiding development by setting up scenarios and providing feedback where the environment might not provide it.
Another thing has troubled me about the whole issue of learning, cognition, and intelligence. It is not clear how “aha” moments happen. Sometimes the brain makes “leaps” of cognition where it pieces together an unbelievable number of past inputs and directs itself to find the “answer” by “thinking,” and the process somehow has moments where “ideas” pop into the brain. These ideas are beyond our understanding. It requires a consciousness in which there is directed thought that perceives the context and, without consciously thinking about things, somehow produces an answer out of apparently nothing. Sometimes this happens in a dream, where conscious thought appears absent. Yet possibilities are enumerated and eliminated without consciously doing so. This could be a form of pattern recognition, but it happens without conscious thought.
Another example of this is in dreams: I sometimes find that dreams demonstrate dramatic examples of having been pre-planned. Things happen earlier in the dream that later turn out to be essential to the later events, in a way that would have taken a lot of thinking to plan in advance. I have sometimes written computer code seeming to know in advance how many lines a certain amount of code would take, which would have required a substantial amount of advance thinking I was not consciously doing. This happens with proofs and numerous thinking exercises where the brain seems to operate below conscious thought, producing the result without obvious computation. One could say this is simply the machinery of our pattern matching algorithms, but if so the sophistication and complexity of it is staggering. It is hard to imagine how to replicate this with any algorithmic process.
What you have described above is learning where the domain of learning is known in advance. We set the algorithm of how to operate in advance. The human computer can take seemingly any form of input, conceptual or physical, and process it to produce new conceptual models. It could be a game, or trying to understand the universe, probing abstract math, designing complex computer systems. We are so far from having computers able even to start on problems like this. I think you trivialize the brain.
As an example of how little we know: after 50 years of searching we still have not found the basic mechanism of memory. We have several possible places memory could be stored. We have several ways it could be done. The fact that we cannot even locate where the data is stored after 50 years is perplexing and surprising. We have a LONG way to go. I realize our ability to do this is growing exponentially, but our progress is zilch in the face of the amazing growth of our knowledge of nature, our tools, and our understanding. We have tried to build “smart” machines for 50 years, but the best we can do is smarter programs which know how to process information and algorithms faster. Our algorithms are faster and better, but on the basic problem of learning we are completely wrong. The ways we get machines to recognize patterns in faces, speech, etc. are totally different from the way humans apparently do it, and while these systems are frequently good, when they make mistakes the mistakes are awful and stupid. Humans rarely make those mistakes.
The funny thing is I was very depressed early in life thinking we would build smart computers. Although it was my passion to want to do it, the thought of smart computers scared me a little, made me feel it might be dangerous, and made me worry about a lot of big questions. The lack of progress allowed me to forget those negative thoughts. It’s become apparent this is WAY harder than we thought 40 years ago.
Whatever the result of all the study of the numbers today, the way in which the calculations are done, and what errors were made, the thing you are all missing is that the fundamental ongoing issue is data quality! Assuming we debate and eventually conclude what the correct methodology for handling the data is in terms of computing averages, etc., the fact remains that every day, as new data are entered and things change (however those changes may come about, for whatever reasons), if you are depending on those numbers for serious work you need tools to ensure data quality.
What does that mean? It means that NOAA and other reporting agencies should add new statistics and tools when they report their data. They should tell us things like the following (a rough sketch of computing a few of these follows the list):
a) number of infilled data points and changes in infilled data points
b) percentage of infilled vs real data
c) changes in averages because of infilling
d) areas where adjustments have resulted in significant changes
e) areas where there are significant number of anomalous readings
f) measures of the number of anomalous readings reported
g) correlation of news stories to reported results in specific regions
h) the average size and direction of corrections
i) the number of the various kinds of adjustments, and a comparison of these numbers with previous periods.
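As a rough illustration of how cheap some of these statistics would be to produce, here is a minimal sketch in Python. The record layout and field names are hypothetical, not any agency's actual schema; it computes a few of the items above (a, b, h, i):

```python
from statistics import mean

def quality_report(records):
    """Summarize a few of the statistics suggested above (items a, b, h, i)."""
    infilled = [r for r in records if r["infilled"]]
    adjustments = [r["adjusted_value"] - r["raw_value"]
                   for r in records if not r["infilled"]]
    nonzero = [a for a in adjustments if a != 0]
    return {
        "infilled_points": len(infilled),                      # (a)
        "pct_infilled": 100.0 * len(infilled) / len(records),  # (b)
        "avg_adjustment": mean(nonzero) if nonzero else 0.0,   # (h) size and sign
        "num_adjustments": len(nonzero),                       # (i) for this period
    }

# Made-up station readings, just to show the shape of the output:
records = [
    {"raw_value": 14.2, "adjusted_value": 14.5, "infilled": False},
    {"raw_value": 15.1, "adjusted_value": 15.1, "infilled": False},
    {"raw_value": None, "adjusted_value": 14.8, "infilled": True},
]
print(quality_report(records))
```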
What I am saying has to do with the constant doubt that plagues me and others that the data is being manipulated, purposely or accidentally, too frequently. We need to know this, but the agency itself NEEDS to know this, because how can they be certain of their results without such data? They could be fooling themselves. There could be a mole in the organization futzing with data or doing mischief. Even if they don’t believe there is anything wrong and everything is perfect, they should do this because their data continues to be viewed with suspicion by outside folks who doubt them.
This is standard procedure in the financial industry, where data means money. If we see a number that jumps by a higher percentage than expected, we have automated and manual ways of checking. We will check news stories to see if the data makes sense. We can cross-correlate data with other data to see if it makes sense. Maybe this data is not worth billions of dollars, but if these agencies want to look clean and put some semblance of transparency into this so they can be removed from the debate (which I hope they would want), then they should institute data quality procedures like I’ve described.
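Here is a toy version of the kind of automated “jump check” described above; the threshold and the data layout are arbitrary illustrations, not any firm's actual procedure:

```python
def flag_jumps(series, max_pct_change=5.0):
    """Return indices where a value jumps by more than max_pct_change percent."""
    flags = []
    for i in range(1, len(series)):
        prev, cur = series[i - 1], series[i]
        if prev != 0 and abs(cur - prev) / abs(prev) * 100.0 > max_pct_change:
            flags.append(i)  # candidate for manual review and news cross-checking
    return flags

print(flag_jumps([100.0, 101.2, 100.8, 112.5, 101.0]))  # -> [3, 4]
```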
Further, of course, we need a full vetting of all the methods they use for adjusting data, so that everyone understands the methods and parameters used and can analyze and debate the efficacy of these methods. The data quality statistics can then help ensure those methods are being applied correctly. Then the debate can move on from all of this constant doubt.
As someone has pointed out, if the amount of adjustment is large, either in magnitude or in number of adjustments, that reduces the confidence in the data. Calculated data CANNOT improve the quality of the data or its accuracy. If the amount of raw data declines, then the certainty declines, all else being the same. The point is that knowing the amount and number of adjustments helps define the certainty of the results. If 30% of the data is calculated, that is a serious problem. If the magnitude of the adjustments is on the order of the total variation, that is a problem. We also need to understand what the accuracy of the adjustments we are making is. We need continuing statistical validation (not just once, but ongoing proof over time that our adjustments make sense and are accurate).
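A small simulated example (made-up numbers, not real station data) of why calculated data cannot improve accuracy: points infilled by interpolating between raw measurements shrink the apparent standard error, yet they add no independent information about the true value.

```python
import math
import random
import statistics

random.seed(0)
true_mean = 15.0
raw = [random.gauss(true_mean, 2.0) for _ in range(50)]   # genuine measurements
infilled = [(a + b) / 2 for a, b in zip(raw, raw[1:])]     # values computed from the raw ones
combined = raw + infilled

def naive_se(xs):
    # standard error if every point is (wrongly) treated as an independent measurement
    return statistics.stdev(xs) / math.sqrt(len(xs))

print("apparent SE, raw only:      ", round(naive_se(raw), 3))
print("apparent SE, raw + infilled:", round(naive_se(combined), 3))
# The second number looks smaller, but the infilled points are recombinations of
# the same 50 raw measurements, so the real uncertainty about true_mean has not
# gone down at all.
```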
In academia we have people to validate papers, and rigor is applied, to an extent, to a particular static paper for some period of time. However, when you are in business applying something repeatedly, where data is coming in continuously and we have to depend on things working, we have learned that what works and seems good in academia may be insufficient. I have seen egregious errors by these agencies over the years. I don’t think they can take many more hits to their credibility.
I’ve read two books recently on the topic of life, the universe, and everything. One is called Biocentrism by Robert Lanza and the other is called Our Mathematical Universe by Max Tegmark. Both are flawed, but both made me think more deeply about the problem, and while I can’t offer any new physics I have made some observations.
The Collapse of the Classical View
In the early 20th century the fundamental change that essentially de-virginized us (excuse the analogy, but it is actually appropriate) occurred with the unbelievable result of the double slit experiment. This fundamental, inexplicable result, which has baffled scientists to this day, marks the complete collapse of the classical deterministic view of the world. I’m not the only one to say so. Several physicists have called this the fundamental experiment that exposes the quantum weirdness that turned physics from a scholarly, straightforward pursuit of linear reasoning into a mind-bending one, producing ever more bizarre experiments and results and ever more bizarre and unbelievable theories.
Please note. I am not criticizing physicists here for doing all this. I have no better explanation than they do for what we are seeing but the fact is that experiment revealed that nature was far more complex and baffling than we ever imagined and we have had to construct ever more bizarre theories to explain what we see as we do experiment after experiment.
Scientists with a straight face will try to tell you the world consists of the following facts:
1) 94% of the universe is composed of dark matter and dark energy, which we don’t actually have any understanding of. We don’t know what these things are and have never seen them, yet they are pervasive, filling space all around us. We need dark matter fundamentally because calculation after calculation has shown that galaxies would fly apart without the addition of 5 times as much matter as all the visible matter. Somehow, invisible and all around us, giving this necessary boost to keep galaxies from flying apart, is matter that is 5 times more than what we see. Okay, so there are ghosts flying around us all the time, but don’t worry: since it doesn’t interact with us, it’s just there, trust us.
2) However, dark matter is still a minority of the energy in the universe. According to our new understanding we need dark energy because, simultaneously with the huge amount of attractive dark matter that keeps our galaxies together, there is a repulsive energy that is pushing the galaxies and everything in the universe apart. If we do not accept dark matter and dark energy we have no other plausible theory, or even conceptual theory, that could account for the observed behavior of galaxies and the undeniable fact that the universe is, to our surprise and mystification, actually flying apart.
The data behind these observations is essentially indisputable. It has been observed in countless experiments now that the universe is expanding faster and faster, and observations of galaxies clearly show the existence of matter we cannot account for that somehow appears to be hidden from us. Other theories that have tried to explain these phenomena any other way have so far not worked.
3) That in the first 10^-30th of a second of the universe’s existence a force called inflation caused an expansion of the universe by a factor in excess of 10^100 in size in less than 10^-6 seconds. After this unbelievably sudden explosion of the universe into existence, the inflationary expansion stopped. Various theories purport to explain this inflation, but the fact that inflation occurred is bizarre. It seems so contrived and convenient that this explosion happened to enable our universe.
4) That this inflation was so big that large parts of our universe exist today that we can never see. The universe is so large now that we assume from calculations that identical copies of the earth must exist, with human beings on them like you and me, every 10^…. so many light years in all directions, and therefore there are virtually an infinite number of copies of you and me living lives. This is called the level 1 multiverse.
5) Quantum mechanics tells us that the most likely explanation for the bizarre results observed is that the world lives in what is called superposition with other worlds, a virtually infinite number of parallel worlds in which all possible outcomes of all possible quantum states exist. This is the level 3 multiverse in Tegmark’s numbering, in which there are an infinite number of copies of you and me living all possible combinations of lives.
6) There are 32 constants that physicists have found that cannot be tied to any other quantity by necessity: things such as the ratio of the mass of the electron to the proton, the speed of light, the ratio of dark matter to regular matter, the Planck constant, the strength of the strong force, and so on. These constants appear to be randomly selected. In fact one book says a statistical analysis has been done and, to within a significant degree, they appear perfectly random. However, they are anything but arbitrary in their consequences: these constants turn out to be incredibly brittle. The slightest change in any one of them would make life as we know it in our universe impossible. Maybe life is possible with the constants slightly modified or even largely modified, but we know that with a change of less than 1 in a million in the strong force constant we would not have solar systems like we see today composed of carbon and heavy atoms; stars would fail to produce these materials in their explosions. If the ratio of dark matter to regular matter were changed by even the smallest amount, the entire universe would have imploded, or exploded outward in such a way that no solar systems could have formed, or the universe would have lasted practically no time at all. We have 3 dimensions of physical space and one dimension of time, and we’ve known for centuries that you cannot form stable orbits in anything other than 3 dimensions; any other dimensionality would result in no planets, no orbits, no stars. Each of these constants appears to have been tuned to produce the universe we are in, and yet we have no explanation for why these constants are what they are. The chance that these constants would arise at random to be what they are is < 1 in 10^500 according to one book. So the fact that these constants are selected the way they are points to an almost irrefutable result: either there is an explanation for why these constants are as they are, because some law in fact forces them to be what they are, or there must be at least 10^500 universes with all these possibilities existing in them and we are simply lucky to be in the one in which humans can live and think.
7) We are to believe that quantum strangeness is so bizarre that it appears doing things in the future has the effect of making things you do now different. While this is confusing, the result is still one of these bizarre-beyond-bizarre ideas. Since entangled particles have to obey certain properties depending on what we know about their behavior, we can only live in universes where these things work out so that what we do in the past corresponds to what we do in the future, and this entanglement forces us to be unable to do or see some things we should be able to, because of things we do or don’t do in the future. This is confusing, but it doesn’t violate causality. It is simply that some possible sequences of actions that we think should be doable aren’t. Universes exist in which we do only a select combination of actions, but not all actions are possible independently.
8) Because we have no good reason to believe that the laws of physics are unique, we probably have to accept the notion of a fourth level of multiverse in which all possible consistent laws of physics are realized. This is of course the biggest multiverse of all.
Let me recount the truly staggering state of current results from experiments we have found:
1) Galaxies should be flying apart, so we need something called dark matter which we haven’t seen
2) The universe is expanding very fast and the only explanation we have is something called dark energy which itself is more than 10 times the energy of all matter in the universe and also hasn’t been seen.
3) We discovered a massive inflation occurred in the early universe where bizarrely the universe expanded by more than 10^100 in < 10^-10 seconds and then stopped.
4) The universe is so large now that it is virtually infinite in size, not limited to the ~13 billion light years across we thought just a few years ago. This gives us the level 1 multiverse.
5) The bizarreness of quantum physics forces us to a worldview that says that multiple infinite universes exist in superposition at any time, with all possible quantum states elaborated, resulting from all possible previous quantum states, and so on ad infinitum.
6) That 32 constants have been precisely picked that result in the universe being the way it is, that these constants are all extremely brittle, and that our universe would collapse or be inhospitable to life as we know it if any of them were changed by even a very, very small amount. The probability that these constants would arise at random appears to be 1 in 10^500, which means essentially either there must be a god or there are infinite universes that have other, inhospitable constant values in them.
7) That causality in time is more complicated than it would appear, and some things taking place at different times and places that we think should be doable aren’t.
8) That there are probably an infinite number of universes with different physical laws possible.
I will conclude this blog here and continue with where I’ve gone with some of these things and other strangenesses and bizarre things we are to believe.
I believe we need new laws to deal with the accuracy of information being held about people and the duration that data can be held. For instance, no company should hold information about you for more than 3 years without your explicit permission; not permission buried in a 20-page “legal disclosure,” but a separate acknowledgement that you are explicitly okay with someone keeping data longer than that. If you are under 21 the limit should be 1 year by law. Any data kept after 3 years (1 year if under 21) must be kept in such a way that you can dispute it and find out who has such data by consulting a central registry. Disputes about the data should be resolved to the benefit of the consumer unless the holder of such data wants to fight and prove the legitimacy of such data.
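To make the proposed rule concrete, here is a minimal sketch of how such a retention check might be enforced; the record format and helper names are hypothetical, purely to illustrate the 3-year / 1-year idea:

```python
from datetime import date, timedelta

def retention_limit(subject_age):
    """3-year limit by default, 1 year if the data subject is under 21."""
    return timedelta(days=365 if subject_age < 21 else 3 * 365)

def must_purge_or_register(record, today=None):
    """True if the record is past its limit and lacks explicit, separate consent."""
    today = today or date.today()
    held_for = today - record["collected_on"]
    expired = held_for > retention_limit(record["subject_age"])
    return expired and not record["explicit_consent"]

record = {"collected_on": date(2019, 5, 1), "subject_age": 19, "explicit_consent": False}
print(must_purge_or_register(record, today=date(2021, 5, 1)))  # -> True
```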
Every company I talk to is accumulating vast information about you and me. While I am a big fan of, and excited about, using big data to provide better service and smarter, more intelligent services, I am also worried that it is an invasion of privacy, or even that improper or inaccurate data will cause people problems. My company WSO2 is trying to build secure solutions and big data solutions to enable companies to be intelligent. It is an awesome responsibility to have personal information about people. It’s not just a legal responsibility but a personal and societal responsibility to make sure that everyone is treated fairly by the systems we build.
As an Open Source company, WSO2 has an obligation to promote transparency and the responsible use of data. We provide our source code for everything we do. There are no “enterprise licenses.” I believe our advocacy of open source is also a statement about transparency. Please let me know if you think my personal ideas about privacy above are reasonable and sound. I feel very passionate that while a new cyber world is being built, that world shouldn’t be something we fear or are hurt by. The goal of all this new technology is to make life better. We must find a way to build this new world so that we and our children want to live in it, and so that it is compassionate and fair.