Let's not be pompous scientists



What we do in science: When we pursue scientific careers, we dedicate ourselves to investigation. We ask questions that matter to us, like: "Why do people change the stiffness of their fingers during tactile exploration of an object? Does finger stiffness have anything to do with information transfer? If so, can I use that principle to develop a better probe to localize cancer?" Then we design experiments, test our intuitive hypotheses, and if we see something interesting, we publish the results with evidence. What evidence do we present? Apart from analysis of data we can measure with acceptable sensors, evidence includes mathematical derivations based on certain assumptions: linearity, dropping negligible terms in nonlinear equations so the formulation matches what we can easily solve with known methods, and reasonable simplifications to reduce the order of differential equations.
Conditioned nature of gathering evidence: What we see through man-made sensors is limited by the accuracy, precision, sensitivity, and specificity of those sensors to begin with. For example, an accurate temperature sensor reads close to the true temperature. A precise temperature sensor gives the same reading for the same temperature across repeated measurements. The sensitivity of a temperature sensor is the smallest increment of outside temperature it can detect; that is why the sensitivity of a digital sensor is limited by the number of bits it uses to encode values. The specificity of a temperature sensor is the degree to which it responds only to temperature. Depending on its operating principle, such as the change of a material's resistance with temperature, the sensor may also respond to other things that affect that resistance.
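To make the bit-depth point concrete, here is a minimal Python sketch of how the number of bits caps the smallest increment a digital sensor can report. The measurement range and bit depths are made-up assumptions, not the specifications of any real device.

```python
# Illustrative sketch: bit depth limits the resolution (smallest
# detectable increment) of a hypothetical digital temperature sensor.
# The range and bit depths below are made-up assumptions.

def resolution_celsius(range_c: float, bits: int) -> float:
    """Smallest temperature step an ADC with `bits` bits can encode."""
    return range_c / (2 ** bits)

measurement_range = 125.0  # degrees C, assumed full-scale range

for bits in (8, 12, 16):
    step = resolution_celsius(measurement_range, bits)
    print(f"{bits}-bit sensor: smallest detectable step = {step:.5f} C")
```

However fine the electronics, no change smaller than that quantization step can ever appear in the saved data.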
Another factor about measurements is that we can do anything with the sensors up until the data are saved. For example, before collecting EMG data of muscle activity, we can clean the skin as much as possible, use different stickers to attach the sensor firmly, instruct the participant to perform the task to a given quality, and so on. However, the moment the data are saved on the computer, they suddenly become sacred evidence of the truth! And whatever we conclude from those data becomes scientific! Yet things keep bugging many scientists' minds, like "that second sensor that I think came off the skin at some point…". But there is no scientific evidence to prove that it came off the skin. So we spend days using various statistical methods to prove that "that bit of data does not reflect the truth". Who came up with statistical theories? Statisticians, another scientific community! Do their theories remain the same? No. Every year we get a new statistical test that does a "better job", like the Mann-Whitney U-test being better suited than the simple t-test for comparing two groups with non-Gaussian distributions. But at any given time, the best known statistical test is the judge.
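As a small illustration of how the "judge" changes with the test, the sketch below runs both a t-test and a Mann-Whitney U-test on the same skewed two-group data. The distributions, sample sizes, and random seed are arbitrary assumptions chosen for demonstration.

```python
# Illustrative sketch: on skewed (non-Gaussian) data, the two "judges"
# can disagree. Distributions and sample sizes are arbitrary choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two groups drawn from lognormal distributions with a small shift:
# heavily skewed, so the t-test's normality assumption is violated.
group_a = rng.lognormal(mean=0.0, sigma=1.0, size=40)
group_b = rng.lognormal(mean=0.5, sigma=1.0, size=40)

t_stat, t_p = stats.ttest_ind(group_a, group_b)
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"t-test p-value:       {t_p:.4f}")
print(f"Mann-Whitney p-value: {u_p:.4f}")
```

The same data, two different verdicts on "significance", depending only on which test happens to be in fashion.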
Why do things work in waves? Finally, we tell the community that there is something cool in our findings at the most prestigious conferences, and then we extend the paper into a journal article. Off we go with the cycle of scientific investigation. Then we watch how our bar charts on Google Scholar rise with new citations. We build an ego around it, and somehow, for each paper in this world, the number of citations rises and then falls. This has been true for virtually every paper published in the name of science in the history of human civilization. Why do the citations rise and then fall? Because of the very nature of the causes and conditions of scientific evidence. We used sensors to collect data, remember? With the passage of time, sensor manufacturers produce better sensors with better accuracy, precision, sensitivity, and specificity.
When I was a student, force sensors used to be so noisy. Now we have six-axis force-torque sensors with reasonable accuracy and little cross-talk among axes. Accelerometers used to drift so much that when you double-integrated to get position, somebody who walked just two meters ended up going through the walls. Now we have better body-mounted accelerometers. So, some fine day, somebody else is going to check what we published with better sensors and conclude that there is more to what we said some time back. In the worst case, they conclude that what we said was wrong! Then, sometime later, somebody else proves that they too were wrong. This goes on and on.
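The drift problem is easy to see in a toy simulation: a tiny constant bias in the acceleration signal becomes a quadratically growing position error after double integration. A minimal sketch, with made-up bias, noise, and sampling numbers:

```python
# Illustrative sketch: a small constant accelerometer bias turns into
# quadratic position error after double integration. All numbers here
# (bias, noise level, duration, sample rate) are made-up assumptions.
import numpy as np

dt = 0.01                      # 100 Hz sampling, assumed
t = np.arange(0.0, 10.0, dt)   # 10 seconds of standing still

true_accel = np.zeros_like(t)  # the person is not actually moving
bias = 0.05                    # m/s^2, assumed sensor bias
noise = np.random.default_rng(1).normal(0.0, 0.1, t.size)
measured = true_accel + bias + noise

velocity = np.cumsum(measured) * dt   # first integration
position = np.cumsum(velocity) * dt   # second integration

print(f"Apparent position after 10 s of standing still: {position[-1]:.2f} m")
# The bias alone contributes 0.5 * 0.05 * 10^2 = 2.5 m of drift.
```

A person standing perfectly still "travels" meters in the reconstructed data, purely because of the sensor.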
It can also be that, sometime later, some group of statisticians finds a more refined way to test what we tested some time back. The new statistical tool may invalidate our significance test. A good example: one journal in social psychology (Basic and Applied Social Psychology) has banned the use of p-values and confidence intervals as inherently flawed.
If the evidence was based on mathematical derivations, somebody else may use a more powerful computer to run simulations that retain the nonlinear terms we dropped for convenience. That may lead to a more refined finding. So the number of citations to our paper drops, and citations to the new publications start to rise.
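As a toy example of what dropping a nonlinear term can cost, the sketch below integrates a pendulum with and without the small-angle approximation (sin θ ≈ θ). The length, amplitude, and duration are arbitrary assumptions.

```python
# Illustrative sketch: dropping a nonlinear term changes the answer.
# Linearizing a pendulum replaces sin(theta) with theta; at large
# amplitudes the two models diverge. Parameters are arbitrary.
import numpy as np
from scipy.integrate import solve_ivp

g_over_l = 9.81  # g/L with L = 1 m, assumed

def nonlinear(t, y):
    theta, omega = y
    return [omega, -g_over_l * np.sin(theta)]

def linearized(t, y):
    theta, omega = y
    return [omega, -g_over_l * theta]   # sin(theta) ~ theta: term dropped

y0 = [np.radians(60.0), 0.0]            # large initial swing
t_eval = np.linspace(0.0, 10.0, 1000)

full = solve_ivp(nonlinear, (0, 10), y0, t_eval=t_eval)
approx = solve_ivp(linearized, (0, 10), y0, t_eval=t_eval)

max_gap = np.max(np.abs(full.y[0] - approx.y[0]))
print(f"Largest angle disagreement over 10 s: {np.degrees(max_gap):.1f} degrees")
```

At small swings the two models agree; at large swings the "convenient" model quietly tells a different story, which is exactly the opening a later researcher with more computing power will exploit.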
What is wrong with this reality? Well, nothing. Were we wrong to publish something that would later be challenged or reduced in importance? No, absolutely not. It was because we published that somebody ventured to test it in a better way. What is correct to keep in mind is that the scientific evidence we present is the "best possible evidence" we can provide to suggest a possible reality in nature. We are wrong only if we believe, with a big pompous ego, that the evidence we present is absolute. All science is conditioned upon the tools we use in the process of gathering evidence. The tools are subject to change, and so does the evidence that is conditioned on them.
Beware of flaws in hypothesis testing: This brings us to the topic of hypothesis testing. In more than 95% of the papers I have read in my life, the initial hypothesis turned out to be right. Does this mean that in 95% of cases human intuition is scientifically right? It could also be that people re-formulate the hypothesis as they see the experimental evidence, to match the politics of reviewing. Many scientific friends agree that the chances of getting a paper accepted in a journal are pretty low if the hypothesis was proven wrong. When I was a student, I was very puzzled by this. In one experiment, I had a gut feeling that the steady-state variability of walking is predominantly governed by the distribution of restitution and friction across the surface of the terrain, and I wanted to test it. Then a colleague whom I respect deeply asked me to first break free from my belief and consider other plausible causes and conditions, like starting speed, leg lengths, surface roughness, and so on. Simulations of all these scenarios still suggested that what I thought was right. However, a little pause to look at what was going on in my mind showed me a strong inner compulsion to prove my initial idea right. In essence, I was using the scientific politics of experimental evidence to prove what I already believed. This leads to suffering, because something deep inside keeps telling us that the mental models we construct about nature are not entirely compatible with what is truly there, or at least it tells us not to be so confident about what we claim in our publications. This realization changed the way I planned the experiment, and I became detached from the expected outcomes. That sense of equanimity made my life stress-free, because I could accept any experimental outcome.
Another important thing to keep in mind is that any hypothesis we formulate is based on the precondition that it can be tested with measurable variables (causes and conditions). We conveniently drop what is not measurable. Does this mean that those non-measurable causes and conditions do not exist? Take a simple example: your favorite dish that your mom cooks. You can test the hypothesis that the taste depends on the proportions of ingredients she adds, or on the timing of her heat control. But can you test the hypothesis that the taste also depends on the state of her mind, or the love she felt for the family while cooking? With current technology it is difficult to measure emotion, consciousness, love, and the like. Does this mean those variables play no role in the set of causes and conditions that decide the taste of what mom cooks? No. It may be that emotional states have a significant impact on the variability of the measurable things, like the choice of ingredient proportions or the timing of heat control. But we conveniently get away with our limited set of testable hypotheses by saying, "well, cooking is a stochastic process; the taste depends on X, Y, and Z, and there is variability in the way X, Y, and Z are regulated". What is left unsaid is, "we really don't know why there is variability; we didn't dare ask whether the variability may depend on non-measurable emotional factors".
Importance of being humble about our conclusions: Therefore, I encourage my PhD students to keep this in mind and not be brainwashed into believing that scientific conclusions are absolute, or that what cannot be measured with a sensor does not exist in the set of causes and conditions leading to an outcome! Some people take the stand that no belief is true until it is scientifically proven. A better stand, I suggest, is that any belief can be true until it is consistently proven wrong across all advancements of sensing technology and mathematical tools. This does not mean we have to believe things that are not scientifically proven, but it leaves room in our hearts to accommodate people who believe in things we do not agree with on scientific grounds. It is also important to live a life free from frustration. One main source of frustration in a scientific life is the compulsion to defend things we thought were scientifically right. It is hard, but important, to test things with total detachment and with full awareness of the limitations of our conclusions. Let's try not to be attached to what we want to see at the end. Let things be as they are, let observations fall out of the data and derivations, and know that what we see is only a fraction of the total set of causes and conditions of a phenomenon. There can be other things we cannot see through sensors and mathematical derivations. This has recently helped me improve the quality of my life.
