In the story Alice in Wonderland, Alice encounters the Cheshire Cat, a cat that can disappear. The cat has a long conversation with Alice and grins; then it slowly starts to disappear, leaving only its grin behind. Alice remarks:

“Well! I’ve often seen a cat without a grin,” thought Alice; “but a grin without a cat! It’s the most curious thing I ever saw in my life!”

To Alice it is easy to associate the grin with the Cheshire Cat, but someone who sees only the grin will not be able to understand where it came from.

My programming started in my school days with BASIC, which was introduced to me as a high-level language. Curiosity and the college curriculum took me a little lower, to assembly and machine code, where I wrote simple addition and subtraction programs. The pain of working at that level made me understand the need for a high-level language. Still, I would often write simple C programs and check the assembly the compiler generated, comparing it with hand-written assembly; I used to think I could optimise the code better by hand, and I preferred low-level code for size and performance.

The idea of writing programs in assembly quickly died down after I learnt to write complex programs in C and Java and could not achieve the same in assembly in the same amount of time; seeing what we wanted appear on the screen plants a grin. I stayed curious, though, and would sometimes look at what happens behind the scenes of my C programs. Fast forward to my current workplace: the way I create a webapp has changed rapidly; all I need is to run a command and I get a skeleton webapp. It is like the Cheshire Cat grinning at me, but there is no cat, only the grin. I understand the grin because I have seen the cat; does it make a difference for people who have not spent time understanding the lower levels? Do people see modern programming tools as magic, or will curiosity drive us to understand what is happening behind the scenes?

 You see, wire telegraph is a kind of a very, very long cat. You pull his tail in New York and his head is meowing in Los Angeles. Do you understand this? And radio operates exactly the same way: you send signals here, they receive them there. The only difference is that there is no cat. – Albert Einstein, when asked to describe radio

One of the hardest things I face as a developer is getting the number of billable hours right on the timesheet. Traditional office work assumed that hours present in the office equalled hours worked, because there was a structure and flow to the work. It is unfortunate that knowledge workers fall into the same bucket, with hours spent at the workplace used as the measure of billable work.

Knowledge workers have to do a good deal of homework to stay up to date. Any task they pick up involves some amount of deep thinking and application of knowledge, and thinking can happen anytime, not necessarily at the workplace. Hypnagogia has provided me solutions to some pressing problems; it is also famously credited with the discovery of the benzene ring structure. If a scientist can benefit (eventually monetarily) from that kind of discovery, then why can we not bill the time we spend thinking about the problem at hand while waiting at a traffic signal or having a shower?

In the book Pragmatic Thinking and Learning, the author describes the L-mode and R-mode of the brain; there is an example of that here. What we usually end up billing as knowledge workers is the work done by the L-mode, but many of the inputs come from the R-mode. The bias towards billing L-mode work makes people spend a good deal of time with tools rather than thinking about the solution and constantly striving to update themselves. It leads to a vicious cycle: if the task at hand requires deep knowledge and application, people work too long without success, which leads to more hours billed without any work done.

We should look at work as a whole outcome rather than measuring it in man-hours or lines of code. That gives individuals the freedom to plan their day and deliver effectively at their job.


Nat Geo’s Air Crash Investigation series gives a good deal of insight into what went wrong behind an air crash or failure. I was watching an episode about the rudder malfunction on Northwest Airlines Flight 85 and how the pilots kept such a large plane under control and landed it safely. The pilots flew the plane by adjusting engine thrust and the ailerons, reached the nearest airport, and landed safely.

By the captain’s account, flying such a large plane manually was a very tough thing. The captain was also a flight instructor, so he had a sound understanding of the dynamics of the plane. When interviewed he said, “Learning to fly manually is an art; sadly, that is a dying art.” Increasingly, planes are designed and manufactured to fly themselves most of the time. This makes the lives of pilots so easy that most of the journey is assisted by the machines. But when the machines fail, or are not designed to handle a failure, can humans take over?

What happens in other professions where computers and machines provide ever-increasing assistance? The result must be the same: when something goes terribly wrong, there will be a disconnect between the human and the machine. From that point onwards, it will be just trial and error unless there is a deeper understanding of the system.

I read this article on leaky abstractions a long time ago. The author states that any non-trivial abstraction is leaky and will eventually put us in a spot. The article also prompted me to dive deeper into anything new I am learning. It is really tempting to hang on to the hello-world examples, or the new hello world of GUIs, the “to-do list”, and get away with a feeling that I have had fair exposure. If I learn only that and do not spend enough time understanding the internals, I will be in a bad spot someday when I least expect it.
