Saturday, December 07, 2013

Better algorithms v/s micro-optimization

As a kid, I participated in a game that involved bouncing a tennis ball on the ground with your hand. The winner was the one who could bounce the ball the most number of times in 60 seconds.

All the little ones (including me) went first (since the game was primarily meant to keep us busy). I was the top scorer with a little over 70 bounces in 60 seconds. What I did to get there was go as fast as I possibly could without losing control of the ball, focusing heavily on where the ball was at all times.

Then the grown-ups started, and for a while no one was able to beat that score. Then one smart dude knelt down when the signal to start was given. Everyone else had already started bouncing their balls and was into their second bounce, while this guy was taking his own time getting settled into his squatting position. When he was ready, he started bouncing the ball, and boy did he go fast! Being closer to the ground meant each bounce covered a much shorter distance, so he had just out-smarted everyone else with a better algorithm for getting more bounces in the same time duration.

Better algorithms are like bicycles for the mind.

Before we had sorting algorithms that ran efficiently [O(n log n)], we had micro-optimizations applied to every known O(n²) sorting algorithm in an attempt to make it perform fewer comparisons, or exit early, and hence run faster. Fixing the algorithm, however, was the real game changer.

Saturday, October 26, 2013

Javascript as a language is really darn good

If you look at computer languages as tools to help you convert thought to code (ignoring other real-world practical considerations such as efficiency, developer availability, platform pervasiveness, library availability, etc...) then my opinion is that Javascript (in the form of node.js as we know it today) really stands out. Let me explain why I feel so.
  1. Javascript is a small and minimal language with few (redundant) constructs, and it can be taught and remembered easily. This helps both the person writing javascript and the person reading it. When you are expressing your ideas, you want to know what tools you have and want to be able to fit them in your head. At the same time, when you (or someone else) revisit your code, you want the reverse transfer of information (i.e. code to idea) to be quick and unambiguous, without the reader getting confused trying to figure out the language's syntax and semantics. To explain,

    • The fact that functions are first-class objects, and that the lambda syntax isn't different (or otherwise special) compared to the standard function definition syntax, helps.
    • As does the fact that objects/maps (hashes), arrays, etc... all share the same get/set syntax (see the short sketch after this list).

  2. There's very little fluff in the language; few keywords feel useless. The only ones I find excessive are 'new', 'prototype', and 'function'. Coffeescript solves this to some extent.
What this means is that it's very easy to get started with your first program once you have an idea of what it is that you want to express in code. Javascript lends itself well to elegant code.
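
To illustrate those two points, here's a tiny sketch (variable names are mine) of a function being stored and passed around like any other value, and of the identical subscript syntax for arrays and objects:

var square = function (x) { return x * x; };   // a "lambda" is just a function value

function apply(f, value) {   // functions can be passed around like any other object
  return f(value);
}

var scores = [ 3, 1, 4 ];            // an array
var person = { 'name': 'blogger' };  // an object (map)

console.log(apply(square, 5));   // 25
console.log(scores[1]);          // 1 -- same get syntax...
console.log(person['name']);     // blogger -- ...for arrays and objects
scores[1] = 10;                  // and the same set syntax too
person['age'] = 5;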

Since there is no main() function in a node.js script, you don't need to wrap all your executable code in a main() function. You won't believe how incredibly useful I find this, even though I have programmed in C and am used to that style of programming. This also means that each file can have its own test() function that is run by just invoking:
$ node file_name.js
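
For instance (adder.js is a made-up file name), a file might define a test() function and call it only when run directly; require.main === module is the standard node.js check for that:

// adder.js (a made-up example)
function add(x, y) {
  return x + y;
}

function test() {
  console.log('add(2, 3) =', add(2, 3));   // expect 5
}

// Run the tests only when this file is executed directly
// (i.e. "$ node adder.js"), not when it is require()'d.
if (require.main === module) {
  test();
}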

Declaring arrays and objects (maps) in javascript is really easy, and anything that can reasonably be done in one line is done in one line. For example, declaring an array a that has 4 elements, 44, 31, 21, and 23 is:
var a = [ 44, 31, 21, 23 ];

Declaring an object o that maps key 'name' to 'blogger', key 'url' to 'http://blogger.com/', and key 'age' to 5 is as simple as:
var o = {
  'name': 'blogger',
  'url': 'http://blogger.com/',
  'age': 5
};

Notice how values in maps can be of different types. This is also true of elements in an array.

Converting a number (or any other type) to a string isn't rocket science either.
var sint = String(89);

To test my belief, I've started writing a toy regular expression parser and evaluation engine in javascript. It will have some intentional holes (un-implemented features) that you can fill in later, so that you get a feel both for how regular expressions work and for how javascript handles as a language.
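
This is not the toy engine mentioned above - just a minimal, self-contained full-string matcher (names are mine) supporting only '.' (any character) and '*' (zero or more of the previous character), to give a flavour of how such an engine can be structured in javascript:

function matches(text, pattern) {
  if (pattern.length === 0) {
    return text.length === 0;   // an empty pattern matches only an empty string
  }
  var firstMatches = text.length > 0 &&
                     (pattern[0] === '.' || pattern[0] === text[0]);
  if (pattern.length >= 2 && pattern[1] === '*') {
    // 'x*' either matches zero characters, or one more 'x' followed by 'x*' again
    return matches(text, pattern.slice(2)) ||
           (firstMatches && matches(text.slice(1), pattern));
  }
  return firstMatches && matches(text.slice(1), pattern.slice(1));
}

console.log(matches('aab', 'c*a*b'));   // true
console.log(matches('aab', 'a*b*c'));   // false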

Sunday, April 21, 2013

Inside the guts of Kadane's algorithm OR Understanding the Maximum Sum Subarray Problem

Kadane's algorithm is used to solve the 1-dimensional Maximum Sum Sub-array Problem in computer science.

Let's try to understand how it really works. If we are given the problem of finding the maximum sum sub-array of a given array, the first naive approach we can try is the O(n²) algorithm of starting at every array index and computing the sum from that index to every index after it. This works and gives us the correct answer, but we should ask ourselves if we can exploit certain properties of the problem to speed up the solution.

Let's try and dig deeper into an example.

Consider the following array & its corresponding cumulative sum subarray:

Element:         10   -5   -2    7    1   -5   -3    2    4   -3    6  -21    5   -2    1
Cumulative Sum:  10    5    3   10   11    6    3    5    9    6   12   -9   -4   -6   -5

Some observations:
  1. A maximum sum sub-array can never have a negative number as one of the elements at the end-points, except of course if every element in the array is a negative number. If a maximum sum sub-array had a negative number on one of its end-points, we could remove that element and increase the value of the sum, thus getting a sub-array with a larger sum. Conversely, a maximum sum sub-array always starts and ends with a non-negative number, unless of course all the numbers in the array are negative.

  2. We can always clump together runs of consecutive negative numbers and runs of consecutive non-negative numbers, since once we encounter a negative number the sum always drops, and once we encounter a non-negative number the sum never drops.

  3. If the running sum of a sub-array ever falls below zero, no solution will ever include the negative number that caused the sum to fall below zero, since it over-powers the positive sum accumulated before it. Note: we only speak of one negative number because of the clumping point above.

  4. An extension of the point above implies that the new running sum that begins once a cumulative sum falls below zero always starts from the immediately following non-negative number.
Applying the rules above, we get the following recurrence:

MaxSum[i] is the Maximum Sum ending at index i.

MaxSum[i] = Array[i]                if i = 0
MaxSum[i] = MaxSum[i-1] + Array[i]  if MaxSum[i-1] + Array[i] >= 0
MaxSum[i] = 0                       if MaxSum[i-1] + Array[i] < 0
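
As a quick sanity check, here is a minimal javascript sketch of this recurrence (names are mine). The answer to the original problem is the largest value MaxSum takes at any index, and, like the recurrence, it returns 0 for an all-negative array (i.e. the empty sub-array):

function maxSumSubarray(array) {
  var maxSum = 0;   // MaxSum[i] from the recurrence
  var best = 0;     // the largest MaxSum[i] seen so far
  for (var i = 0; i < array.length; ++i) {
    maxSum += array[i];
    if (maxSum < 0) {
      maxSum = 0;   // observation 3: a negative running sum never helps, start afresh
    }
    if (maxSum > best) {
      best = maxSum;
    }
  }
  return best;
}

console.log(maxSumSubarray([ 10, -5, -2, 7, 1, -5, -3, 2, 4, -3, 6, -21, 5, -2, 1 ]));  // 12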

The re-written array (which now consists strictly of alternating negative and non-negative numbers) and the new cumulative sum sub-array are (the row "Cumulative Sum" below represents the MaxSum variable above):

Index:            1    2    3    4    5    6    7    8    9   10   11
Element:         10   -7    8   -8    6   -3    6  -21    5   -2    1
Cumulative Sum:  10    3   11    3    9    6   12    0    5    3    4

Sunday, March 17, 2013

How to give a talk/presentation - Wisdom from Dr. Bender

(The content of this post is mostly stolen from (okay, inspired by) this post by my friend Deepak.)

This Spring, I took a course called CSE638 (Advanced Algorithms) under Professor Michael Bender. This was a mostly research-oriented course, with a focus on doctoral students. An important part of doing Academic Research is talking about/presenting your material at conferences and seminars. Professor Bender spent a couple of classes discussing this, and I thought I'd list down the stuff he mentioned - more as a note to myself.
  1. Make the talk prefix-competitive
    If somebody dozes off, the person should have got the best part of your presentation/content before he/she dozed off. Bring the best part of your talk to the front. Make it prefix-competitive.
  2. Stand in front of the screen, Smile :)
    Do not stand away from the screen, in some corner behind the podium. Stand in front of the screen. Let the Projector's display glow on your face. Stand confidently in front of the screen - okay now, don't block the screen, but stand to the edge of the screen, and Smile while you speak. Convey enthusiasm. "People like seeing faces". Facebook has become popular for a reason ;)
  3. Diagrams and Figures!
    Have plenty of diagrams and figures on each slide.
  4. Refer to every slide & everything on each slide
    Refer to each and every item on your slide. If there is a diagram, explain everything. If you're not going to explain something, refer to it and say that you're not going to explain it. Either way, please do refer to everything on your slides!
  5. Touch the screen, point to it
    Don't be afraid of the screen. Touch it. While explaining the diagrams and figures, stand over the screen and explain stuff by physically touching the screen with your fingers/hands. That very act conveys confidence.
  6. Results Up Front
    If your talk is about the results, let the results be as far upfront as possible. Build the context as soon as possible, and announce what you did.
  7. Explain the Title
    Spend time explaining the title. And the background. This might seem to conflict with point-4, but it doesn't. That's the trick.
  8. Give credit wherever possible, and do this in the beginning
    When giving credit to others, please try to do this when you start rather than when you end. People like hearing about other people. Did I mention something about facebook earlier?
  9. Explain why the problem is important
    This I feel is one of the biggest take-aways from the talk. Even if the audience doesn't understand the solution, they should understand why we need a solution in the first place.
  10. Make use of plots effectively
    Explain the axes, and know what to plot. Refer to this post by Gaurav to know more about what this means.
  11. Know your audience
    For example, presenting the workings of a toaster to a homemaker is different from presenting it to an electrical engineer. You'll need to motivate the problem differently in both cases. Same goes with presenting it to someone who has had a toast before v/s someone who hasn't. I find that Dr. Dan Gusfield does a brilliant job of motivating problems before presenting the solutions (related to point-7).
  12. There are a couple more about Jokes, Color Schemes, etc. which I can't recall.
There now, nobody can stop you from giving an awesome talk!

PS - If you're from Stony Brook, and find that I've missed something, feel free to write in the comments. Thanks!

Update: Found this article online.

Thursday, March 14, 2013

PMA: The Packed Memory Array

Thanks to Deepak & Gaurav for proof-reading this post and providing useful feedback!

The Packed Memory Array (pdf) is an interesting structure to say the least, since it keeps its elements almost contiguously (and in sorted order) in memory, with no additional indirection via pointers like a usual array, and still allows one to perform fast - O(log² N) - updates to the array.

We know that inserting elements in sorted order into an array without keeping gaps between consecutive elements costs O(n) per insert, whereas searching for an element can be done using binary search with a cost of O(log n). These are tight upper bounds, but the story is a little different for randomly arranged data. If one is inserting random data into a sorted array with gaps being maintained between consecutive elements, the expected time to insert a single element magically falls to O(log n)! Now, what just happened here? To find out, read more in Bender, Farach-Colton, and Mosteiro's paper titled Insertion Sort is O(n log n) (pdf). On the other hand, if we don't permit gaps between elements, the total cost of inserting n elements into an array in sorted order is O(n²), even for randomly ordered data - why? (hint: refer to the expected case analysis of quick-sort).

The simple idea is to not pack all elements together, but to maintain some gap between consecutive elements. We shall see that if we follow this simple idea, then the cost for insertion falls to O(log² n) amortized worst-case. This is the packed-memory-array (PMA). We however need to formalize the idea a bit and set some rules of the game before we get ahead of ourselves.

We'll start off by assuming that we already have a PMA that holds n valid elements. One of the invariants we have for the PMA is that it should be more than 0.25x full (this is called the fullness threshold), i.e. if the PMA has space for 4n elements, then there should be at least n actual elements in the PMA. Any less, and we should re-size the PMA to have space for 2n (not n) elements (this is also part of the fullness threshold). The reason we maintain extra space in the PMA is so that we have room to re-balance elements, and so that re-balances involving a lot of elements don't happen too frequently.

Let's just focus on insertions for now. The PMA is organized as a contiguous array of slots, which might be used or free. Conceptually, we break this array of size N into N/log N blocks, with each block holding log N elements. We'll see why this is helpful. If we look at a PMA as being made up of such blocks of size log N each, then we can view the PMA as a binary tree (conceptually), with each level having different fullness thresholds.

The algorithm for inserting elements relies heavily on the upper density threshold whereas the algorithm for deleting elements relies heavily on the lower density thresholds. For the sake of brevity, I shall only discuss insertion (not deletion).

Algorithm: When we insert elements into the PMA, we follow these steps:
  1. Locate the position to insert the element into. We either know this before-hand or we perform a binary search which costs O(log² N).
  2. If the cell we want to insert into is free, we just add the element, and mark the cell as used. We are done!
  3. If however, the cell is used, we compute the upper density threshold for the smallest block (of size log N) that the desired cell falls within, and check if the upper density threshold would be violated. If we notice that there is no violation, we just re-balance all elements including the new one into that block. We are done. If we violate the upper density threshold, we consider a block twice as large (which includes the cell we will be inserting into) and check if the density threshold is violated. We repeatedly move up till we find a chunk for which the upper density threshold is not violated.
  4. If we fail to find such a chunk, we just allocate an array twice as large and neatly copy all the existing elements into the new array with constant sized gaps between elements! (A simplified sketch of these steps follows below.)
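
Just to make the steps above concrete, here is a heavily simplified javascript sketch. All names, the fixed leaf size, and the 1.0-to-0.5 threshold constants are my own choices (not the paper's); it assumes the array length is a power of two, numeric elements, and that the caller has already found the insert position via binary search:

var LEAF = 4;   // block size; a real PMA would use roughly log2(N)

// Number of used (non-null) slots in pma[start .. start + len - 1].
function used(pma, start, len) {
  var count = 0;
  for (var i = start; i < start + len; ++i) {
    if (pma[i] !== null) count++;
  }
  return count;
}

// Remove and return all elements in the given window.
function collect(pma, start, len) {
  var items = [];
  for (var i = start; i < start + len; ++i) {
    if (pma[i] !== null) {
      items.push(pma[i]);
      pma[i] = null;
    }
  }
  return items;
}

// Spread 'items' evenly over the window, leaving roughly equal gaps.
function spread(pma, start, len, items) {
  var gap = len / items.length;
  for (var j = 0; j < items.length; ++j) {
    pma[start + Math.floor(j * gap)] = items[j];
  }
}

// Insert 'value' at slot 'pos' (found earlier by binary search).
// Returns the (possibly re-allocated) PMA.
function pmaInsert(pma, pos, value) {
  if (pma[pos] === null) {   // step 2: the slot is free
    pma[pos] = value;
    return pma;
  }
  var levels = Math.round(Math.log2(pma.length / LEAF)) + 1;
  var len = LEAF;
  for (var level = 0; len <= pma.length; ++level, len *= 2) {
    var start = Math.floor(pos / len) * len;                  // the window containing 'pos'
    var tau = 1.0 - 0.5 * level / Math.max(1, levels - 1);    // upper density threshold
    if ((used(pma, start, len) + 1) / len <= tau) {           // step 3: threshold not violated
      var items = collect(pma, start, len);
      items.push(value);
      items.sort(function (a, b) { return a - b; });          // a real PMA would merge in O(len)
      spread(pma, start, len, items);
      return pma;
    }
  }
  // Step 4: even the whole array is too dense; double it and spread everything out.
  var all = collect(pma, 0, pma.length);
  all.push(value);
  all.sort(function (a, b) { return a - b; });
  var bigger = new Array(2 * pma.length).fill(null);
  spread(bigger, 0, bigger.length, all);
  return bigger;
}

// Example (length must be a power of two and at least LEAF):
// var pma = new Array(16).fill(null);
// pma = pmaInsert(pma, 0, 42);
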
Analysis: We analyze the cost to insert an element into the PMA.

Pre-requisite: Weight balance for fun and profit
  1. The upper (and lower) density thresholds are arranged so that they grow arithmetically from the top (root) level to the bottom (leaf) level.
  2. The difference in density thresholds is 0.5, and we have log N levels, so if we want to maintain a constant arithmetic difference between levels, there must be a difference of 0.5/log N between each level. This is a difference of Δ = O(1/log N) between each level.
  3. Q. What is number of elements that should be inserted at a certain level to bring it out of balance?
    A. Clearly, if a level has room for N elements and was previously within its density threshold, it can go out of balance only after enough elements are inserted to bridge the threshold gap Δ, which means that Ω(ΔN) elements were inserted into this level.
  4. Q. What is the number of element moves we need to bring a level back into balance?
    A. If a level is out of balance, we typically go up till a level within density thresholds is located and re-balance it. Ideally, going up one level should do the trick, so to re-balance a level containing N elements, Θ(N) operations are sufficient.
  5. Therefore, the amortized cost to re-balance a level is O(N / ΔN) = O(log N).
  6. However, we must not forget that an element insertion affects the thresholds of O(log N) levels, which means that the actual cost for insertion is O(log² N).
You can also read about the analysis in section-4 of another paper.

Q1. What if we use space proportional to Θ(N^c) (assume c = 2) to store N elements? What happens to the running time for insert?

A1. Well, it just goes through the roof, since you're now going to be moving elements across a lot of unused cells while you re-balance the array. Additionally, you'll also need to adjust your level density thresholds to increase geometrically rather than arithmetically. Instead, if you use tagging and maintain elements as tags plus pointers to the actual values, you can get better running times when the tag space is polynomial (N^c) in the number of elements in the structure.

Q2. Is the PMA a cache-oblivious data structure?

A2. The PMA is Cache Oblivious, and is used as a building block in other more complex external memory data structures such as the Cache-Oblivious B-Tree.

Implementation: You can find a sample implementation of the PMA here.

Friday, March 08, 2013

Weight Balance for fun and profit

According to Dr. Bender, Weight Balance (different from the wikipedia article on weight-balanced tree) is one of the most important topics in Computer Science, and we were lucky enough to learn it from him! Others might not be so lucky, so let me try and do a job that's hopefully a constant factor (or fraction) as good - since it won't matter because all we care about are the asymptotics.

Pre-requisite: This note on Amortized Analysis.

We'll use weight balance to implement a dictionary structure and examine how the guts of one such structure, the weight-balanced tree, work.

A dictionary data structure is one that supports the following operations:
  1. Insert
  2. Delete
  3. Find
  4. Predecessor
  5. Successor
Points 4 & 5 are what distinguish a dictionary data structure from a keyed data structure such as a Hash Table.

Now, you might have heard of the following data structures that (efficiently) support the operations mentioned above:
  1. Treaps
  2. Skip-Lists
It might (or might not) surprise you to know that both these structures work on the principle of (you guessed it) weight-balance!!



So what exactly do we mean when we talk about the weight of a sub-tree in a BST? Well, as it turns out, the weight of a sub-tree in a BST is just the number of nodes in the sub-tree rooted at that node (including the node itself).

For example, the following tree (image courtesy wikipedia) has a weight of 9
and the sub-tree rooted at node 10 has a weight of 3.

A weight-balanced tree rooted at node u is one in which (either):
  1. The weights of the left and right children of a sub-tree are within constant factors of each other:
    weight(Left-Child(u)) + 1 = Θ(weight(Right-Child(u)) + 1)
    Note that the +1 is important for pedantic reasons as far as the order-notation is concerned.
    OR
  2. The weights of the left and right children of a sub-tree are within constant factors of the weight of the complete sub-tree
    weight(Left-Child(u)) + 1 = Θ(weight(u) + 1) AND
    weight(Right-Child(u)) + 1 = Θ(weight(u) + 1)
It turns out that both these definitions are equivalent.

More realistically, if we stick to the second definition, we have:
weight(Child(u)) + 1 ≥ 0.25 * (weight(u) + 1) AND
weight(Child(u)) + 1 ≤ 0.75 * (weight(u) + 1)

where, Child(u) denotes both the left & right child of u.
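
As a tiny illustration, here is a sketch of checking this invariant (my own names; nodes are assumed to look like { key: ..., left: ..., right: ... } with null for a missing child, and a real implementation would store weights in the nodes instead of recomputing them):

function weight(u) {
  if (u === null) return 0;
  return 1 + weight(u.left) + weight(u.right);
}

function isWeightBalanced(u) {
  if (u === null) return true;
  var w = weight(u) + 1;
  var wl = weight(u.left) + 1;
  var wr = weight(u.right) + 1;
  return wl >= 0.25 * w && wl <= 0.75 * w &&
         wr >= 0.25 * w && wr <= 0.75 * w &&
         isWeightBalanced(u.left) && isWeightBalanced(u.right);
}

// e.g. a 3-node tree with root 2 and leaf children 1 and 3 is balanced:
// isWeightBalanced({ key: 2, left: { key: 1, left: null, right: null },
//                    right: { key: 3, left: null, right: null } })   // true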

For example, if we consider the following example tree, which is clearly out of weight-balance (don't ask me how we got there, because this example is made up), we re-balance it so that it is perfectly in balance (if we have an odd number of nodes) or almost perfectly balanced (otherwise).


We should be careful about how we re-balance these N nodes, because if the cost is any worse than Θ(N), we won't get the update costs that we desire. The easiest way to perform the re-balance with a cost of Θ(N) is to perform an in-order traversal of the sub-tree rooted at node u, writing out the sorted nodes to an array. We can then re-create a perfectly balanced BST from that array, either using recursion or the Day–Stout–Warren algorithm (a small sketch of the recursive version follows).
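
Here is a minimal sketch of the recursive version (same assumed node layout as in the check above):

// Write the keys of the sub-tree rooted at u into 'out' in sorted order: Θ(N).
function inorder(u, out) {
  if (u === null) return;
  inorder(u.left, out);
  out.push(u.key);
  inorder(u.right, out);
}

// Build a perfectly (or almost perfectly) balanced BST from sorted keys: Θ(N).
function build(keys, lo, hi) {
  if (lo > hi) return null;
  var mid = Math.floor((lo + hi) / 2);
  return {
    key: keys[mid],
    left: build(keys, lo, mid - 1),
    right: build(keys, mid + 1, hi)
  };
}

function rebalance(u) {
  var keys = [];
  inorder(u, keys);
  return build(keys, 0, keys.length - 1);
}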

This is where the fun starts!!

Q 1. How many nodes need to be inserted under a sub-tree rooted at node v to bring the sub-tree out of balance (assuming it is perfectly balanced to start off with)? Let's assume that the sub-tree originally contains N nodes.
A. You need to insert some constant fraction of the weight of that sub-tree, which is Ω(N).

Q 2. What is the cost to rebalance the sub-tree rooted at node v if we know that that sub-tree has a weight of N?
A. Well, we already answered this above. The answer is Θ(N).

Q 3. How many sub-trees potentially go out of balance when you insert a node?
A. We know that a node is inserted at the leaf level, so all the sub-trees rooted at the nodes on the leaf-to-root path (with the newly inserted node as the leaf) can potentially go out of balance. This happens to be Θ(log N) nodes.

∴ the amortized re-balancing cost of inserting a new node into the balanced tree is:
Θ(N)/Ω(N) * Θ(log N) = O(log N),
i.e. O(1) amortized work per affected sub-tree, over the Θ(log N) sub-trees on the root-to-leaf path (and walking down to the leaf costs Θ(log N) anyway, so insertion is Θ(log N) amortized overall).

Now, that's a fairly straight-forward algorithm to get the same (amortized) costs as the worst-case costs for updates with a more complicated beast such as an RB-Tree or an AVL-Tree. Though, I feel that Treaps are much simpler to implement.

Tuesday, March 05, 2013

Amortized Analysis or: How I learned to stop worrying and love averages

Amortized Analysis is usually seen as a pretty scary term, and I've seen a lot of people confuse it with Average Case Analysis, but let's try and de-mystify it, one step at a time.

We've performed amortized analysis at some point or another in our lives without actually knowing it. A few lame examples follow:
  • For the stock broker: Purchasing shares at a lower price to average out the cost price of all the holdings of a given stock
  • For the fitness enthusiast: Working out thrice as much at the gym today because [s]he missed 2 days before today
  • For the reader: Reading a few more pages of a book so that you can take a break tomorrow and still complete it on time
There are also situations where amortized analysis doesn't work. For example, if you're in charge of cooking for your family, you can't say that you'll cook thrice as much three days from now and not cook at all between now and then. Sorry, but sometimes it just doesn't work!! These are cases where you must use worst-case bounds.

Let's try some exercise and learn by example!
  1. Ex-1: You're jogging 16 miles every day for 8 days, and your friend jogs 8 miles and 24 miles on every odd and even numbered day respectively (starting from day #1). Who jogs more over a period of 8 days? Here is a graphical representation of how much you and your friend ran over a period of 8 days:

  2. Ex-2: You're jogging 16 miles every day for 7 days, and your friend jogs in the following manner:
    Day    Miles Jogged
     1          2
     2          2
     3          4
     4          8
     5         16
     6         32
     7         64
    Who jogs more over a period of 7 days?

  3. Ex-3: You're playing a game where you have a graph and you start at the node with the symbol S and finish at the node with the symbol F. The constraints on your moves are that you must take EXACTLY ONE blue coloured edge in every move, but you can take as many (or zero) red coloured edges in a move. A move contains a combination of red and blue edges.
    An example graph is shown here:
    One trace of a successful completion of a game that visited 11 blue edges and 4 red edges is shown here:
    Can you identify the maximum number of edges of any colour that will ever be visited if I tell you that exactly 107 blue edges were taken in a particular successful completion of the game?

If you got this far, and were able to solve all the exercises - congratulations! - you've understood what amortized analysis is all about! And as an added benefit, Ex-2 is how one would go about analyzing the insertion cost for Dynamic Arrays, and Ex-3 is actually how one would analyze the running time for the KMP string matching algorithm!
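
To make the Ex-2 connection concrete, here is a toy javascript sketch (names are mine; real javascript arrays already do something like this under the hood) of a dynamic array that doubles its capacity when full. Most appends touch a single slot, and the occasional expensive copy is paid for by the cheap appends that preceded it, just like the 2, 2, 4, 8, 16, ... jogging schedule:

function DynamicArray() {
  this.capacity = 1;
  this.length = 0;
  this.slots = new Array(this.capacity);
}

DynamicArray.prototype.append = function (value) {
  if (this.length === this.capacity) {
    // Expensive step: copy everything into an array twice as large.
    // This costs O(length), but happens only after ~length cheap appends,
    // so the amortized cost per append is O(1).
    this.capacity *= 2;
    var bigger = new Array(this.capacity);
    for (var i = 0; i < this.length; ++i) {
      bigger[i] = this.slots[i];
    }
    this.slots = bigger;
  }
  this.slots[this.length++] = value;   // cheap step: O(1)
};

var a = new DynamicArray();
for (var i = 0; i < 10; ++i) {
  a.append(i);
}
console.log(a.length, a.capacity);   // 10 16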

PV=nRT or: How I learned to stop worrying and love cooking under pressure

I'll try to be as brief as possible here. I'm just going to talk about the physics behind Pressure Cooking and why it's so cool (and hipster). Hey, it saves fuel and hence damages the environment to a much lesser extent than normal cooking does.

From How Does A Pressure Cooker Work?: "Simply put, water boils at 212°F (100°C). At this point, no matter how long you continue to boil, it always stays the same temperature. As the water evaporates and becomes steam it is also the same temperature, 212°F.

The only way to make the steam hotter (and/or to boil the water at a higher temperature) is to put the system under pressure. This is what a pressure cooker does. If we fit an absolutely tight cover to the pan so no steam can escape while we continue to add heat, both the pressure and temperature inside the vessel will rise. The steam and water will both increase in temperature and pressure, and each fluid will be at the same temperature and pressure as the other. "


To explain the last paragraph above, let's turn to physics and the Ideal Gas Law, which states that PV=nRT where,
P=Pressure
T=Temperature
and we don't really care about what the other symbols mean.

This means that, for a fixed volume and amount of gas, Pressure and Temperature vary directly with each other, and if you raise the pressure on a fluid, then the temperature at which it changes state will also increase!!

Saturday, February 23, 2013

Parvorder Platyrrhini or: How I learned to stop worrying and love monkeys

As usual, this is going to be short.


I was coming back by train from Charni Road to Churchgate after a swim at Mafatlal Bath when a eunuch with a baby monkey (maybe less than 2 years old) walked in and started asking for money.


At first, I just ignored them since I had something going on in my head, but eventually they got my attention when the monkey started acting acrobatically in the train, swinging from pole to pole and climbing the inner walls of the railway carriage (bogie in India).


I walked towards them and quietly handed the owner a ₹5 coin (that's about $0.1). What followed was pure ecstasy! The monkey climbed on me starting from the feet upwards and used my shorts and tee-shirt as support. I found it lodged in my arms like I would hold a human baby. It then walked all over my shoulders and lodged itself on my head (I had just cut my hair super tiny, so I guess that helped). It sat there patiently; probably observing the people around. I'm sure that the others felt as if I was the owner of the monkey! Never ever have I had something like this happen to me, so I was absolutely overjoyed with the proceedings. I wasn't even worried about getting all dirty right after my swim because this was just as fantastic a feeling as I could (not) think of!


Eventually, it was time to step off the train and the owner of the monkey started tugging at its leash, but the monkey would not budge! ha! After a few sharp tugs though, the baby relented and it was time to part ways.


Reminds me of:

Thursday, February 21, 2013

A great day for freedom

Today, I took Abbas, my dear dear dear friend (and a Muslim), to a pool where, for the longest time, only Hindus were permitted to congregate and bathe. Only in the last 2 years has this ban on other religions (which I always felt was unhealthy to start off with) been lifted. Sense has been knocked into certain quarters!


It feels quite liberating to be able to have the option of not worrying about religion, etc... before associating with people. It's a person for crying out loud!! I remember my days in school where we didn't know anything about the concept of religion and never ever let that feature in the equation of friendship or acquaintanceship.


Why did we let religion govern our lives and let it stop us from doing harmless things we otherwise would have?


हिन्दू, मुसलिम, सिख, इसाई हम सब भाई भाई हैं । (Hindus, Muslims, Sikhs, Christians: we are all brothers.)