Saturday, December 07, 2013
Better algorithms v/s micro-optimization
All the little ones (including me) went first (since the game was primarily meant to keep us busy). I had the top score, with a little over 70 bounces in 60 seconds. What I did to get there was try to go as fast as I possibly could without losing control of the ball, while focusing heavily on where the ball was at all times.
Then the grown-ups started, and for a while no one was able to beat that score. Then one smart dude knelt down when the signal to start was given. Everyone else had already started bouncing their balls and was into their second bounce, while this guy was taking his own time getting settled in his squatting position. When he was ready, he started bouncing the ball, and boy did he go fast! He had just out-smarted everyone else with a better algorithm for getting more bounces in the same time duration.
Better algorithms are like bicycles for the mind.
Before we had sorting algorithms that ran efficiently [O(n log n)], we had micro-optimizations applied to every known O(n²) sorting algorithm in an attempt to make it perform fewer comparisons, or exit early, and hence run faster. Fixing the algorithm, however, was the real game changer.
Saturday, October 26, 2013
Javascript as a language is really darn good
- Javascript is a small and minimal language with few (redundant) constructs and can be taught and remembered easily. This is useful for someone who is trying to use javascript to write code as well as someone who is trying to read javascript code. When you are expressing your ideas, you want to know what tools you have and want to be able to fit them in your head. At the same time, when you (or someone else) (re)visit(s) your code, you want the reverse transfer of information (i.e. code to idea) to be quick and unambiguous and the reader shouldn't get confused trying to figure out the language syntax and semantics. To explain,
- The fact that functions are first-class objects and that the lambda syntax isn't different (or otherwise special) compared to the standard function definition syntax helps.
- As does the fact that objects/maps (hashes), arrays, etc... all have the same get/set syntax.
- There's very little fluff in the language. Most keywords don't feel like they're useless. The only ones I find excessive are 'new', 'prototype', and 'function'. Coffeescript solves this to some extent.
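To make the first two points concrete, here is a tiny sketch (all the names are made up) showing a lambda assigned like any other value, and the same get/set syntax working across objects and arrays:

// First-class functions: a "lambda" is just a function expression
// assigned to a variable, using the same 'function' syntax as always.
var square = function (x) { return x * x; };

// Higher-order use: pass it around like any other value.
var squares = [1, 2, 3].map(square);         // [1, 4, 9]

// Objects (maps) and arrays share the same get/set syntax.
var person = { 'name': 'blogger' };
person['age'] = 5;                           // set on an object
squares[0] = 100;                            // set on an array
console.log(person['name'], squares[0]);     // get on both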
Since there is no main() function in a node.js script, you don't need to wrap all your executable code in a main() function. You won't believe how incredibly useful I find this, even though I have programmed in C and am used to that style of programming. This also means that each file can have its own test() function that is run by just invoking:
$ node file_name.js
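For instance, here's a minimal sketch of that pattern (the file contents and function names are mine); the require.main check makes the tests run only when the file is executed directly, not when it is require()d by another module:

var assert = require('assert');

function add(a, b) { return a + b; }

function test() {
  assert.equal(add(2, 3), 5);
  console.log('all tests passed');
}

// Runs only via `node file_name.js`, not when require()d.
if (require.main === module) {
  test();
}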
Declaring arrays and objects (maps) in javascript is really easy, and anything that can reasonably be done in one line is done in one line. For example, declaring an array a that has the 4 elements 44, 31, 21, and 23 is:
var a = [ 44, 31, 21, 23 ];
Declaring an object o that maps key 'name' to 'blogger', key 'url' to 'http://blogger.com/', and key 'age' to 5 is as simple as:
var o = { 'name': 'blogger', 'url': 'http://blogger.com/', 'age': 5 };
Notice how values in maps can be of different types. This is also true of elements in an array.
Converting a number (or any other type) to a string isn't rocket science either.
var sint = String(89);
To test my belief, I've started writing a toy regular expression parser and evaluation engine in javascript that will have some intentional holes (un-implemented features) that you can fill in later, so that you can get a feel both for how regular expressions work and for how javascript handles as a language.
Sunday, April 21, 2013
Inside the guts of Kadane's algorithm OR Understanding the Maximum Sum Subarray Problem
Let's try to understand how it really works. If we are given the problem of finding the maximum sum sub-array of a given array, the first naive approach we can try is the O(n²) algorithm of starting at every array index and computing the sum from that index to every index after it. This works, and gives us the correct answer, but we should ask ourselves if we can exploit certain properties that the problem might have to try and speed up the solution.
Let's try and dig deeper into an example.
Consider the following array & its corresponding cumulative sum subarray:
Element | 10 | -5 | -2 | 7 | 1 | -5 | -3 | 2 | 4 | -3 | 6 | -21 | 5 | -2 | 1 |
Cumulative Sum | 10 | 5 | 3 | 10 | 11 | 6 | 3 | 5 | 9 | 6 | 12 | -9 | -4 | -6 | -5 |
Some observations:
- A maximum sum sub-array can never have a negative number as one of the elements at the end-points, except of course if every element in the array is a negative number. If a maximum sum sub-array had a negative number on one of its end-points, we could remove that element and increase the value of the sum, thus getting a sub-array with a larger sum. In other words, a maximum sum sub-array always starts and ends with a non-negative number, unless all the numbers in the array are negative.
- We can always clump together runs of consecutive negative numbers and runs of consecutive non-negative numbers, since within a run of negative numbers the running sum only drops, and within a run of non-negative numbers it never drops.
- If the running sum of a sub-array ever falls below zero, no solution will ever include the negative number that caused the sum to fall below zero, since it over-powers the positive sum accumulated before it. Note: we only speak of one negative number because of the clumping point above.
- An extension of the point above implies that the new running sum that begins once a cumulative sum falls below zero always starts from the immediately following non-negative number.
MaxSum[i] is the maximum sum of a sub-array ending at index i:
MaxSum[i] = Array[i]                        if i = 0
MaxSum[i] = MaxSum[i-1] + Array[i]          if MaxSum[i-1] + Array[i] ≥ 0
MaxSum[i] = 0                               if MaxSum[i-1] + Array[i] < 0
The re-written array (which now consists strictly of alternating negative and non-negative numbers) and the new cumulative sum sub-array are (the row "Cumulative Sum" below represents the MaxSum variable above):
Index | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
Element | 10 | -7 | 8 | -8 | 6 | -3 | 6 | -21 | 5 | -2 | 1 |
Cumulative Sum | 10 | 3 | 11 | 3 | 9 | 6 | 12 | 0 | 5 | 3 | 4 |
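To tie the recurrence and the tables together, here is a minimal javascript sketch (the function name is mine); it folds the i = 0 base case into the reset rule, so an all-negative array reports 0:

function maxSubarraySum(array) {
  var maxSum = 0;  // MaxSum[i]: best sum of a sub-array ending at index i
  var best = 0;    // the largest MaxSum[i] seen so far
  for (var i = 0; i < array.length; i++) {
    maxSum = Math.max(maxSum + array[i], 0);  // reset to 0 when the sum goes negative
    best = Math.max(best, maxSum);
  }
  return best;
}

// The example array from the first table above; prints 12.
console.log(maxSubarraySum([10, -5, -2, 7, 1, -5, -3, 2, 4, -3, 6, -21, 5, -2, 1]));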
Sunday, March 17, 2013
How to give a talk/presentation - Wisdom from Dr. Bender
This Spring, I took a course called CSE638 (Advanced Algorithms) under Professor Michael Bender. This was a mostly research-oriented course, with a focus on doctoral students. An important part of doing academic research is talking about/presenting your material at conferences and seminars. Professor Bender spent a couple of classes discussing this, and I thought I'd list down the stuff he mentioned - more of a note to myself.
- Make the talk prefix-competitive
If somebody dozes off, the person should have got the best part of your presentation/content before he/she dozed off. Bring the best part of your talk to the front. Make it prefix-competitive.
- Stand in front of the screen, Smile :)
Do not stand away from the screen, in some corner behind the podium. Stand in front of the screen. Let the Projector's display glow on your face. Stand confidently in front of the screen - okay now, don't block the screen, but stand to the edge of the screen, and Smile while you speak. Convey enthusiasm. "People like seeing faces". Facebook has become popular for a reason ;)
- Diagrams and Figures!
Have plenty of diagrams and figures on each slide.
- Refer to every slide & everything on each slide
Refer to each and every item on your slide. If there is a diagram, explain everything. If you're not going to explain something, refer to it and say that you're not going to explain it. Either way, please do refer to everything on your slides!
- Touch the screen, point to it
Don't be afraid of the screen. Touch it. While explaining the diagrams and figures, stand over the screen and explain stuff by physically touching the screen with your fingers/hands. That very act conveys confidence.
- Results Up Front
If your talk is about the results, let the results be as far upfront as possible. Build the context as soon as possible, and announce what you did.
- Explain the Title
Spend time explaining the title. And the background. This might seem to conflict with point-4, but it doesn't. That's the trick.
- Give credit wherever possible, and do this in the beginning
When giving credit to others, please try to do this when you start rather than when you end. People like hearing about other people. Did I mention something about facebook earlier?
- Explain why the problem is important
This I feel is one of the biggest take-aways from the talk. Even if the audience doesn't understand the solution, they should understand why we need a solution in the first place.
- Make use of plots effectively
Explain the axes, and know what to plot. Refer to this post by Gaurav to know more about what this means.
- Know your audience
For example, presenting the workings of a toaster to a homemaker is different from presenting it to an electrical engineer. You'll need to motivate the problem differently in both cases. The same goes for presenting it to someone who has had toast before v/s someone who hasn't. I find that Dr. Dan Gusfield does a brilliant job of motivating problems before presenting the solutions (related to point-7).
- There are a couple more about Jokes, Color Schemes, etc. which I can't recall.
PS - If you're from Stony Brook, and find that I've missed something, feel free to write in the comments. Thanks!
Update: Found this article online.
Thursday, March 14, 2013
PMA: The Packed Memory Array
The Packed Memory Array (pdf) is an interesting structure to say the least, since it keeps its elements almost contiguously (and in sorted order) in memory, with no additional indirection via pointers like a usual array, and still allows one to perform fast - O(log² N) - updates to the array.
We know that inserting elements in sorted order into an array without keeping gaps between consecutive elements costs O(n) per insert, whereas searching for an element can be done using binary search with a cost of O(log n). These are tight upper bounds, but the story is a little different for randomly arranged data. If one is inserting random data into a sorted array with gaps being maintained between consecutive elements, the expected time to insert a single element magically falls to O(log n)! Now, what just happened here? To find out, read more in Bender, Farach-Colton, and Mosteiro's paper titled Insertion Sort is O(n log n) (pdf). On the other hand, if we don't permit gaps between elements, then even for random data the expected total cost for inserting n elements into an array in sorted order is O(n²) - why? (hint: refer to the expected-case analysis of quick-sort).
The simple idea is to not pack all elements together, but to maintain some gap between consecutive elements. We shall see that if we follow this simple idea, then the cost for insertion falls to O(log² n) amortized worst-case. This is the packed-memory-array (PMA). We however need to formalize the idea a bit and set some rules of the game before we get ahead of ourselves.
We'll start off by assuming that we already have a PMA that holds n valid elements. One of the invariants we have for the PMA is that it should be more than 0.25x full (this is called the fullness threshold), i.e. if the PMA has space for 4n elements, then there should be at least n actual elements in the PMA. Any less, and we should re-size the PMA to have space for 2n (not n) elements (this is also part of the fullness threshold). The reason we maintain extra space in the PMA is so that we can re-balance, and so that re-balances involving a lot of elements don't happen too frequently.
Let's just focus on insertions for now. The PMA is organized as a contiguous array of slots which might be used or free. Conceptually, we break this array of size N into N/log N blocks, with each block holding log N elements. We'll see why this is helpful. If we look at a PMA as being made up of such blocks of size log N each, then we can view the PMA as a binary tree (conceptually) with each level having different fullness thresholds.
The algorithm for inserting elements relies heavily on the upper density threshold whereas the algorithm for deleting elements relies heavily on the lower density thresholds. For the sake of brevity, I shall only discuss insertion (not deletion).
Algorithm: When we insert elements into the PMA, we follow these steps:
- Locate the position to insert the element into. We either know this before-hand or we perform a binary search, which costs O(log² N).
- If the cell we want to insert into is free, we just add the element, and mark the cell as used. We are done!
- If however, the cell is used, we compute the upper density threshold for the smallest block (of size log N) that the desired cell falls within, and check if the upper density threshold would be violated. If we notice that there is no violation, we just re-balance all elements including the new one into that block. We are done. If we violate the upper density threshold, we consider a block twice as large (which includes the cell we will be inserting into) and check if the density threshold is violated. We repeatedly move up till we find a chunk for which the upper density threshold is not violated.
- If we fail to find such a chunk, we just allocate an array twice as large and neatly copy all the existing elements into the new array, with constant-sized gaps between elements!
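Here's a toy, insertion-only sketch of those steps in javascript. It is a sketch under simplifying assumptions, not the real thing: the capacity and segment size are tiny fixed powers of two (a real PMA uses segments of size ~log N), the position search is a linear scan, and the 0.5-to-1.0 threshold range is an illustrative choice:

function PMA() {
  this.capacity = 8;                 // total slots (kept a power of two)
  this.segSize = 4;                  // leaf segment size (stands in for ~log N)
  this.cells = new Array(this.capacity).fill(null);
}

// Upper density threshold for a window 'height' levels above a leaf segment:
// leaves may fill up to 100%, the whole array only up to 50%, and the
// thresholds in between fall off arithmetically, as described above.
PMA.prototype.upperThreshold = function (height) {
  var maxHeight = Math.log2(this.capacity / this.segSize);
  if (maxHeight === 0) return 1.0;
  return 1.0 - 0.5 * (height / maxHeight);
};

PMA.prototype.insert = function (value) {
  // Step 1: locate the cell to insert after (linear scan for brevity).
  var idx = 0;
  for (var i = 0; i < this.capacity; i++) {
    if (this.cells[i] !== null && this.cells[i] <= value) idx = i;
  }

  // Steps 2 & 3: walk up through ever-larger windows until one can absorb
  // the new element without violating its upper density threshold.
  var start = Math.floor(idx / this.segSize) * this.segSize;
  var size = this.segSize;
  var height = 0;
  while (size <= this.capacity) {
    var used = 0;
    for (var j = start; j < start + size; j++) {
      if (this.cells[j] !== null) used++;
    }
    if ((used + 1) / size <= this.upperThreshold(height)) {
      this.rebalance(start, size, value);   // spread the window out evenly
      return;
    }
    size *= 2;
    start = Math.floor(start / size) * size;
    height++;
  }

  // Step 4: even the whole array is too dense; double it and re-spread.
  var all = this.cells.filter(function (x) { return x !== null; });
  all.push(value);
  all.sort(function (a, b) { return a - b; });
  this.capacity *= 2;
  this.cells = new Array(this.capacity).fill(null);
  this.spread(0, this.capacity, all);
};

// Collect the window's elements plus the new value, then spread them evenly.
PMA.prototype.rebalance = function (start, size, value) {
  var vals = [];
  for (var i = start; i < start + size; i++) {
    if (this.cells[i] !== null) vals.push(this.cells[i]);
  }
  vals.push(value);
  vals.sort(function (a, b) { return a - b; });
  this.spread(start, size, vals);
};

PMA.prototype.spread = function (start, size, vals) {
  for (var i = start; i < start + size; i++) this.cells[i] = null;
  var gap = size / vals.length;               // >= 1 by the threshold check
  for (var k = 0; k < vals.length; k++) {
    this.cells[start + Math.floor(k * gap)] = vals[k];
  }
};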
Pre-requisite: Weight balance for fun and profit
- The upper (and lower) density thresholds are arranged so that they grow arithmetically from the top (root) level to the bottom (leaf) level.
- The difference in density thresholds is 0.5, and we have log N levels, so if we want to maintain a constant arithmetic difference between levels, there must be a difference of 0.5/log N between each level. This is a difference of Δ = O(1/log N) between each level.
- Q. How many elements must be inserted at a certain level to bring it out of balance?
A. Clearly, if a level has room for N elements and it is out of balance, that could only have happened if it went from being in balance to being out of balance, which means that Θ(ΔN) elements were inserted into this level.
- Q. How many element moves do we need to bring a level back into balance?
A. If a level is out of balance, we typically go up until a level within its density thresholds is located and re-balance it. Ideally, going up one level should do the trick, so to re-balance a level containing N elements, Θ(N) operations are sufficient.
- Therefore, the amortized cost to re-balance a level is O(N / ΔN) = O(log N).
- However, we must not forget that an element insertion affects the thresholds of O(log N) levels, which means that the actual amortized cost for insertion is O(log² N).
Q1. What if we use space proportional to Θ(N^c) (assume c = 2) to store N elements? What happens to the running time for insert?
A1. Well, it just goes through the roof, since you're now going to be moving elements across a lot of unused cells while you re-balance the array. Additionally, you'll need to adjust your level density thresholds to be not arithmetically increasing, but geometrically increasing. If, instead, you use tagging and maintain elements as tags plus pointers to the actual values, you can get better running times when the tag space is polynomial (N^c) in the number of elements in the structure.
Q2. Is the PMA a cache-oblivious data structure?
A2. The PMA is Cache Oblivious, and is used as a building block in other more complex external memory data structures such as the Cache-Oblivious B-Tree.
Implementation: You can find a sample implementation of the PMA here.
Friday, March 08, 2013
Weight Balance for fun and profit
Pre-requisite: This note on Amortized Analysis.
We'll use weight balance to implement a dictionary structure, and examine how the guts of one such structure, the weight-balanced tree, work.
A dictionary data structure is one that supports the following operations:
- Insert
- Delete
- Find
- Predecessor
- Successor
Now, you might have heard of the following data structures that (efficiently) support the operations mentioned above:
It would surprise you (or maybe not) to know that both these structures work on the principle (guess) of weight-balance!!
So what exactly do we mean when we talk about the weight of a sub-tree in a BST? Well, as it turns out, the weight of the sub-tree in a BST is just the count of the number of nodes in the sub-tree rooted at that node (including the node itself).
For example, the following tree (image courtesy wikipedia) has a weight of 9
A weight-balanced tree rooted at node u is one in which (either):
- The weights of the left and right children of a sub-tree are within constant factors of each other:
weight(Left-Child(u)) + 1 = Θ(weight(Right-Child(u)) + 1)
Note that the +1 is important for pedantic reasons as far as the order-notation is concerned.
OR
- The weights of the left and right children of a sub-tree are within constant factors of the weight of the complete sub-tree:
weight(Left-Child(u)) + 1 = Θ(weight(u) + 1) AND
weight(Right-Child(u)) + 1 = Θ(weight(u) + 1)
More realistically, if we stick to the second definition, we have:
weight(Child(u)) + 1 ≥ 0.25 * (weight(u) + 1) AND
weight(Child(u)) + 1 ≤ 0.75 * (weight(u) + 1)
where, Child(u) denotes both the left & right child of u.
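As a quick illustration, here's what that check looks like in javascript (the node shape { key, left, right } is an assumption of mine, and recomputing weights on every call is the naive way; a real implementation would cache the weight in each node):

// weight(u): number of nodes in the sub-tree rooted at u.
function weight(u) {
  return u === null ? 0 : 1 + weight(u.left) + weight(u.right);
}

// The second definition above, with the 0.25/0.75 constants.
function isWeightBalanced(u) {
  if (u === null) return true;
  var w = weight(u) + 1;
  var wl = weight(u.left) + 1;
  var wr = weight(u.right) + 1;
  return wl >= 0.25 * w && wl <= 0.75 * w &&
         wr >= 0.25 * w && wr <= 0.75 * w;
}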
For example, if we consider the following example tree, which is clearly out of weight-balance (don't ask me how we got there, because this example is made-up), we re-balance it to be perfectly in balance (if we have an odd number of nodes) or almost perfectly balanced (otherwise).
We should be careful about how we re-balance these N nodes, because if the cost is any worse than Θ(N), then we won't get the update costs that we desire. The easiest way to perform the re-balance with a cost of Θ(N) is to perform an in-order traversal of the subtree rooted at node u, and write out the sorted nodes to an array. We can then re-create a perfectly balanced BST from that array either using recursion or the Day–Stout–Warren algorithm.
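Here's a minimal sketch of that Θ(N) rebuild (the recursive version, with the same assumed node shape as above): flatten with an in-order walk, then repeatedly pick the middle element as the root:

function Node(key, left, right) {
  this.key = key;
  this.left = left || null;
  this.right = right || null;
}

// In-order traversal: writes out the keys in sorted order, costing Θ(N).
function flatten(u, out) {
  if (u === null) return out;
  flatten(u.left, out);
  out.push(u.key);
  flatten(u.right, out);
  return out;
}

// Rebuild a perfectly (or almost perfectly) balanced BST from keys[lo..hi].
function buildBalanced(keys, lo, hi) {
  if (lo > hi) return null;
  var mid = lo + Math.floor((hi - lo) / 2);
  return new Node(keys[mid],
                  buildBalanced(keys, lo, mid - 1),
                  buildBalanced(keys, mid + 1, hi));
}

function rebalance(u) {
  var keys = flatten(u, []);
  return buildBalanced(keys, 0, keys.length - 1);
}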
This is where the fun starts!!
Q 1. How many nodes need to be inserted under a sub-tree rooted at node v to bring the sub-tree out of balance (assuming it is perfectly balanced to start off with)? Let's assume that the sub-tree originally contains N nodes.
A. You need to insert some constant fraction of the weight of that sub-tree, which is Ω(N).
Q 2. What is the cost to rebalance the sub-tree rooted at node v if we know that that sub-tree has a weight of N?
A. Well, we already answered this above. The answer is Θ(N).
Q 3. How many sub-trees potentially go out of balance when you insert a node?
A. We know that a node is inserted at the leaf level, so all the sub-trees rooted at the nodes on the leaf-to-root path (starting from the newly inserted node) can potentially go out of balance. This happens to be Θ(log N) nodes.
∴ the amortized cost to insert a new node into the balanced tree is:
Θ(N)/Ω(N) * Θ(log N) = O(log N).
Now, that's a fairly straight-forward algorithm that gets the same (amortized) costs as the worst-case costs for updates of a more complicated beast such as an RB-Tree or an AVL-Tree. Though, I feel that Treaps are much simpler to implement.
Tuesday, March 05, 2013
Amortized Analysis or: How I learned to stop worrying and love averages
We've performed amortized analysis at some point or another in our lives without actually knowing it. A few lame examples follow:
- For the stock broker: Purchasing shares at a lower price to average out the cost price of all the holdings of a given stock
- For the fitness enthusiast: Working out thrice as much at the gym today because [s]he missed 2 days before today
- For the reader: Reading a few more pages of a book so that you can take a break tomorrow and still complete it on time
- Ex-1: You're jogging 16 miles every day for 8 days, and your friend jogs 8 miles and 24 miles on every odd and even numbered day respectively (starting from day #1). Who jogs more over a period of 8 days? Here is a graphical representation of how much you and your friend ran over a period of 8 days:
- Ex-2:You're jogging 16 miles every day for 7 days, and your friend jogs in the following manner:
Day | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
Miles Jogged | 2 | 2 | 4 | 8 | 16 | 32 | 64 |
- Ex-3: You're playing a game where you have a graph and you start at the node with the symbol S and finish at the node with the symbol F. The constraints on your moves are that you must take EXACTLY ONE blue coloured edge in every move, but you can take as many (or zero) red coloured edges in a move. A move contains a combination of red and blue edges.
An example graph is shown here:
If you got this far, and were able to solve all the exercises - congratulations! - you've understood what amortized analysis is all about! And as an added benefit, Ex-2 is how one would go about analyzing the insertion cost for Dynamic Arrays, and Ex-3 is actually how one would analyze the running time for the KMP string matching algorithm!
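To connect Ex-2 back to code, here is a bare-bones dynamic array sketch in javascript (plain javascript arrays already grow on their own, so this is purely illustrative). Each push costs O(1) amortized, because every O(n) regrow is paid for by the n cheap pushes that came before it: exactly the doubling pattern in the Ex-2 table.

function DynArray() {
  this.store = new Array(1);   // backing storage
  this.length = 0;             // number of slots actually used
}

DynArray.prototype.push = function (x) {
  if (this.length === this.store.length) {
    // Full: copy everything into a store twice the size (the expensive step).
    var bigger = new Array(this.store.length * 2);
    for (var i = 0; i < this.length; i++) bigger[i] = this.store[i];
    this.store = bigger;
  }
  this.store[this.length++] = x;  // the cheap O(1) step
};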
PV=nRT or: How I learned to stop worrying and love cooking under pressure
From How Does A Pressure Cooker Work?: "Simply put, water boils at 212°F (100°C). At this point, no matter how long you continue to boil, it always stays the same temperature. As the water evaporates and becomes steam it is also the same temperature, 212°F.
The only way to make the steam hotter (and/or to boil the water at a higher temperature) is to put the system under pressure. This is what a pressure cooker does. If we fit an absolutely tight cover to the pan so no steam can escape while we continue to add heat, both the pressure and temperature inside the vessel will rise. The steam and water will both increase in temperature and pressure, and each fluid will be at the same temperature and pressure as the other. "
To explain the last paragraph above, let's turn to physics and the Ideal Gas Law, which states that PV = nRT, where:
P | = | Pressure |
V | = | Volume |
n | = | Amount of gas (in moles) |
R | = | The ideal gas constant |
T | = | Temperature |
This means that pressure and temperature vary directly with each other (at a fixed volume and amount of gas), and if you raise the pressure on a fluid, then the temperature at which it changes state will also increase!!
Saturday, February 23, 2013
Parvorder Platyrrhini or: How I learned to stop worrying and love monkeys
As usual, this is going to be short.
I was coming back by train from Charni Road to Churchgate after a swim at Mafatlal Bath when a eunuch with a baby monkey (maybe less than 2 years old) walked in and started asking for money.
At first, I just ignored them since I had something going on in my head, but eventually they got my attention when the monkey started acting acrobatic in the train, swinging from pole to pole and climbing the inner walls of the railway carriage (bogie, in India).
I walked towards them and quietly handed the owner a ₹5 coin (that's about $0.1). What followed was pure ecstasy! The monkey climbed on me starting from the feet upwards and used my shorts and tee-shirt as support. I found it lodged in my arms like I would hold a human baby. It then walked all over my shoulders and lodged itself on my head (I had just cut my hair super tiny, so I guess that helped). It sat there patiently; probably observing the people around. I'm sure that the others felt as if I was the owner of the monkey! Never ever have I had something like this happen to me, so I was absolutely overjoyed with the proceedings. I wasn't even worried about getting all dirty right after my swim because this was just as fantastic a feeling as I could (not) think of!
Eventually, it was time to step off the train and the owner of the monkey started tugging at its leash, but the monkey would not budge! ha! After a few sharp tugs though, the baby relented and it was time to part ways.
Thursday, February 21, 2013
A great day for freedom
Today, I took Abbas, my dear dear dear friend (and a Muslim) to a pool where for the longest time, only Hindus were permitted to congregate and bathe. This ban on other religions (which I felt was unhealthy to start off with) has been lifted only in the last 2 years. Sense has been knocked into certain quarters!
It feels quite liberating to be able to have the option of not worrying about religion, etc... before associating with people. It's a person for crying out loud!! I remember my days in school where we didn't know anything about the concept of religion and never ever let that feature in the equation of friendship or acquaintanceship.
Why did we let religion govern our lives and let it stop us from doing harmless things we otherwise would have?
Hindus, Muslims, Sikhs, Christians: we are all brothers.