Sunday, December 09, 2012

Better Index Compression

It's late, so I'll keep it short and to the point.

An index is something that lets you get at data quickly. I consider both a B-Tree and a sorted array to be indexes. The time to look up an element in a sorted array is O(log2 N), whereas in a B-Tree it is O(logB N), where N & B mean their usual things.

  1. If we just consider static structures (a sorted array, for example), and we wish to compress the data but maintain the retrieval speed, one can think of compressing blocks of size B and then performing a binary search over the blocks. There are O(N/B) such blocks, so the cost of performing a binary search is O(log2 (N/B)), which involves decompressing O(log2 (N/B)) blocks of size B each.

    Instead of B, if we choose O(log2 N) elements to fit in a block (intuitively, assume that O(log2 N) = kB for some constant k > 0), we end up with O(N/log2 N) blocks, each of size O(log2 N). The asymptotic cost of a query remains the same, but we decompress O(log2 (N/log2 N)) blocks this time, which is slightly better than before.
  2. If, OTOH, we sample out every O(log2 N)-th element from the sorted array, create an uncompressed array from these sampled elements, and make each sampled element point to a compressed block containing all the elements between it and the next sampled element, then each sampled element points to a compressed block containing O(log2 N) elements.

    Queries can be answered by performing a normal binary search on the array of sampled elements, after which at most 1 decompression operation is sufficient to locate the element in its compressed block. This is better than the O(log2 (N/log2 N)) blocks we had to decompress in the earlier scheme. The asymptotic cost of a query remains the same, but we gain a lot by performing exponentially fewer decompression operations! (A small sketch of this scheme appears right after this list.)

    Credits to Anirban for this explanation: Another way to look at it is to imagine a B-Tree built from these sampled elements, and each node in this B-Tree points to a compressed block containing O(log2 N) elements. Alternatively, this can also be seen as a B-Tree built in such a way that internal nodes are stored uncompressed, but the leaf nodes are stored compressed.
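
Here is a minimal Python sketch of the second scheme. The element values, the block size, and the use of zlib/pickle are illustrative assumptions only, not a description of any particular system:

import bisect
import pickle
import zlib
from math import log2

def build_index(sorted_elems):
    """Sample every ~log2(N)-th element; compress the run following each sample."""
    n = len(sorted_elems)
    block = max(1, int(log2(n)))
    samples, blocks = [], []
    for i in range(0, n, block):
        run = sorted_elems[i:i + block]
        samples.append(run[0])                           # uncompressed sampled element
        blocks.append(zlib.compress(pickle.dumps(run)))  # compressed block of ~log2(N) elements
    return samples, blocks

def lookup(samples, blocks, key):
    """Binary search over the samples, then decompress at most one block."""
    i = bisect.bisect_right(samples, key) - 1
    if i < 0:
        return False
    return key in pickle.loads(zlib.decompress(blocks[i]))

samples, blocks = build_index(list(range(0, 2000, 2)))  # toy sorted data
print(lookup(samples, blocks, 1234), lookup(samples, blocks, 1235))  # True False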

Monday, November 05, 2012

Twitter API: Y U No respond fast & why

There is a very interesting post about an application developer running into issues with the Twitter API timeout of 5 sec. I would encourage everyone to read it if you haven't already.

Based on what I read (and some assumptions), I am going to try and comment on what the problem could be and potential solutions. I am greatly interested in criticisms of the analysis and possibly different ways of looking at the problem and possible solutions to it.

[A] Assumption: Twitter stores tweets for its users in an Inbox for every user. So, every time someone I am following makes a tweet, it is copied into my Inbox. This means that if 1M people follow me, my tweet is copied to the Inbox of 1M people. Let's call this value the fanout of the tweet.

Every time Twitter needs to copy a tweet I make to the Inboxes of my followers, it needs to fetch the list of people that are following me. It does this by querying the database for the list of people that have me as one of the people they are following. MySql (and other databases) do not allow you to perform efficient offset-based queries, and you need to either:
  1. Use cursors (which tie up server resources for the duration of the complete request) or
  2. Perform a complete scan of all records till the requested record offset
every time an offset based query is made.

This is fairly wasteful of resources, whichever way you look at it, when in fact better solutions exist as mentioned in the post above.
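
For concreteness, here is a toy Python cartoon of the fan-out-on-write assumption in [A]. None of this is Twitter's actual code; it only illustrates why a single tweet can translate into a huge number of inbox writes:

from collections import defaultdict

followers = defaultdict(set)     # user -> set of users following them
inbox = defaultdict(list)        # user -> tweets delivered to (copied into) their Inbox

def follow(follower, followee):
    followers[followee].add(follower)

def tweet(author, text):
    # Fan-out-on-write: one copy of the tweet lands in every follower's Inbox,
    # so an author with 1M followers triggers 1M inbox writes.
    for user in followers[author]:
        inbox[user].append((author, text))

follow("alice", "bob")
follow("carol", "bob")
tweet("bob", "hello, world")
print(inbox["alice"], inbox["carol"])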

[B] Assumption: When you perform an API request to Twitter to fetch the list of followers for a particular user, I am guessing that Twitter looks up a cache to locate the list and if it is unable to find it in the cache, it goes and looks up the DB for the list of these followers.

The reason that the API returns only the list of follower IDs (instead of complete user information) is so that they can avoid Point Queries, which are notoriously inefficient. Of course, network bandwidth could be another reason.

[C] Now that we have established the parameters and behaviour of the system with which we are dealing, we can try and think of a solution.

  1. Ensure that the list of followers we want to query are already in the cache so that we don't query the DB for the list. This can be done by making a web-request to the API without your credentials and then making the same request again with your credentials so that you aren't charged for the request that loads data into the cache. This will not always work, and Twitter might ban your IP.

  2. Binary search over the size of the result set that Twitter can reliably return for a user. Start with 1 follower, and ramp up to 2, 4, 8, 16, etc... (doubling on every request) till Twitter times out. Subsequently, perform a binary search between the size that succeeded (say 128) and the size that failed (say 256). You are guaranteed to have fetched an average of 36 follower IDs/request before you hit the request that timed out. Suppose the upper limit was 128: you will not get anything in the subsequent 8 requests, which means you still got an average of 17 follower IDs/request. Store this number, and use it as the starting size the next time you perform requests for this user. (A sketch of this ramp-up plus binary search appears right after this list.)

  3. Randomized solution: If you get 2 successive failures [timeouts] for a user, give up and try another user. With high probability, you will be able to query the followers of O(log n) users if you try to query O(c log n) distinct random users. This is because the probability of failure shrinks rapidly as you get consecutive failures for distinct users.
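
Here is a small Python sketch of the doubling plus binary-search idea from point 2. fetch_followers and the 150-ID cutoff below are made-up stand-ins for the real API call and its timeout behaviour:

def find_max_page_size(fetch_followers, user, start=1):
    """Double the requested page size until a timeout, then binary search
    between the last size that succeeded and the first size that failed."""
    size, last_ok, collected = start, 0, []
    while True:                      # ramp-up phase: 1, 2, 4, 8, ...
        try:
            collected = fetch_followers(user, size)
            last_ok, size = size, size * 2
        except TimeoutError:
            break
    lo, hi = last_ok, size           # binary search between success and failure
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        try:
            collected = fetch_followers(user, mid)
            lo = mid
        except TimeoutError:
            hi = mid
    return lo, collected             # remember lo as the starting size next time

def fake_fetch(user, count):         # toy stand-in: pretend anything over 150 IDs times out
    if count > 150:
        raise TimeoutError
    return list(range(count))

print(find_max_page_size(fake_fetch, "some_user")[0])   # 150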

Do let me know if you spot something wrong or have other solutions.

Saturday, August 11, 2012

The words are not enough

In an earlier post I hinted at creating a book which teaches algorithms & data structures primarily using pictures. Here is a start. Caveat: Start from the bottom of the pin board to see the older posts first and work your way to the top.

The way I see it being used is this: it should provide enough intuition that people can look at the pictures and get the gist of an algorithm or data structure in under 3 minutes. You can always find more information (details) about the algorithm or data structure elsewhere on the internet. This book hopes to act as a place where one can:
  • Develop intuition about algorithms & data structures
  • View the guts of an algorithm or data structure graphically for easier recall
  • Quickly glance at the picture of an algorithm or data structure to recollect the underlying idea behind it (revision for interviews, exams, etc...)
I'll be extremely happy to receive feedback of any kind on this (what I think is a unique) way to learn algorithms and data structures so that I may progressively improve the quality of the content.

Thursday, August 02, 2012

SQL Query Execution Optimization opportunities missed: Part-1

There are some common queries that are used almost everywhere, but they do not seem to be executed very cleverly by most (all??) SQL execution engines.

An example of one such query is the quintessential pagination query. Assume you have a schema for comments that looks like this:


CREATE TABLE COMMENTS(post_id INT, 
  comment_id INT, 
  comment_text TEXT,
  poster_id INT,
  posted_at TIMESTAMP,
  PRIMARY KEY (comment_id),
  UNIQUE KEY (post_id, posted_at, comment_id)
);


Now, you display the first page (say the most recent 10 comments) for a post (with id=33) using a query such as:

SELECT * FROM COMMENTS 
  WHERE post_id=33 
  ORDER BY posted_at DESC
  LIMIT 0,10;


The LIMIT 0,10 syntax fetches at most 10 comments from offset 0 in the result set.

The most natural way to fetch the 2nd page would be to say:

SELECT * FROM COMMENTS 
  WHERE post_id=33 
  ORDER BY posted_at DESC
  LIMIT 10,10;


Similarly, you can get the 3rd page by saying:

SELECT * FROM COMMENTS 
  WHERE post_id=33 
  ORDER BY posted_at DESC
  LIMIT 20,10;


and so on...

You must have noticed that if LIMIT 1000,10 is specified, a naive execution of the query plan involves fetching all the 1000 preceding rows before the 10 interesting rows are returned by the database. In fact, most databases will do just that. There is, however, a better way to execute such queries.

(Let's leave aside the discussion of whether pagination queries should be implemented this way. The observant reader will notice that we have an index on (post_id, posted_at, comment_id), and that can be used to fetch the next 10 results given the post_id, posted_at, and comment_id of the immediately preceding comment).
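
As an aside, here is a minimal sketch of that index-based ("keyset") pagination, with comment_id added as a tie-breaker in the ordering. It uses sqlite3 purely for illustration; the table contents are made up, and the exact syntax for row-value comparisons varies across databases (and needs a reasonably recent SQLite):

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE COMMENTS(post_id INT, comment_id INT, comment_text TEXT,
              poster_id INT, posted_at TIMESTAMP,
              PRIMARY KEY (comment_id),
              UNIQUE (post_id, posted_at, comment_id))""")
db.executemany("INSERT INTO COMMENTS VALUES (33, ?, ?, 1, ?)",
               [(i, f"comment {i}", f"2012-08-01 00:00:{i:02d}") for i in range(60)])

# First page: the 10 most recent comments for post 33.
page = db.execute("""SELECT * FROM COMMENTS WHERE post_id=33
                     ORDER BY posted_at DESC, comment_id DESC LIMIT 10""").fetchall()

# Next page: seek past the last row we saw instead of using OFFSET.
last = page[-1]   # columns: post_id, comment_id, comment_text, poster_id, posted_at
page2 = db.execute("""SELECT * FROM COMMENTS
                      WHERE post_id=33 AND (posted_at, comment_id) < (?, ?)
                      ORDER BY posted_at DESC, comment_id DESC LIMIT 10""",
                   (last[4], last[1])).fetchall()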

We know that most (if not all) databases use B-Trees to store indexed information. It is easy to augment the B-Tree to hold information such as "how many elements are under this node (including the node itself)?" This information alone will let us very quickly (in O(log n) I/O look-ups) locate the node of interest. Suppose we want the entry from the COMMENTS table at offset 1000 with post_id=33: we first perform a look-up for the first entry with post_id=33. Once we find this key, we can quickly (in O(1) time) determine how many entries are less than the located entry (since we already have this information cached at each node). Let the found node be at offset OFF1. Subsequently, we can query the B-Tree to find the entry that has an offset of OFF1 + 1000 (since we have the cumulative counts cached for every value in the B-Tree node!).

For example, consider the Binary Tree shown below:



The letters in the top half of every node are the keys, and the number in the bottom half is the count of nodes in the sub-tree rooted at that node.

We can answer the following query on this tree in O(log n) time:

SELECT KEY FROM BINARY_TREE
  WHERE KEY > D
  ORDER BY KEY
  LIMIT 3,2;


i.e. We want to fetch the 5 smallest keys greater than 'D', but we want only the last 2 from that set; in other words, the 2 greatest keys among the 5 smallest keys greater than 'D' (read that a few times to get the hang of it).

We proceed as follows:
  1. Find D
  2. Find Rank(D) = 4
  3. Find the KEY K such that Rank(K) = Rank(D)+3 = 7. This happens to be the KEY 'G'
  4. Perform an in-order traversal to find the in-order successor of the node that we are on till we either run out of nodes or we have walked 2 nodes (whichever comes first)
Note: To compute the rank of a node within its own sub-tree we use the following formula:

Rank(Node) = SubtreeSize(Node->Left) + 1 [if Node->Left exists]
Rank(Node) = 1                           [otherwise]

While descending from the root, these local ranks are accumulated along the path (adding a node's local rank every time we descend into its right sub-tree), which gives the rank of the target within the whole tree.


Hence, we can see that we are able to satisfy this query in time O(log n + Result_Set_Size), where n is the number of elements in the Binary Tree.
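
To make this concrete, here is a minimal Python sketch of such a count-augmented tree (an order-statistic BST). The node layout and names are my own and not taken from any particular database engine; a real engine would keep the same counts inside its B-Tree pages:

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right
        self.size = 1 + _size(left) + _size(right)   # nodes in the sub-tree rooted here

def _size(node):
    return node.size if node else 0

def build(sorted_keys):
    """Build a balanced, count-augmented BST from sorted keys."""
    if not sorted_keys:
        return None
    mid = len(sorted_keys) // 2
    return Node(sorted_keys[mid], build(sorted_keys[:mid]), build(sorted_keys[mid + 1:]))

def rank(node, key):
    """Number of keys <= key, i.e. the 1-based position of key in sorted order."""
    if node is None:
        return 0
    if key < node.key:
        return rank(node.left, key)
    return _size(node.left) + 1 + rank(node.right, key)

def select(node, k):
    """The k-th smallest key (1-based), found in O(log n) using the cached counts."""
    r = _size(node.left) + 1
    if k == r:
        return node.key
    return select(node.left, k) if k < r else select(node.right, k - r)

root = build(list("ABCDEFGHIJ"))
r = rank(root, "D")                                # 4
print([select(root, r + i) for i in (4, 5)])       # ['H', 'I']  (LIMIT 3,2 after 'D')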

Monday, July 09, 2012

Why re-writing software is sometimes better than fixing the old stuff

This blog post is a collection of unstructured, un-thought-out, and mostly naive and fleeting thoughts about the software "industry" at large.

We always tend to build on what others have done, and hence stand on the proverbial shoulders of giants. This ensures that some steady progress is always made in the forward direction. Imagine every generation trying to start with the invention of the wheel. The car would probably never have been.

Simulated annealing, on the other hand, suggests that while finding a solution to a problem, one might get stuck at a local maximum, which in practice translates to architectural bottlenecks in a piece of software that prevent further progress. These are the times when one needs to take a step back, look at the bigger picture, and probably try to re-architect the solution to the problem, or maybe even revisit the problem statement and see if it is still valid in the current scenario.

(just realized my sentences are getting too long)

For example, if you try to scale a multi-threaded web-server, you eventually run into a bottleneck with the number of threads/processes or the amount of memory you have (due to the per-thread/process memory overhead). However, if you take a step back and question the multi-threaded-ness of the web-server, you can think of select(2) or epoll(7) based solutions that use asynchronous I/O and I/O multiplexing to handle thousands of simultaneous connections in a single thread.
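
To make the idea concrete, here is a minimal single-threaded sketch using Python's selectors module (which sits on top of epoll/select). The port number and buffer size are arbitrary:

import selectors
import socket

sel = selectors.DefaultSelector()          # epoll on Linux, a portable fallback elsewhere

def accept(server):
    conn, _ = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)   # one connection = one registration, no thread

def echo(conn):
    data = conn.recv(4096)
    if data:
        conn.send(data)                    # echo back (assumes the send doesn't block)
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("localhost", 8080))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:                                # a single thread multiplexes all connections
    for key, _ in sel.select():
        key.data(key.fileobj)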

It takes a lot of experience, good judgement, guts, and a sound-technical skull to know when to build on existing stuff and when to say "NO" and re-write the damn thing.

An algorithms book which mostly works using pictures

My algorithms professor, Dr. Michael Bender, mentioned that explaining algorithms & data structures using pictures/diagrams/visuals, with text complementing them, is much more effective than teaching them using textual matter with diagrams complementing the text.

Hopefully sooner rather than later, I plan to create a pin board on Pinterest with such pictures that will make your wildest data structures dreams come true! (stay tuned for the link) I've now put it up here: Algorithms & data structures in pictures.

Dr. Bender actually managed to teach a whole class worth of students Van Emde Boas Trees in about an hour using this diagrammatic approach. In fact, a lot of the tricks were suggested by the students themselves because of the way the structure was pictorially shown on the board.

Monday, July 02, 2012

[What If] An ad-supported MySql

Let's suppose that mysql was ad-supported. What would typical interactions with it look like?

mysql> CREATE TABLE table01(id INT, 
    name VARCHAR(400), 
    created TIMESTAMP default CURRENT_TIMESTAMP);
Query OK, 0 rows affected, 1 warning (0.06 sec)

mysql> show warnings;
+-------+------------+---------------------------------------+
| Level | Advertizer | Message                               |
+-------+------------+---------------------------------------+
| Ad    | Foo Bar    | Visit our store at http://foobar.com/ |
+-------+------------+---------------------------------------+
1 row in set (0.00 sec)

mysql> INSERT INTO table01 (id, name) VALUES(100, 'Billy Joel');
Query OK, 1 row affected (0.02 sec)

...
...
...

mysql> SELECT * FROM table01 LIMIT 3;
+-----+------------------------------------------------+---------------------+
| id  | name                                           | created             |
+-----+------------------------------------------------+---------------------+
| 0   | Sponsored Row - Visit us at http://bazbaz.com/ | 2006-04-06 17:17:49 |
| 100 | Billy Joel                                     | 2012-01-02 15:27:33 |
| 101 | Pink Floyd                                     | 2012-01-02 15:27:40 |
+-----+------------------------------------------------+---------------------+
3 rows in set (0.02 sec)

mysql> SELECT * FROM table01 LIMIT 30;
+-----+------------------------------------------------------------+---------------------+
| id  | name                                                       | created             |
+-----+------------------------------------------------------------+---------------------+
| 0   | Sponsored Row - Visit us at http://bazbaz.com/             | 2006-04-06 17:17:49 |
| 100 | Billy Joel                                                 | 2012-01-02 15:27:33 |
| 101 | Pink Floyd                                                 | 2012-01-02 15:27:40 |
| 102 | Dire Straits                                               | 2012-01-02 15:27:45 |
| 0   | To see more than 4 rows at a time, visit http://mysql.com/ | 2012-01-02 20:00:07 |
+-----+------------------------------------------------------------+---------------------+
5 rows in set (0.00 sec)


Well, thank FSF for Free & Open Source Software!!

Monday, May 07, 2012

Maharashtrian Curd Rice (Dahi Bhat)

Summer is here, and so is a tasty and easy-to-make dish: Maharashtrian Curd Rice. I remember having this during my stay in Pune and I absolutely loved it, so I decided to make it at home, and it turned out pretty good. Here is the recipe in text and video, with a link to the YouTube video for those of you who want to try it too!

Enjoy!

Wednesday, February 15, 2012

Coffee (flavoured) Yogurt

It's really simple to make, delicious to eat, and a healthy food!

Procedure:
  • Take about 1/2 tsp. instant coffee powder (Nescafe, etc...) and make a fine powder with your fingers. Put the powder in a bowl.
  • Take about 2 1/2 tsp. sugar and add to the same bowl.
  • Take about 1 cup (240 ml) of cold yogurt and mix thoroughly with the coffee powder and sugar in the bowl using a spoon.
  • Once it has formed a light brown homogeneous paste, eat to your heart's content!
  • Enjoy!

Saturday, February 04, 2012

Broccoli Delights!!

Broccoli goes really well with Ginger, Garlic, Butter, Olive Oil, Parsley, Basil and Oregano. Any dish that includes these ingredients must necessarily be heavenly!

Here are some recipes that sound delicious!

Tuesday, January 24, 2012

SQL Query Optimization - Part-9 (Covering Indexes)

To speed up queries, use a Covering Index that covers (indexes) not just the columns used in the WHERE clause, but also the columns used in the projection list of the SELECT query.

See this video for a more detailed explanation.
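
For example, here is a toy sketch (using sqlite3 purely for illustration; the table, column, and index names are made up). Because the query below only touches columns stored in the index, the engine can answer it without ever reading the table itself:

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users(id INTEGER PRIMARY KEY, city TEXT, name TEXT, email TEXT)")

# Covering index: city is the WHERE-clause column, name is the projected column.
db.execute("CREATE INDEX idx_city_name ON users(city, name)")

# The plan should mention something like "USING COVERING INDEX idx_city_name"
# (the exact wording varies by SQLite version).
plan = db.execute("EXPLAIN QUERY PLAN SELECT name FROM users WHERE city = 'Pune'").fetchall()
print(plan)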

Sunday, January 08, 2012

Solve this recurrence

T(n) = T(n - √n) + T(√n) + n

Post your answers/solutions in the comments. We were asked this in the final exam in CSE-548 (Analysis Of Algorithms taught by Dr. Michael Bender).

Ayurvedic Way of Life

Follow the principles mentioned here and you should be good to go!!

Also, here is another very informative article about the ayurvedic view on food.

Broccoli should be had fresh!!

Broccoli should be had fresh and not frozen.
  1. Frozen Broccoli is not as firm as Fresh Broccoli
  2. Frozen Broccoli doesn't taste as good as Fresh Broccoli
  3. Frozen Broccoli doesn't contain the broccoli stalk, which is not only more nutritious but also more fibrous than the broccoli florets
  4. According to this page, "frozen broccoli has twice as much sodium as fresh (up to 68 mg per 10 oz. package), about half the calcium, and smaller amounts of iron, thiamin, riboflavin, and vitamin C."
Peas, on the other hand, should be had frozen since (from here), "Peas are high in sugar content, so they have to be blanched before freezing to stop enzyme activity from turning sugars into starch and ruining their fresh sweet flavor."

Focaccia reprise

Made Focaccia Bread again after about 2 years of absence from baking. Contains garlic, thyme, oregano, dried basil and olive oil. This is how it turned out:
Ingredients, motivation and housing courtesy Poorvi Matani, and oven & electricity courtesy Panna Parikh. A big thanks to my teachers at IHM and my mom who encouraged my culinary streak ;)

This version lacked rosemary since I didn't have ready access to any. Next batch should have that plus olives, sun-dried tomatoes and maybe some other veggies.

For the complete recipe:

  1. Prepare a mixture of all the herbs (seasonings, namely thyme, oregano, basil) and virgin olive oil. You can use fresh basil instead of dried basil
  2. Saute garlic (important step) in butter and add to the olive oil after it's somewhat cooled
  3. Let this sit in a closed container for anywhere between 4 hours to 1 week
  4. Make the bread dough as explained in this post (or any other bread dough recipe that you are comfortable with)
  5. After the first rise (step-7 in the post), knead in the olive oil and seasoning mixture into the dough, and let it rise a 2nd time in a flat pan (as shown in the picture above)
  6. After it has almost doubled in size, plonk it carefully into a pre-heated oven (410F) for ~15-20 minutes or till it's done
  7. Enjoy!


The last post about something similar was this recipe for Garlic & Herb Chapati.