This blog post is a collection of unstructured, half-formed, and mostly naive and fleeting thoughts about the software "industry" at large.
We tend to build on what others have done, and hence stand on the proverbial shoulders of giants. This ensures that some steady progress is always made in the forward direction. Imagine every generation starting over with the invention of the wheel; the car would probably never have come to be.
Simulated annealing, on the other hand, suggests that while searching for a solution to a problem, one can get stuck at a local maximum. In software, this translates to architectural bottlenecks that prevent a system from making further progress in a positive direction. These are the times when one needs to take a step back, look at the bigger picture, and probably re-architect the solution, or maybe even revisit the problem statement and check whether it still holds in the current scenario.
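To make the metaphor concrete, here is a minimal simulated-annealing sketch in Python. The objective function, step size, and cooling schedule are made-up values purely for illustration; the point is that occasionally accepting a worse move is exactly what lets the search climb out of a local maximum.

    # Illustrative only: objective, step size, and cooling rate are arbitrary.
    import math
    import random

    def objective(x):
        # A made-up function with several local maxima.
        return math.sin(5 * x) + 0.5 * math.sin(x)

    def anneal(start=0.0, temp=2.0, cooling=0.995, steps=10_000):
        current = start
        best = current
        for _ in range(steps):
            candidate = current + random.uniform(-0.5, 0.5)
            delta = objective(candidate) - objective(current)
            # Always accept better moves; accept worse moves with a
            # probability that shrinks as the temperature drops.
            # This is what allows escaping a local maximum.
            if delta > 0 or random.random() < math.exp(delta / temp):
                current = candidate
            if objective(current) > objective(best):
                best = current
            temp *= cooling
        return best, objective(best)

    print(anneal())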
(just realized my sentences are getting too long)
For example, if you try to scale a multi-threaded web server, you eventually run into a bottleneck with the number of threads/processes or the amount of memory you have (due to the per-thread/per-process memory overhead). However, if you take a step back and question the multi-threaded-ness of the web server itself, you can think of select(2)- or epoll(2)-based solutions that use asynchronous I/O and I/O multiplexing to handle thousands of simultaneous connections in a single thread.
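As a rough sketch of that idea (not any particular server's code), here is a single-threaded echo server built on Python's selectors module, which uses epoll(2) on Linux under the hood; the port and buffer size are arbitrary choices for the example.

    # Single-threaded I/O multiplexing sketch using Python's selectors module.
    import selectors
    import socket

    sel = selectors.DefaultSelector()  # epoll(2) on Linux

    def accept(server_sock):
        conn, _addr = server_sock.accept()
        conn.setblocking(False)
        sel.register(conn, selectors.EVENT_READ, handle)

    def handle(conn):
        data = conn.recv(4096)
        if data:
            conn.sendall(data)      # echo back
        else:
            sel.unregister(conn)
            conn.close()

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", 8080))
    server.listen(128)
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)

    # One thread, one loop: the selector reports which registered sockets
    # are ready, and we dispatch to the callback stored at registration.
    while True:
        for key, _mask in sel.select():
            key.data(key.fileobj)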
It takes a lot of experience, good judgement, guts, and a sound technical skull to know when to build on existing stuff and when to say "NO" and re-write the damn thing.