Start counting on the fingers of one hand as follows: 1 thumb, 2 index finger, 3 middle finger, 4 ring finger, 5 little finger, 6 ring finger, 7 middle finger, 8 index finger, 9 thumb, and so on.
Which finger would you end up on if you were counting to 123456789? That's over 123 million, in case you think it's a trick question. Assume that you aren't carted off to the funny farm while doing it.
Answer:
A complete cycle that starts at the thumb on 1 is back at the thumb on 9, so the count advances by 8 each time you return to the thumb. That means you only need the remainder after dividing by 8 (treating a remainder of 0 as position 8, the index finger). For 123456789 the remainder after dividing by 8 is 5, and position 5 in the cycle is the little finger.
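As a quick sanity check, here is a small C++ sketch (the function name is my own) that applies the same modulo-8 idea:

#include <iostream>

// Positions 1..8 of one up-and-down pass over the fingers:
// 1 = thumb ... 5 = little finger, then 6 = ring, 7 = middle, 8 = index.
const char* FingerForCount(long long count)
{
    static const char* cycle[8] =
        { "thumb", "index", "middle", "ring", "little", "ring", "middle", "index" };
    int pos = static_cast<int>(count % 8);  // remainder after dividing by 8
    if (pos == 0) pos = 8;                  // a remainder of 0 means position 8
    return cycle[pos - 1];
}

int main()
{
    std::cout << FingerForCount(123456789) << std::endl;  // prints "little"
    return 0;
}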
This blog is for people who are interested in learning exceptional things in C++. Learners can find new things here and experts can share their knowledge.
Saturday, June 4, 2011
Multiply a number by 7 without using the * and + operators
int MultiplyBySeven(int Num)
{
    int NewNum = Num << 3;   // multiplied by 2^3 = 8
    return NewNum - Num;     // 8*Num - Num = 7*Num
}
Convert a string to upper case
void ToUpper(char *S)
{
    while (*S != 0)
    {
        *S = (*S >= 'a' && *S <= 'z') ? (*S - 'a' + 'A') : *S;
        S++;
    }
}
Reverse a string
#include <string.h>   // for strlen

void ReverseString(char *String)
{
    char *Begin = String;
    char *End = String + strlen(String) - 1;
    char TempChar = '\0';
    while (Begin < End)
    {
        // Swap the characters at the two ends and move both pointers inward.
        TempChar = *Begin;
        *Begin = *End;
        *End = TempChar;
        Begin++;
        End--;
    }
}
Puzzle 3: The 8 ball problem
The problem is quite simple. You are given 8 identical-looking balls, one of which is heavier than the other 7 (all of which weigh the same). Using an old-fashioned mechanical balance scale you must identify the heavier ball in as few weighings as possible. The scale consists of two bowls on an arm, so the bowls either balance or one bowl rises while the other (heavier) bowl falls. You can't just add one ball at a time and call it a single weighing; however, you may put any number of balls in each bowl. All you need to solve the puzzle is a bit of common sense.
Solution:
You can identify the heavier ball in only 2 weighings!!!!
To achieve this in only 2 weighings, you first put 3 balls in each bowl on the scale, e.g. {1,2,3} against {4,5,6}. Should the scale balance, you have only 2 balls remaining, which you can compare by putting each in a separate bowl, e.g. {7} against {8}. Should the scale not balance, take the 3 balls from the heavier bowl (e.g. 1,2,3), pick any 2 of them, and compare these against each other, e.g. {1} against {2}. If the scale balances, you know ball 3 is the heavier one. If the scale tips, the heavier ball is the one on the lower side.
Puzzle 2 : 100 coins and 7 Bags
A dealer has 100 coins and only 7 money bags. He has to divide the coins over the seven bags so that he can make up any number of coins from 1 to 100 simply by handing over a few whole bags. How must he divide the money over the seven money bags?
Solution:
Bag 1 = 01 ( 2^0 )
Bag 2 = 02 ( 2^1 )
Bag 3 = 04 ( 2^2 )
Bag 4 = 08 ( 2^3 )
Bag 5 = 16 ( 2^4 )
Bag 6 = 32 ( 2^5 )
Bag 7 = 37 ( 100 - ( 1 + 2 + 4 + 8 + 16 + 32 ) )
Now suppose someone asks for 1 coin: he can give Bag 1; for 2, Bag 2; for 3, Bag 1 + Bag 2; ... for 53 coins he can give Bag 6 (32) + Bag 5 (16) + Bag 3 (4) + Bag 1 (1), and so on.
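Here is a small C++ sketch (my own illustration, not part of the original puzzle) that picks the bags for a requested amount. Because the first six bags hold the powers of two up to 32 and the last bag tops the total up to 100, a greedy largest-first choice always works for amounts from 1 to 100:

#include <iostream>

// Prints which of the seven bags (1, 2, 4, 8, 16, 32, 37 coins) to hand over.
void PrintBags(int amount)
{
    const int bags[7] = { 1, 2, 4, 8, 16, 32, 37 };
    std::cout << amount << " = ";
    for (int i = 6; i >= 0; --i)        // try the largest remaining bag first
    {
        if (bags[i] <= amount)
        {
            std::cout << "Bag " << (i + 1) << " (" << bags[i] << ")  ";
            amount -= bags[i];
        }
    }
    std::cout << std::endl;             // amount is now 0 for any request from 1 to 100
}

int main()
{
    PrintBags(53);   // e.g. Bag 7 (37) + Bag 5 (16)
    return 0;
}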
Wednesday, April 20, 2011
Puzzle 1
Solve this by using any arithmetic operation:
1 1 1 = 6
2 2 2 = 6
3 3 3 = 6
4 4 4 = 6
5 5 5 = 6
6 6 6 = 6
7 7 7 = 6
8 8 8 = 6
9 9 9 = 6
Solution:
( 1 + 1 + 1 )! = 6
2 + 2 + 2 = 6
3 * 3 - 3 = 6
√4 + √4 + √4 = 6
5 + ( 5 / 5 ) = 6
6 + 6 - 6 = 6
7 - ( 7 / 7 ) = 6
∛8 + ∛8 + ∛8 = 6
( 9 + 9 ) / √9 = 6
Linked List Puzzle
How does one find a loop in a singly linked list in O(n) time using constant memory? You cannot modify the list in any way (and constant memory means the amount of memory required for the solution cannot be a function of n.).
Solution
I first figured this out when I was asked the question in an interview, so there's verification that one question in the book was from a real interview. The best part of this problem is that I actually needed to write the code out for it a little while ago to track down a bug.
One way to detect a loop is to walk the list with 2 pointers at the same time, where one advances at double speed. If the 2 pointers ever become equal after the first step and before either reaches the end of the list, there's a loop.
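A minimal sketch of that two-pointer check (the Node type and names are my own assumptions about the list layout):

struct Node
{
    int Data;
    Node *Next;
};

bool HasLoop(Node *Head)
{
    Node *Slow = Head;
    Node *Fast = Head;
    while (Fast != 0 && Fast->Next != 0)
    {
        Slow = Slow->Next;           // moves one node per step
        Fast = Fast->Next->Next;     // moves two nodes per step
        if (Slow == Fast)            // the pointers can only meet inside a loop
            return true;
    }
    return false;                    // Fast fell off the end, so the list has no loop
}

This runs in O(n) time and uses only two pointers, so the memory cost is constant.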
Now for another puzzle... I think the next question I had to answer was something along the lines of "OK, how do you remove a loop in a linked list with the same constraints?" The latter question definitely seems harder, but given the sequence of questions I'd already answered for the interviewer, the solution was pretty obvious. I'll leave that solution to someone else today.
Saturday, April 9, 2011
Algorithm Speed in Practice
It's unlikely that you'll spend much time during your career writing sort routines. The ones in the libraries available to you will probably outperform anything you may write without substantial effort. However, the basic kinds of algorithms we've described earlier pop up time and time again. Whenever you find yourself writing a simple loop, you know that you have an O(n) algorithm. If that loop contains an inner loop, then you're looking at O(m × n). You should be asking yourself how large these values can get. If the numbers are bounded, then you'll know how long the code will take to run. If the numbers depend on external factors (such as the number of records in an overnight batch run, or the number of names in a list of people), then you might want to stop and consider the effect that large values may have on your running time or memory consumption.
The O() Notation
The O() notation is a mathematical way of dealing with approximations. When we write that a particular sort routine sorts n records in O(n²) time, we are simply saying that the worst-case time taken will vary as the square of n. Double the number of records, and the time will increase roughly fourfold. Think of the O as meaning on the order of. The O() notation puts an upper bound on the value of the thing we're measuring (time, memory, and so on). If we say a function takes O(n²) time, then we know that the upper bound of the time it takes will not grow faster than n². Sometimes we come up with fairly complex O() functions, but because the highest-order term will dominate the value as n increases, the convention is to remove all low-order terms and not to bother showing any constant multiplying factors. O(n²/2 + 3n) is the same as O(n²/2), which is equivalent to O(n²). This is actually a weakness of the O() notation: one O(n²) algorithm may be 1,000 times faster than another O(n²) algorithm, but you won't know it from the notation.
What Do We Mean by Estimating Algorithms?
Most nontrivial algorithms handle some kind of variable input—sorting n strings, inverting an m × n matrix, or decrypting a message with an n-bit key. Normally, the size of this input will affect the algorithm: the larger the input, the longer the running time or the more memory used.
If the relationship were always linear (so that the time increased in direct proportion to the value of n), this section wouldn't be important. However, most significant algorithms are not linear. The good news is that many are sublinear. A binary search, for example, doesn't need to look at every candidate when finding a match. The bad news is that other algorithms are considerably worse than linear; runtimes or memory requirements increase far faster than n. An algorithm that takes a minute to process ten items may take a lifetime to process 100.
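As a concrete illustration of a sublinear algorithm, here is a small binary search sketch (my own generic example, not code from the text above); each step halves the remaining range, so the running time grows as O(log n):

#include <vector>

// Returns the index of Value in the sorted vector, or -1 if it is not present.
int BinarySearch(const std::vector<int> &Sorted, int Value)
{
    int Low = 0;
    int High = static_cast<int>(Sorted.size()) - 1;
    while (Low <= High)
    {
        int Mid = Low + (High - Low) / 2;   // midpoint, written to avoid overflow
        if (Sorted[Mid] == Value)
            return Mid;
        else if (Sorted[Mid] < Value)
            Low = Mid + 1;                  // discard the lower half
        else
            High = Mid - 1;                 // discard the upper half
    }
    return -1;
}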
We find that whenever we write anything containing loops or recursive calls, we subconsciously check the runtime and memory requirements. This is rarely a formal process, but rather a quick confirmation that what we're doing is sensible in the circumstances. However, we sometimes do find ourselves performing a more detailed analysis. That's when the O() notation comes in useful.
Why Templates?
C++ requires us to declare variables, functions, and most other kinds of entities using specific types. However, a lot of code looks the same for different types. Especially if you implement algorithms, such as quicksort, or if you implement the behavior of data structures, such as a linked list or a binary tree for different types, the code looks the same despite the type used.
If your programming lan
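A minimal function template sketch (my own example, with a name of my choosing) of the idea that one definition can serve many types:

#include <iostream>

// One definition of Max works for any type that supports operator<.
template <typename T>
T Max(T a, T b)
{
    return (a < b) ? b : a;
}

int main()
{
    std::cout << Max(3, 7) << std::endl;       // instantiated as Max<int>
    std::cout << Max(2.5, 1.5) << std::endl;   // instantiated as Max<double>
    return 0;
}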
What's the Best Buffer Growth Strategy?
When an object (consider a String object in this discussion) runs out of room in its currently allocated buffer, it needs to allocate a larger one. The key question is: how big should the new buffer be? Note that the question is important because the analysis we're about to do holds for other structures and containers that use allocated buffers, such as a standard vector.
There are several popular buffer growth strategies. I'll note each strategy's complexity in terms of the number of allocations required, and the average number of times a given character must be copied, for a string of eventual length N.
- Exact growth. In this strategy, the new buffer is made exactly as large as required by the current operation. For example, appending one character and then appending another will force two allocations. First, a new buffer is allocated, with space for the existing string and the first new character. Next,
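A simplified sketch of the exact-growth strategy described above (the type and function names are my own, and the buffer is assumed to already hold a valid string): every single-character append reallocates to exactly the new size, so building a string of length N character by character costs N allocations, and each character gets copied on the order of N/2 times.

#include <cstring>

struct SimpleString
{
    char        *Buffer;   // assumed to point at Length + 1 allocated bytes
    std::size_t  Length;
};

void AppendCharExact(SimpleString &S, char C)
{
    char *NewBuffer = new char[S.Length + 2];    // exactly: old data + new char + '\0'
    std::memcpy(NewBuffer, S.Buffer, S.Length);  // every existing character is copied again
    NewBuffer[S.Length] = C;
    NewBuffer[S.Length + 1] = '\0';
    delete [] S.Buffer;                          // discard the old, smaller buffer
    S.Buffer = NewBuffer;
    S.Length += 1;
}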
What About Accessors?
There are people who will argue that one-line accessor functions (like "X& Y::f(){ return myX_; }") are a reasonable exception and could or should be automatically inlined. I understand the reasoning, but be careful. At the very least, all inlined code increases coupling. So, unless you're certain in advance that inlining will help, there's no harm in deferring that decision to profiling time. If at that point, a profiler does point out that inlining will help, at least you know that what you're doing is worth doing, and you've deferred the coupling and possible build overhead until you know it will really matter. Not a bad deal, that.
Avoid inlining or detailed tuning until performance profiles prove the need.
When and how should you decide to 'inline' a function?
The answer is the same as when to apply any other optimization: after a profiler tells you to, and not a minute sooner. There are a few valid exceptions to this guideline for cases in which you would reasonably inline right away, such as for an empty function that's likely to stay empty, or when you're absolutely forced to—for example, when writing a non-exported template.
Bottom line, inlining always costs
Does making a function inline increase efficiency?
Not necessarily. First off, if you tried to answer this question without first asking what you want to optimize, you fell into a classic trap. The first question has to be: "What do you mean by efficiency?" Does the above question mean program size? Memory footprint? Execution time? Development speed? Build time? Or something else? Second, contrary to popular opinion, inlining can improve or worsen any of those aspects of efficiency:
The Issue of Complexity
An oft-repeated criticism of C++ is that it is very complex. This is true. To get a balanced view, however, it's important to understand four things:
1. Complexity is bad when it buys you nothing. One fundamental principle of C++ is that you don't pay for what you don't use.
Advice From the C++ Experts: Be Const-Correct
Be const-correct: const and mutable are your friends. I still sometimes come across programmers who think const isn't worth the trouble. "Aw, const is a pain to write everywhere," I've heard some complain. "If I use it in one place, I have to use it all the time. And anyway, other people skip it, and their programs work fine. Some of the libraries that I use aren't const-correct either. Is const worth it?" We could imagine a
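A small const-correctness sketch (my own example): the object's logical state never changes inside a const member function, while mutable lets a cached value be updated behind the scenes:

class Polygon
{
public:
    Polygon() : CachedArea(0.0), AreaValid(false) {}

    double Area() const                  // callers holding a const Polygon can still ask for the area
    {
        if (!AreaValid)
        {
            CachedArea = ComputeArea();  // allowed because these members are mutable
            AreaValid  = true;
        }
        return CachedArea;
    }

private:
    double ComputeArea() const { return 42.0; }  // placeholder computation
    mutable double CachedArea;
    mutable bool   AreaValid;
};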