The idea of divide-and-conquer
In computer science, divide-and-conquer (D&C) is an important algorithm design paradigm for solving complex problems. The idea of D&C is that, while it is often difficult to solve a complex problem directly, it may be easy to solve smaller problems of a related type reliably and efficiently. The complex problem is therefore broken down into several smaller problems of the same type, recursively, until the subproblems become simple enough to be solved directly. To ensure the correctness of D&C, at each recursive step the solutions of the subproblems must combine into the correct solution of the original problem.
In the following, we explain each step in more detail.
 Divide:
"Divide" is the step that breaks a problem down into smaller, individual problems. It is the most important part of recursion and D&C, because "divide" is the starting point of the recursive solution. An incorrect or careless "divide" step can produce a wrong result, or no result at all, because of non-terminating function calls. When deciding how to divide a problem, it is necessary to make sure that each smaller problem is of the same (or at least a related) type as the original one.
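As an illustration (not from the original text), suppose the problem is "sort a list", as in merge sort. A minimal sketch of the "divide" step splits the list into two halves, each a smaller instance of the same problem:

```python
def divide(items):
    """Sketch of a "divide" step, assuming the problem is "sort a list".

    The two halves are smaller subproblems of the same type as the
    original, which is exactly what a correct "divide" step requires.
    """
    mid = len(items) // 2
    return items[:mid], items[mid:]
```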
 Subproblem:
Each divide step produces several smaller problems, each of which is called a subproblem of the original problem. Recall that every subproblem must be of the same or a related type as the original problem; that means their problem structures must be similar.
 Conquer:
After some divide steps, a subproblem becomes simple enough that it can be solved directly and efficiently. "Conquer" is the step that returns the solution of such a simple subproblem, or otherwise first solves each of its subproblems recursively. The case where the result can be returned directly is called the "base case"; the case where the problem must make use of subproblems of itself is called the "recursive step".
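The distinction between the base case and the recursive step can be sketched with a hypothetical factorial function (an illustrative example, not from the original text):

```python
def factorial(n):
    # Base case: the problem is simple enough to return directly.
    if n <= 1:
        return 1
    # Recursive step: the problem makes use of a subproblem of itself.
    return n * factorial(n - 1)
```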
 Merge:
"Merge" is the step that obtains the solution of a problem by combining the results of at least two of its subproblems (the subproblems may be identical). For most recursive problems, the "merge" step, if it exists, is the most difficult part of D&C to work out, because in most cases it requires calculation and analysis; to ensure the correctness of a D&C algorithm, the programmer must have a clear idea of this step.
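The four steps above can be put together in one sketch. Assuming merge sort as the example problem (an illustration, not taken from the original text), the "merge" step combines two sorted subresults, and the recursive function shows divide, conquer, and merge in order:

```python
def merge(left, right):
    """Merge step: combine the sorted results of two subproblems."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])   # append whatever remains of either half
    result.extend(right[j:])
    return result

def merge_sort(items):
    if len(items) <= 1:               # conquer: base case
        return items
    mid = len(items) // 2             # divide
    left = merge_sort(items[:mid])    # recursive step
    right = merge_sort(items[mid:])
    return merge(left, right)         # merge
```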
Methods to implement divide-and-conquer
 Recursive Method:
If a problem is too complex, or the problem size is too large, the problem is split (divided) again and again until the details can be handled directly. In this case D&C is also called the recursive method. The recursive method is the opposite of the iterative method, and sometimes an iterative method (e.g. a while-loop or for-loop) is good enough to solve the problem. The difference is that an iterative method accumulates a value until completion, whereas the recursive method repeatedly divides the problem into subproblems of the same type until a base case is reached.
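The contrast can be sketched with a simple sum-of-1-to-n example (a hypothetical illustration, not from the original text): the iterative version accumulates a value, while the recursive version reduces to a subproblem of the same type until the base case:

```python
def sum_iterative(n):
    """Iterative method: accumulate the value 1 + 2 + ... + n."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_recursive(n):
    """Recursive method: divide into a smaller subproblem of the
    same type until the base case n == 0 is reached."""
    if n == 0:
        return 0
    return n + sum_recursive(n - 1)
```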
 Recurrence:
In mathematics, we encounter the idea of D&C in recursive relationships.
"Recurrence" is the phenomenon that the relationship between a problem and its subproblems is consistent, and that problems with related structure occur repeatedly in the recursive D&C process. From the mathematical point of view, the relationship between a problem and its subproblems is called a "recurrence relation", and the mathematical formula that describes it is called a "recurrence formula". Using the recurrence formula, the solution is clarified level by level: the results of the sub-subproblems are used to obtain the result of each subproblem, and the results of the subproblems are used to obtain the result of the original problem. In other words, once the recurrence formula of a problem is found, the problem can be solved; finding the recurrence is therefore itself a way of problem solving. However, for most complex problems it is very difficult to discover the recurrences, so in general the use of D&C depends heavily on the experience and inspiration of the programmer.
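A classic example (an illustration, not from the original text) is the Fibonacci sequence, whose recurrence formula F(n) = F(n-1) + F(n-2), with F(0) = 0 and F(1) = 1, translates directly into a recursive function:

```python
def fibonacci(n):
    # Recurrence formula: F(n) = F(n-1) + F(n-2), F(0) = 0, F(1) = 1.
    if n < 2:                     # base cases
        return n
    # The result of the problem is built from the results
    # of its two subproblems, exactly as the formula states.
    return fibonacci(n - 1) + fibonacci(n - 2)
```

Note that this direct translation recomputes the same subproblems many times; it is meant only to show how a recurrence formula becomes code.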
 Mathematical Induction:
The correctness of a D&C algorithm is usually proved by mathematical induction, based on the recurrence formula.
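As a sketch (not from the original text), consider the recursive sum S(n) = S(n-1) + n with S(0) = 0; induction on the recurrence formula proves it computes n(n+1)/2:

```latex
\textbf{Claim:} the recursion $S(n) = S(n-1) + n$, $S(0) = 0$,
satisfies $S(n) = \tfrac{n(n+1)}{2}$ for all $n \ge 0$.

\textbf{Base case:} $S(0) = 0 = \tfrac{0 \cdot 1}{2}$.

\textbf{Inductive step:} assume $S(k) = \tfrac{k(k+1)}{2}$.
Then, by the recurrence formula,
\[
  S(k+1) = S(k) + (k+1) = \frac{k(k+1)}{2} + (k+1) = \frac{(k+1)(k+2)}{2}.
\]
Hence the claim holds for all $n \ge 0$ by mathematical induction.
```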

Recursion:
In computer science, we implement D&C by recursion.
Recursion refers to a process in which functions repeatedly call themselves until the arguments meet one of the given base cases. The premise of using recursion is that the problem can be divided into subproblems and that the recurrences have been found. When recursion is used to solve a problem, the code is usually shorter than with an iterative method, but the recursive method needs more stack memory because of the function calls.
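A hypothetical example (not from the original text) is recursive binary search on a sorted list: each call divides the search range in half, each recursive call adds a frame to the call stack, and two base cases stop the recursion:

```python
def binary_search(items, target, lo=0, hi=None):
    """Recursive binary search on a sorted list.

    Each recursive call halves the search range [lo, hi) and
    occupies one additional stack frame until a base case is met.
    """
    if hi is None:
        hi = len(items)
    if lo >= hi:                  # base case: empty range, not found
        return -1
    mid = (lo + hi) // 2
    if items[mid] == target:      # base case: target found
        return mid
    if items[mid] < target:       # recursive step: search right half
        return binary_search(items, target, mid + 1, hi)
    return binary_search(items, target, lo, mid)   # search left half
```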

Tail recursion:
"Tail recursion" is a special case of recursion, and it is a programming technique used to optimize a procedure. If a subroutine call inside a procedure produces a return value that is then immediately returned by the calling procedure, the subroutine call is said to be "tail-recursive". From the compiler-optimization point of view, every tail recursion can be rewritten in an iterative form. Since recursion may consume a large amount of stack memory, it is recommended to replace tail recursion with an iterative method to avoid stack overflow.
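A sketch of this rewrite (an illustrative example, not from the original text): a tail-recursive factorial with an accumulator, and its equivalent iterative form. Note that CPython does not perform tail-call optimization, so the rewrite must be done by hand:

```python
def factorial_tail(n, acc=1):
    """Tail-recursive factorial: the recursive call is the last action,
    and its value is returned immediately with no further computation."""
    if n <= 1:
        return acc
    return factorial_tail(n - 1, acc * n)   # tail call

def factorial_iter(n):
    """The same computation rewritten in iterative form; the loop
    reuses one frame instead of growing the call stack."""
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc
```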
In the following, we give some examples where a complex problem can be solved by D&C.
Some problems solved by divide-and-conquer:
Some well-known recursive problems:
