The core of competitive programming is about inventing efficient algorithms that solve well-defined computational problems. The design of algorithms requires problem-solving and mathematical skills. Often a solution to a problem is a combination of well-known methods and new insights.
The main difficulty in a simulation problem is not figuring out how to solve it conceptually, but writing code that follows the (usually complex) set of rules the problem statement lays out. The challenge is writing this code quickly and accurately.
It is called a simulation problem because your code is simulating whatever situation the problem describes (e.g. a card game in which the players have no choices).
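As a minimal sketch, consider a hypothetical game (not from any specific contest): two players flip cards round by round, and the higher card scores a point. The code does nothing clever; it just follows the stated rules step by step.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Hypothetical game: two players flip cards one by one;
// each round the higher card scores a point, ties score nothing.
int main() {
    int n;
    cin >> n;                       // number of rounds
    vector<int> a(n), b(n);
    for (int &x : a) cin >> x;      // player A's cards, in flip order
    for (int &x : b) cin >> x;      // player B's cards, in flip order

    int scoreA = 0, scoreB = 0;
    for (int i = 0; i < n; i++) {   // simulate every round by the rules
        if (a[i] > b[i]) scoreA++;
        else if (b[i] > a[i]) scoreB++;
    }
    cout << (scoreA > scoreB ? "A" : scoreB > scoreA ? "B" : "Tie") << "\n";
}
```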
Complete search is a general method that can be used to solve almost any algorithm problem. The idea is to generate all possible solutions to the problem using brute force, and then select the best solution or count the number of solutions, depending on the problem.
Complete search is a good technique if there is enough time to go through all the solutions, because the search is usually easy to implement and it always gives the correct answer. If complete search is too slow, other techniques, such as greedy algorithms or dynamic programming, may be needed.
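A minimal sketch of complete search, assuming a hypothetical task: among n items with given weights, find the largest subset sum that does not exceed a limit. Each bitmask represents one candidate subset, so this is only feasible for small n (roughly n ≤ 20).

```cpp
#include <bits/stdc++.h>
using namespace std;

// Brute force: try every subset of the items (2^n candidates) and
// keep the best subset sum that does not exceed the limit.
int main() {
    int n;
    long long limit;
    cin >> n >> limit;
    vector<long long> w(n);
    for (auto &x : w) cin >> x;

    long long best = 0;
    for (int mask = 0; mask < (1 << n); mask++) {  // each mask is one subset
        long long sum = 0;
        for (int i = 0; i < n; i++)
            if (mask & (1 << i)) sum += w[i];      // item i is in the subset
        if (sum <= limit) best = max(best, sum);
    }
    cout << best << "\n";
}
```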
In almost all constructive algorithm problems there are many possible solutions, and usually any output that satisfies the conditions is accepted.
Since there are multiple valid answers, you have to find a pattern that can be generated programmatically, and implement it. These problems are usually ad hoc, requiring little knowledge of data structures and algorithms. People with strong mathematical creativity especially shine in this category of problems.
The normal strategy for these problems is to manually solve a few example inputs, and try to generalize your thinking process.
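For example, a classic constructive task asks for a permutation of 1..n in which no two adjacent numbers differ by 1. Solving small inputs by hand suggests a pattern: list the even numbers, then the odd numbers. A sketch of that construction:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Constructive pattern: print all even numbers, then all odd numbers.
// Within each half consecutive values differ by 2, and at the boundary
// the difference is at least 3 once n >= 4, so no neighbours differ by 1.
int main() {
    int n;
    cin >> n;
    if (n == 1) { cout << 1 << "\n"; return 0; }
    if (n <= 3) { cout << "NO SOLUTION\n"; return 0; }  // impossible for n = 2, 3
    for (int i = 2; i <= n; i += 2) cout << i << " ";
    for (int i = 1; i <= n; i += 2) cout << i << " ";
    cout << "\n";
}
```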
A greedy algorithm constructs a solution to the problem by always making a choice that looks the best at the moment. A greedy algorithm never takes back its choices, but directly constructs the final solution. For this reason, greedy algorithms are usually very efficient.
The difficulty in designing greedy algorithms is to find a greedy strategy that always produces an optimal solution to the problem. The locally optimal choices in a greedy algorithm should also be globally optimal. It is often difficult to argue that a greedy algorithm works.
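A sketch of a greedy algorithm for the coin problem with euro denominations, where always taking the largest coin that still fits happens to be optimal; the in-code comment notes a coin system where the same strategy fails.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Greedy coin counting: always take the largest coin that still fits.
// This is optimal for the euro coin system, but not for every system:
// with coins {1, 3, 4} and n = 6, greedy gives 4+1+1 instead of 3+3.
int main() {
    const vector<int> coins = {200, 100, 50, 20, 10, 5, 2, 1};
    int n;
    cin >> n;                 // sum of money to form
    int count = 0;
    for (int c : coins)
        while (n >= c) { n -= c; count++; }
    cout << count << "\n";
}
```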
Dynamic programming is a technique that combines the correctness of complete search and the efficiency of greedy algorithms. Dynamic programming can be applied if the problem can be divided into overlapping subproblems that can be solved independently.
There are two uses for dynamic programming: finding an optimal solution (a solution that is as large or as small as possible) and counting the total number of possible solutions.
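A sketch illustrating both uses on the coin problem with the example denominations {1, 3, 4}: the same table-filling loop computes the minimum number of coins (optimal solution) and the number of ordered ways to form the sum (counting).

```cpp
#include <bits/stdc++.h>
using namespace std;

// Dynamic programming over sums 0..n with coins {1, 3, 4}:
//   minCoins[x] = fewest coins that form sum x      (optimal solution)
//   ways[x]     = number of ordered ways to form x  (counting)
int main() {
    const vector<int> coins = {1, 3, 4};
    int n;
    cin >> n;
    vector<int> minCoins(n + 1, INT_MAX);
    vector<long long> ways(n + 1, 0);
    minCoins[0] = 0;
    ways[0] = 1;                        // one way to form 0: take nothing
    for (int x = 1; x <= n; x++)
        for (int c : coins)
            if (x - c >= 0) {
                if (minCoins[x - c] != INT_MAX)
                    minCoins[x] = min(minCoins[x], minCoins[x - c] + 1);
                ways[x] += ways[x - c]; // last coin taken is c
            }
    cout << minCoins[n] << " " << ways[n] << "\n";
}
```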
A data structure is a way to store data in the memory of a computer. It is important to choose an appropriate data structure for a problem, because each data structure has its own advantages and disadvantages. The crucial question is: which operations are efficient in the chosen data structure?
It is a good idea to use the standard library whenever possible, because it will save a lot of time. All basic data structures are available in the C++ STL, such as std::stack, std::queue, std::deque (double-ended queue), std::set, std::map, and std::priority_queue.
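A short sketch of a few of these containers and the operations contests typically rely on:

```cpp
#include <bits/stdc++.h>
using namespace std;

// A few STL containers and their typical contest operations.
int main() {
    set<int> s;                       // ordered set: O(log n) insert/find
    s.insert(5); s.insert(2);
    cout << *s.begin() << "\n";       // smallest element: 2

    map<string, int> freq;            // ordered key-value map
    freq["apple"]++;                  // missing keys start at 0
    cout << freq["apple"] << "\n";    // 1

    priority_queue<int> pq;           // max-heap by default
    pq.push(3); pq.push(7); pq.push(1);
    cout << pq.top() << "\n";         // largest element: 7

    deque<int> d;                     // O(1) push/pop at both ends
    d.push_front(1); d.push_back(2);
    cout << d.front() << " " << d.back() << "\n";
}
```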
There are many more advanced data structures used in contests, such as segment trees, Fenwick trees (binary indexed trees), disjoint set union, and tries.
Many programming problems can be solved by modeling the problem as a graph problem and using an appropriate graph algorithm. A typical example of a graph is a network of roads and cities in a country. Sometimes, though, the graph is hidden in the problem and it may be difficult to detect it.
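A sketch of one standard graph algorithm, breadth-first search, computing the smallest number of roads from city 1 to every other city in an unweighted road network (the input format here is just an assumption for illustration):

```cpp
#include <bits/stdc++.h>
using namespace std;

// BFS over an adjacency list: shortest number of roads from city 1
// to every other city (unweighted graph).
int main() {
    int n, m;
    cin >> n >> m;                       // cities and roads
    vector<vector<int>> adj(n + 1);
    for (int i = 0; i < m; i++) {
        int a, b;
        cin >> a >> b;
        adj[a].push_back(b);
        adj[b].push_back(a);             // roads are two-way
    }
    vector<int> dist(n + 1, -1);         // -1 marks "not visited yet"
    queue<int> q;
    dist[1] = 0;
    q.push(1);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (int v : adj[u])
            if (dist[v] == -1) {         // first visit = shortest distance
                dist[v] = dist[u] + 1;
                q.push(v);
            }
    }
    for (int i = 1; i <= n; i++) cout << dist[i] << " ";
    cout << "\n";
}
```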
A majority of the competitive programming problems you'll encounter involve some mathematical logic or trick.
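One such staple trick is binary exponentiation, which computes a^b mod m in O(log b) multiplications by repeated squaring; a minimal sketch:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Binary exponentiation: computes a^b mod m in O(log b) steps
// by squaring the base and consuming one bit of b at a time.
long long modpow(long long a, long long b, long long m) {
    long long result = 1;
    a %= m;
    while (b > 0) {
        if (b & 1) result = result * a % m;  // this bit of b is set
        a = a * a % m;                       // square the base
        b >>= 1;
    }
    return result;
}

int main() {
    cout << modpow(2, 10, 1000000007) << "\n";  // prints 1024
}
```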
In programming, an n-bit integer is internally stored as a binary number consisting of n bits. For example, the C++ type int is a 32-bit type, which means that every int number consists of 32 bits.
One of the most useful and effective low-level optimisations is bit manipulation: using the bits of an integer to represent a set. Not only can it produce an order-of-magnitude improvement in both speed and memory usage, it often simplifies code at the same time.
Many algorithms can be optimized using bit operations. Such optimizations do not change the time complexity of the algorithm, but they may have a large impact on the actual running time of the code.
- AND (x & y): Produces a number that has one bits in positions where both x and y have one bits.
- OR (x | y): Produces a number that has one bits in positions where at least one of x and y have one bits.
- XOR (x ^ y): Produces a number that has one bits in positions where exactly one of x and y have one bits.
- NOT (~x): Produces a number where all the bits of x have been inverted.
- Left shift (x << k): Appends k zero bits to the number.
- Right shift (x >> k): Removes the k last bits from the number.
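A short sketch demonstrating these operations, including the common trick of using an int as a set of elements {0, ..., 31} (__builtin_popcount is the GCC/Clang builtin for counting one bits):

```cpp
#include <bits/stdc++.h>
using namespace std;

// The bit operations above, plus an int used as a set of {0,...,31}.
int main() {
    int x = 22, y = 26;              // 10110 and 11010 in binary
    cout << (x & y) << "\n";         // 18 (10010): bits set in both
    cout << (x | y) << "\n";         // 30 (11110): bits set in either
    cout << (x ^ y) << "\n";         // 12 (01100): bits set in exactly one
    cout << (1 << 4) << "\n";        // 16: shifting appends zero bits

    int s = 0;                       // empty set
    s |= (1 << 3);                   // add element 3
    s |= (1 << 5);                   // add element 5
    if (s & (1 << 3)) cout << "3 is in the set\n";
    s &= ~(1 << 3);                  // remove element 3
    cout << __builtin_popcount(s) << "\n";  // set size: 1
}
```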