# algoritmo – Question about constant-time sorting algorithms

## Question:

I need to solve an exercise with the following statement:

As input we have a vector of n elements, where each element has a value between 0 and k. In addition, we have two integer values a and b. We must answer how many elements of the input vector have a value in the interval [a, b]. Design an algorithm with O(1) complexity for this problem.

I can't imagine any algorithm with constant complexity that solves this. The fastest approaches I know (linear-time scans) don't qualify, because they have O(n) complexity. Is it possible to solve this problem in O(1)? Or is there an error in the wording?

## Answer:

I cannot give you a rigorous mathematical proof, but I can safely say that what is asked is impossible, except in the trivial cases:

• `k < 0` (only the empty vector is a valid input; the return is zero)
• `a > k` (return is always zero)
• `b < 0` (return is always zero)
• `b < a` (return is always zero)
• `a = 0` and `b = k` (the return is always `n`, assuming the size of the vector can be determined in constant time)
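These trivial cases can indeed be answered in constant time. A minimal sketch in Python (the function name `trivial_answer` and the `None` convention for "not trivial" are mine, for illustration only):

```python
def trivial_answer(v, k, a, b):
    """Return the count in O(1) for the trivial cases, or None otherwise."""
    if k < 0:                 # only the empty vector is a valid input
        return 0
    if a > k or b < 0 or b < a:   # the interval misses [0, k] entirely
        return 0
    if a == 0 and b == k:         # the interval covers [0, k] entirely
        return len(v)             # assumes len() runs in constant time
    return None               # non-trivial case: no O(1) answer exists
```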

In any non-trivial case, given any vector `V`, it is possible to produce two distinct vectors:

• `V + [x]` , where `x ∈ [a,b]`
• `V + [y]` , where `y ∉ [a,b]`

which will produce different results if evaluated by this function.
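To make this concrete, here is a small Python sketch of the construction (the helper `count_in_range` and the specific numbers are mine, chosen only for illustration):

```python
def count_in_range(v, a, b):
    """The function the exercise asks for, implemented naively in O(n)."""
    return sum(1 for e in v if a <= e <= b)

a, b, k = 2, 5, 9
V = [0, 3, 7]
x, y = 4, 8  # x is inside [a, b], y is outside [a, b]

# The two extended vectors disagree, so any correct algorithm must
# distinguish them -- which requires examining the appended element.
assert count_in_range(V + [x], a, b) == count_in_range(V, a, b) + 1
assert count_in_range(V + [y], a, b) == count_in_range(V, a, b)
```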

Suppose there is an algorithm with `O(1)` complexity. Then there are constants `n0` and `K` such that, for every input of size greater than or equal to `n0`, the running time is at most `K`. Let `V` be a vector whose running time is the longest (i.e., `K`), and whose return value is `R`. How would this algorithm evaluate `V + [_]`?

• If it does not perform any operations beyond the ones it already performed to evaluate `V`, then it cannot deterministically say whether the result is `R` or `R + 1`, since the additional element could be either `x` or `y`;
• If it performs one or more additional operations, then its running time is greater than `K` (assuming every operation has a running time greater than zero). That is, the algorithm does not have `O(1)` complexity, contradicting the assumption.

By the way, this also means that, contrary to the assumption, vector `V` was not in fact the one with the "longest running time of all". In fact, such a vector does not exist, since the set of all possible vectors with elements from `0` to `k` is infinite (though countable). Any vector you choose as "the slowest" can, as demonstrated, give rise to a vector whose running time is at least one elementary operation greater.

The best algorithm that solves this problem has `O(n)` complexity. It is inevitable that every element of the vector be visited at least once, whether to compare it, include it in a count, or use it in an arithmetic operation. Unless there's some "gotcha" I'm not seeing, this exercise has no solution…
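For completeness, that `O(n)` solution is a single pass over the vector; a minimal Python sketch (the name `count_in_range` is mine):

```python
def count_in_range(v, a, b):
    """Count how many elements of v lie in the closed interval [a, b].

    Visits each element exactly once: O(n) time, O(1) extra space.
    """
    count = 0
    for e in v:
        if a <= e <= b:
            count += 1
    return count
```

Each element is inspected exactly once, which matches the lower bound: no correct algorithm can afford to skip any element.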
