Heap Ltd.

Evgeniy and Alex examine various existing sorting and searching algorithms, then present their "limited heap," which arguably provides the best tradeoff between speed and memory utilization.


June 01, 2003
URL:http://drdobbs.com/architecture-and-design/heap-ltd/184405363

Jun03: Algorithm Alley

Evgeniy and Alex are Ph.D. students in Computer Science at the Technion-Israel Institute of Technology. They can be contacted at [email protected] and gsasha@cs.technion.ac.il, respectively.



When you perform a search on a web site, you usually want to see the most relevant results first. To do this, the search engine assigns each search result a relevance score and ultimately returns the 10 or 20 highest-scoring results.

Selecting several best elements is obviously not limited to the Web. When monitoring system health, for instance, you might want to find the 10 biggest files on a given disk. Or in the realm of genetic algorithms, during each phase, you might choose a certain number of the fittest organisms to form the next generation.

While finding a single best element in a sequence is straightforward, selecting k best elements is a tricky business. There are several solutions, each with different performance characteristics. The trade-off between time and space is not trivial, and the most practical algorithm must be carefully selected.

What is the best that you can expect from a good solution? On the one hand, each element must be examined at least once, so the time complexity can be no lower than O(N). On the other hand, the resulting k best elements need to be stored; hence, the memory complexity can be no better than O(k). The catch, as it turns out, is that you cannot reach both of these lower bounds at the same time.

In short, you often need to select a number of best elements from a sequence of values. This problem is not new, and several algorithms have been developed to address it, each having different time complexity characteristics. In this article, we'll examine the various existing algorithms, and then present one called "limited heap," which arguably provides the best trade-off between speed and memory utilization.

Solution #1: Sorting

A naive way to select the k largest elements is simply to store the entire sequence in memory, sort it in decreasing order, then return the first k elements from the sorted sequence. Coding this technique is straightforward since sorting algorithms are part of the Standard Library in almost all languages.

Time complexity, however, is another story. O(N*log(N)) is mediocre. In addition, storing the entire sequence consumes O(N) additional memory. When N is very large—especially when the elements are produced on the fly and need not be permanently stored otherwise—this technique may impose an unreasonable burden on memory usage.

Solution #2: Heapsort

A better approach is to use a variant of heapsort. The original heapsort algorithm (see Introduction to Algorithms by T.H. Cormen, C.E. Leiserson, R.L. Rivest, and C. Stein; MIT Press, 2001) collects all the elements in an array, rearranges the array as a heap (this can be accomplished in O(N) in the worst case), and then extracts the largest element from the heap N times (this amounts to O(N*log(N)) since the heap property needs to be restored after each extraction). In our case, we only need to perform k extractions to obtain the k largest elements; hence, the overall time complexity is O(N+k*log(N)).

Still, this solution suffers from the same drawback as the previous one, as it requires O(N) additional memory to operate the heap that initially contains the entire sequence. (For background information on heapsort, see the accompanying text box entitled "The Heap Data Structure.")
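This heapsort variant maps directly onto the Standard Library heap functions; the sketch below (the function name is ours, not from the article) heapifies the whole sequence in O(N), then pops the maximum k times:

```cpp
#include <algorithm>
#include <vector>

// Sketch of solution #2: heapify all N elements (O(N)), then extract
// the maximum k times (O(k*log(N))). Note that the whole sequence is
// copied into the function, so O(N) additional memory is still needed.
std::vector<int> k_largest_by_heap(std::vector<int> seq, std::size_t k) {
    std::make_heap(seq.begin(), seq.end());    // max-heap over all N elements
    std::vector<int> result;
    for (std::size_t i = 0; i < k && !seq.empty(); ++i) {
        std::pop_heap(seq.begin(), seq.end()); // moves the maximum to the back
        result.push_back(seq.back());
        seq.pop_back();
    }
    return result;                             // k largest, in decreasing order
}
```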

Solution #3: Limited Heap

When the number of elements in the sequence (N) is huge, you'd rather not store them all merely to select the k largest ones. You can, however, minimize the memory requirements, using only O(k) additional memory to store the k elements requested.

To do this, use a limited heap, which cannot grow beyond k elements. You sift the entire sequence through it, while at any given moment, the heap stores the k largest elements seen so far. New elements are only inserted if they are larger than the current smallest element in the heap, in which case, they replace the latter, and the heap size never grows beyond k.

Ideally, you would like to sift all the sequence elements through the heap one by one, removing the worst element whenever the heap size becomes larger than k. However, while the heap provides easy access to its best element, removing the worst element is more complex and can require as many as O(k) operations. To circumvent this problem, observe that during the selection process you do not need access to the best element—only to the worst. Thus, you reverse the heap order, so that the heap root always contains the current smallest element. In fact, you maintain a "min-heap," even though what you really want is to select the k largest elements.

At the steady state, when the heap contains k items, determining the value of the smallest one takes O(1) (due to min-heap ordering). Whenever applicable, replacing the smallest element takes O(log(k)); thus, the overall worst-case time complexity is O(N*log(k)). Since the heap is limited, it only keeps as many elements as are eventually required; hence, the additional memory complexity is only O(k)—a substantial savings when k<<N!

Implementing this algorithm is straightforward. Reversing the order of elements in the heap means you need only override the comparison operation it uses. In C++, the comparison predicate is a template parameter to the heap operations, so the algorithm can be readily implemented using the heap manipulation functions from the C++ Standard Library.
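The core sifting step can be sketched directly with those Standard Library functions; this minimal version (our own illustrative helper, stripped of the payload machinery) keeps `heap` as a min-heap of at most k elements via std::greater:

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Sift one value through a limited heap of capacity k. The vector is
// maintained as a min-heap, so heap.front() is the smallest element kept.
void sift(std::vector<int>& heap, std::size_t k, int x) {
    if (heap.size() < k) {                     // heap not yet full: just insert
        heap.push_back(x);
        std::push_heap(heap.begin(), heap.end(), std::greater<int>());
    } else if (x > heap.front()) {             // larger than current minimum
        std::pop_heap(heap.begin(), heap.end(), std::greater<int>());
        heap.back() = x;                       // overwrite the popped minimum
        std::push_heap(heap.begin(), heap.end(), std::greater<int>());
    }                                          // otherwise: discard x
}
```

After sifting the entire sequence, the vector holds the k largest elements seen, in heap order.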

Listing One is C++ code that implements the limited heap (template class KMaxValues). The class inherits from std::vector and capitalizes on its container functionality. To make the template more generic, elements may carry a payload in addition to their values (eliminating the payload when it is not needed should be straightforward). The complete source code that implements the limited heap is available electronically; see "Resource Center," page 5.

The limited heap interface is provided through a constructor, a push_back function that sifts new elements through the structure, and (constant) iterators that allow access to individual elements. All the rest of the original vector functions are declared private, so that their inadvertent use does not invalidate the heap property. Most of these functions, such as erase and pop_back, are actually inapplicable to the heap structure. The only notable exception is operator[], whose nonconstant version may render the heap inconsistent; we block it altogether as it is impossible to selectively allow only the constant version.

Whenever the push_back function adds a new item to the heap, it first uses std::pop_heap to pop off the smallest element. The way pop_heap is implemented, it moves the first (that is, smallest) heap element to the last position (namely, vector[_n-1]), then restores the heap property. Subsequently, the new value is injected into this last position; using vector::push_back would both unnecessarily grow the vector and include anew the element that has just been removed by pop_heap. Ultimately, std::push_heap is invoked to include the newly added element into the heap.

For various combinations of k and N, solution #2 (heapsort) may be asymptotically more time efficient than #3 (limited heap), but the modest memory requirements of the latter can hardly be beaten. (Mathematically inclined readers can verify, by comparing the complexities, that the former solution is preferable time-wise when k/(log k - 1) < N/log N, or roughly when k<<N.) Also, in both solutions, elements are extracted from the heap in sorted order. This might be a benefit (if sorted order is actually desired) or a drawback (if stable operation is necessary; reminiscent of stable sort, stable operation in this context simply preserves the original relative order of equivalent elements). In the latter case, the payload fields may be utilized to track the original element ordering.

Other Options

At this point, you are probably wondering: Is the limited heap algorithm optimal? Is it the best we can hope for? Can you really achieve the O(N) lower bound imposed by scanning the input sequence? It turns out that algorithms that do achieve it exist, although, as always, there is no such thing as a free lunch.

There are so-called selection algorithms that find the kth largest (single) element in a sequence. (In computer science, the closely related kth smallest element is known as the kth order statistic of the sequence.) Given such an element, the sequence can be trivially partitioned (in a single O(N) pass) so that all the k largest elements are grouped together. The problem, however, is that algorithms that work reasonably well on average might require as much as O(N^2) time in the worst case, while algorithms with a guaranteed O(N) selection time are extremely slow in practice. In either case, O(N) additional memory is required to store the entire sequence, a considerable drawback when working on the fly.

Conclusion

Time complexity is important in choosing an algorithm. However, in real life, you should not blindly select the algorithm that advertises the best complexity. Other concerns, such as memory requirements, can often make a seemingly inferior algorithm preferable in practice.

DDJ

Listing One

#include <algorithm>
#include <utility>
#include <vector>
using namespace std;

template<class _Key, class _T>
class KMaxValues : public vector<pair<_Key, _T> > {
   typedef vector<pair<_Key, _T> > Base;
public:
   typedef typename Base::value_type value_type;
private:
   size_t _n; /* maximum size allowed */
   /* Block access to extraneous functions inherited from std::vector,
      lest their invocation invalidate the heap property. */
   using Base::assign; using Base::erase; using Base::insert;
   using Base::pop_back; using Base::resize; using Base::swap;
   using Base::operator[]; using typename Base::iterator;

   /* Reversed comparison: orders the heap as a min-heap on the key. */
   struct greater {
      bool operator()(const value_type& x, const value_type& y) const
         { return (x.first > y.first); }
   };
public:
   explicit KMaxValues(size_t maxSize = 1) : _n(maxSize)
      { this->reserve(maxSize); /* preallocate storage */ }

   void push_back(const value_type& x) {
      if (this->size() < _n) { /* heap not full yet: simply insert */
         Base::push_back(x);
         push_heap(this->begin(), this->end(), greater());
      } else { /* maximum size reached */
         if (x.first < this->begin()->first)
            return; /* no need to add the element at all */
         /* delete the smallest element, then add the new one */
         pop_heap(this->begin(), this->end(), greater());
         (*this)[_n-1] = x; /* insert the new element into last position */
         push_heap(this->begin(), this->end(), greater()); /* restore heap property */
      }
   }
};


Figure 1: Tree property.


Figure 2: Restoring the heap property. Node 5 must be moved since it is larger than both of its sons. It switches places with its smallest son, in this case, the left one. After the swap, node 5 is smaller than both of its sons (6 and 8), and the heap property is completely restored.


The Heap Data Structure

The heap is a simple but useful data structure that lets you insert a sequence of elements in an arbitrary order, then retrieve them one by one in sorted order. There are naturally other data structures that can perform the same task, but the heap is arguably the simplest, most efficient, and easiest to implement.

Basically, a heap is a binary tree with two special properties:

- The tree is complete: all of its levels are full, except possibly the last one, which is filled from left to right. This shape lets the tree be embedded in an array, with the sons of the node at index i residing at indices 2i+1 and 2i+2.
- Every node is smaller than (or equal to) both of its sons; consequently, the smallest element always resides at the root. See Figure 1.

When a heap is embedded in an array, its last element is simply the last occupied element of the host array. This element can be safely removed from the heap without disturbing any of its properties. This tactic is what lets you efficiently extract the smallest element from the heap: You remove the last element from the heap, inject it in place of the smallest element (at the root), and bubble it down, restoring the heap property at every node it encounters en route; see Figure 2. The time complexity of this operation is bounded by the tree depth, which is O(log n).

To insert a new element into the heap, you just insert it after the last array element and bubble it up similarly to restore the heap property. The complexity of this operation is also O(log n).

Finally, an unsorted array can be converted into a heap ("heapified") by performing the bubble operations in a particular sequence. Interestingly, while the worst-case complexity of any given bubble operation is O(log n), all the operations required for heapification together sum up to only O(n)!

And the best news is that you don't even need to implement any of the heap manipulation functions, as they constitute an integral part of the C++ Standard Library. The three heap operations described here are realized by the STL functions std::pop_heap, std::push_heap, and std::make_heap, defined in the header file <algorithm>.

—E.G. and A.G.
