
Eidgenössische

Technische Hochschule

Zürich

Departement Informatik

Institut für

Theoretische Informatik

Renato Pajarola

Peter Widmayer

Spatial Queries on

Compressed Raster

Images: How to get the

Best of Both Worlds

October 1995

240


ETH Zürich

Departement Informatik

Institut für Theoretische Informatik

Prof. Dr. P. Widmayer

Renato Pajarola

Computer Science Department

Federal Institute of Technology

CH-8092 Zürich, Switzerland

e-mail: pajarolas@inf.ethz.ch

Peter Widmayer

Computer Science Department

Federal Institute of Technology

CH-8092 Zürich, Switzerland

e-mail: widmayer@inf.ethz.ch

This report is also available via anonymous ftp from ftp.inf.ethz.ch

as doc/tech-reports/1995/240.ps.

© 1995 Departement Informatik, ETH Zürich


Spatial Queries on Compressed Raster Images:

How to get the Best of Both Worlds

Renato Pajarola Peter Widmayer

Abstract

The maintenance of large raster images, such as satellite pictures in geographic information systems, is still a major performance bottleneck. For reasons of storage space, images in a collection are maintained in compressed form. An operation on an image can be performed by first decompressing the compressed version. This, however, can be a major source of inefficiency, because the entire image needs to be accessed even though a small part of it might suffice to answer the query.

We propose to perform spatial queries directly on the compressed version of the image. With current compression algorithms, this cannot be done efficiently. We therefore propose a new compression technique that allows for the subsequent use of a spatial data structure to guide a search. In response to a range query, our algorithm delivers a compressed partial image; this response may be sent over a network as it is, or it may be decompressed for further processing. We have implemented the new algorithm, and we have compared it with standard and sophisticated compression algorithms (that do not support spatial operations). Our performance evaluation with satellite images shows that the new algorithm not only supports spatial queries, but is also competitive in terms of the compression that it achieves.

1 Introduction

Geographic Information Systems (GIS) are penetrating a growing number of application domains, including cartography, urban planning, risk assessment, pollution control, and transport management systems. Since GIS users ask for complex operations on GIS data, efficiency is one of the main bottlenecks of today's GIS. In particular, we need data structures and algorithms for maintaining geometric objects on secondary storage in such a way that queries referring to the location of objects in space can be processed efficiently. Over the past decade, this problem has been attacked with significant effort, for geometric objects ranging from aligned rectangles to polygonal partitions, and for geometric operations that focus on range queries as a typical representative of geometric queries. From a practical point of view, we feel it has turned out that using one of the many efficient spatial access structures is mandatory, but that it does not matter much which one is chosen.

The efficiency problem, however, is far more severe (in absolute terms) for raster data, because here the data volume is far larger than for vector data. A single satellite image, for instance, with its multiple data channels and high resolution, is easily as large as several hundred megabytes. Unfortunately, spatial access to raster images has received relatively little attention. On the one hand, efficient access is possible by storing raster data in one of several data structures, for instance quad trees [12, 13], or by ordering the raster data according to expected access patterns [14]. On the other hand, efficient access alone is not what we need: due to the enormous data volume, we would like to compress the data as much as possible.


Over the years, quite a few data compression techniques have been developed that aim at eliminating redundancy from raster images [10]. For archiving a raster image or for sending an image over a network, compression is particularly useful. It has, however, an adverse effect on query processing: decompressing an image requires that the whole image be accessed on external storage; in addition, the decompression algorithm may take a lot of computation time. Whenever a query does not refer to the full image, some of this effort seems to be wasted. As raster images become larger and larger, partly because satellite snapshots can be glued together into one image ever more easily, partly because the resolution increases, it becomes less likely that even a significant part of an image is needed to answer a query. It seems crucial that in this setting, access to a small part of an image be supported on the compressed version of the image.

2 The Problem

In this paper, we consider the problem of answering a range query on a compressed version of a raster image, without decompressing the entire image. We restrict ourselves to lossless compression; our justification is theoretical interest as well as practical demands, such as the high cost of acquiring satellite images. The range query serves as a particular, but prototypical, spatial query. Ideally, we would like to find a raster data structure and compression technique with the following properties:

1. the physical clusters of the pixels of the raster image reflect the pixels' locations in space; this implies that pixels in the same storage block are close in space;

2. a range query can be answered by accessing just those storage blocks that contain the compressed version of the data in the range;

3. the compression ratio is as good as that of any other well-known image compression algorithm.

Property 1 is interpreted to imply that the geometric shape of the union of all pixels in a block (or some container for the pixels, such as the bounding box) is compact and good for efficient spatial indexing; see, for instance, the criteria for efficiency of spatial access structures based on aligned rectangles ([2, 11, 1]).

These properties together call for a solution that combines the characteristics of the best available spatial data structure with those of the best available compression algorithm. The problem now lies in the fact that the compression technique must lend itself to spatial clustering. Any compression technique, for instance, that makes use of the statistics of the whole image will be unable to fulfill requirement 2. Unfortunately, this is just what the best compression techniques do. On the other hand, we can hope that by exploiting local (w.r.t. a single storage block) statistics for compression, the compression ratio might be quite good.

We propose a solution to the compressed raster image handling problem that satisfies all of the above requirements. In Section 3, we discuss the basic issues in the design of a suitable compression method. We propose a compression algorithm in Section 4. Section 5 sketches the spatial access to the compressed image. In Section 6, we present the experimental results that we gained from an implementation of various compression algorithms and experiments with satellite images. Section 7 concludes the paper.



3 Basics of Compressed Raster Image Handling

To combine compression with spatial access, two clustering strategies seem applicable at first sight: a partition of the data space based on the original pixels, or a clustering based on the compressed data. The former may use any partition of the data space into rectangular regions, where each region serves as the two-dimensional key to the data block(s) holding the compressed image data of the region. The second strategy builds its regions according to the amount of compressed data that fill one or more entire data blocks. However, any fixed partition of the space of original pixels leads to regions whose compressed data may not fill entire data blocks, because the compression ratio is not uniform all over the image (see Figure 1). Therefore, even in a case in which the compressed data would fit into one block, two blocks may need to be accessed. Some (but not all) of this inefficiency can be avoided by scanning the image along an appropriate Space Filling Curve (SFC) [4, 8] and creating index regions (bounding boxes of data blocks) which cover the area of the compressed data in one data block.

Figure 1: Logical Clustering. Regions of the image are mapped to data blocks, which are in turn stored in disk blocks.
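To make the second clustering strategy concrete, the following Python sketch (our own illustration, not part of the report) packs pixels along a space filling curve into fixed-size data blocks and records the bounding box of each block as its index region; the function encoded_size is a hypothetical hook for the per-pixel compressed size.

    from typing import Callable, Iterable, List, Tuple

    Pixel = Tuple[int, int]            # (x, y) position in the image
    BBox = Tuple[int, int, int, int]   # (xmin, ymin, xmax, ymax) index region

    def bounding_box(pixels: List[Pixel]) -> BBox:
        xs = [p[0] for p in pixels]
        ys = [p[1] for p in pixels]
        return (min(xs), min(ys), max(xs), max(ys))

    def pack_blocks(order: Iterable[Pixel],
                    encoded_size: Callable[[Pixel], int],
                    block_capacity: int) -> List[Tuple[BBox, List[Pixel]]]:
        """Greedily fill data blocks along a space filling curve.  `order` yields
        the pixel positions in SFC order, `encoded_size(p)` is a hypothetical hook
        returning the number of compressed bits pixel p contributes, and
        `block_capacity` is the block size in bits.  Each finished block is keyed
        by the bounding box of the pixels it contains."""
        blocks: List[Tuple[BBox, List[Pixel]]] = []
        current: List[Pixel] = []
        used = 0
        for p in order:
            cost = encoded_size(p)
            if current and used + cost > block_capacity:
                blocks.append((bounding_box(current), current))
                current, used = [], 0
            current.append(p)
            used += cost
        if current:
            blocks.append((bounding_box(current), current))
        return blocks

Note that, in line with the restrictions discussed next, no compressed data continue beyond a block boundary and no statistics are carried from one block to the next.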

To permit efficient clustering, spatial access and lossless compression at the same time, an algorithm and data structure have to obey several restrictions. First, there should not be any free space in a data block, because this would affect the space usage and therefore the compression ratio negatively. Second, to keep the number of block accesses for (range) queries low, every index region should map smoothly to a data block, and therefore the compressed data should not continue beyond data block boundaries. Third, the compression algorithm is not allowed to take the statistics of pixels encountered in other data blocks into account when coding the elements of the current block's region. The reason is that this would lead to extra block accesses for reconstructing this part of the image. Therefore, we restrict the compression scheme to local operations and disallow the use of statistics or history of other image parts.

Unfortunately, none of the known compression algorithms fulfills our requirements, neither the highly sophisticated nor the more standard ones. The newly proposed (highly sophisticated) standard CALIC [17] achieves a high compression ratio based on complex prediction functions and extensive context statistics; for our problem, this collides with the requirement of history-less operation. Another sophisticated method, FELICS [6], is also based on image statistics, and the current standard JPEG [15] has the drawbacks that the whole image must be prescanned to achieve good compression and that a certain amount of statistics must be maintained and stored with the compressed data. Furthermore, the prediction functions of CALIC, and, more or less, also those of JPEG and FELICS, depend on the knowledge of particular neighboring pixels; this imposes restrictions on the traversal order of the pixels in the image and therefore makes the construction of well-shaped pixel regions impossible.



4 A Spatial Compression Algorithm

As discussed in [5], lossless image compression algorithms usually view a pixel value as a random variable. An algorithm then typically consists of four processing steps: first, select the location of the next pixel to be encoded (pixel selection); second, compute a prediction of the value of the selected pixel from the history of already encoded pixels (pixel value prediction); third, compute a prediction for the variance of that random variable (variance estimation); fourth, encode the difference between the pixel value prediction and the actual pixel value, making use of the variance estimate (prediction error coding). The requirement of dynamic clustering based on the compressed data, as described in Section 3, imposes some restrictions on the pixel sequence. The scan-line sequence, as used in most compression algorithms, leads to poor spatial selectivity of the data blocks compared to more sophisticated space filling curves. Figure 2 shows the resulting data regions for three different pixel sequences with equal block capacities of 17 pixels; the first four resulting data regions (A, B, C and D) are shown in different colors. The Z-curve (Figure 2a) can produce heavily overlapping regions (e.g. A and B), and both the Z-curve and the scan-line sequence (Figure 2c) may even produce data regions with multiple connected components. For the pixel sequence we propose to use the Hilbert curve, since it generates quite compact data regions and also supports the prediction step quite well.

Figure 2: Clustering Sequences. The first four data regions (blocks A, B, C and D) for a) the Z-curve, b) the Hilbert curve, and c) the scan-line sequence, each with a block capacity of 17 pixels.
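For concreteness, a minimal Python sketch of the Hilbert ordering is given below (our own illustration; the function name and the 8 x 8 example are ours). It uses the classic bitwise conversion from a pixel position (x, y) to its index along the Hilbert curve of an n x n grid, n a power of two.

    def hilbert_index(n: int, x: int, y: int) -> int:
        """Index of pixel (x, y) along the Hilbert curve of an n x n grid,
        n a power of two.  Classic bitwise conversion; it only fixes the
        order in which the pixels are visited."""
        d = 0
        s = n // 2
        while s > 0:
            rx = 1 if (x & s) else 0
            ry = 1 if (y & s) else 0
            d += s * s * ((3 * rx) ^ ry)
            if ry == 0:                      # rotate/flip the quadrant
                if rx == 1:
                    x, y = n - 1 - x, n - 1 - y
                x, y = y, x
            s //= 2
        return d

    # coding sequence of an 8 x 8 image: sort the pixel positions by Hilbert index
    order = sorted(((x, y) for x in range(8) for y in range(8)),
                   key=lambda p: hilbert_index(8, p[0], p[1]))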

Prediction is usually based on neighboring pixels in space. However, the pixels used as predictors for the current value have to be encoded before the current one, because the decompression process must use the same predictor values. If the relative spatial positions of the neighbors always have the same relative index in the sequence, as is the case for scan-lines, sophisticated combinations can be used to derive a close prediction (CALIC, e.g., makes use of this fact). Space filling curves like the Z-curve or the Hilbert curve do not behave this way, but they are useful for effective clustering. We therefore propose to calculate the prediction from a specific set of relative indices (in the pixel sequence) of previously encountered pixels. However, the relative spatial positions of these pixels are not the same throughout the sequence; furthermore, their influence in terms of Euclidean distance differs from one space filling curve to another. Let us call the set of pixels used to predict the current pixel's value the context.

We propose to choose the context C_v of a pixel v to be the ordered collection of the last k pixels visited before coding the current one. We now face the problem of computing a prediction that is good in all the geometrically different situations that the context describes. To get a close prediction v̂, we propose the weighted sum of equation 1 over the context C_v; this incorporates neighboring pixels according to the space filling curve (SFC), but does not rely on exact relative locations. The weights w_j resemble an exponential function, decreasing with growing distance on the SFC; they depend on the size k of the context and on the index j of pixel v_j in the sequence. Thus the distribution of the weights w_j takes into account that pixels which are close in space and have a small backward distance on the SFC have a bigger influence on the prediction.

w_1 = 1, \qquad w_j = 3 \left\lfloor \frac{w_{j-1}}{2} \right\rfloor + 2

\hat{v} = \Bigl( \sum_{j=1}^{k} w_j \Bigr)^{-1} \sum_{v_j \in C_v} w_j \, v_j \qquad (1)
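A small Python sketch of this prediction step may help (function names are ours). It assumes that the context is ordered from the oldest to the most recently visited pixel, so that the most recently visited pixel receives the largest weight.

    from typing import List, Sequence

    def weights(k: int) -> List[int]:
        """Weights of equation 1: w_1 = 1, w_j = 3 * floor(w_{j-1} / 2) + 2."""
        w = [1]
        for _ in range(1, k):
            w.append(3 * (w[-1] // 2) + 2)
        return w                     # e.g. k = 5 gives [1, 2, 5, 8, 14]

    def predict(context: Sequence[float]) -> float:
        """Weighted prediction of equation 1.  The context is assumed to be
        ordered from the oldest to the most recently visited pixel, so the
        most recent pixel receives the largest weight."""
        w = weights(len(context))
        return sum(wj * vj for wj, vj in zip(w, context)) / sum(w)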

To achieve good data reduction, an accurate probability distribution has to be selected for the current prediction error. The Laplace distribution of equation 2 is normally used as a basis for statistical coding of differential signals [10, page 225]. In this case, the best compression ratio for a value x is achieved with the Laplace distribution with variance x². This value, however, cannot be used for compression, because it makes decompression impossible, since the value x is not known at decompression time. Therefore, we need a good estimate for the variance.

L(x) = \frac{1}{\sqrt{2\sigma^2}} \exp\Bigl( -\sqrt{\frac{2}{\sigma^2}} \, |x - \mu| \Bigr) \qquad (2)
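For reference, equation 2 is simply the Laplace density parametrized by its variance; a one-line Python version (with μ = 0 for prediction errors; the name is ours) reads:

    import math

    def laplace_density(x: float, sigma2: float, mu: float = 0.0) -> float:
        """Laplace density of equation 2, written in terms of its variance sigma2."""
        return math.exp(-math.sqrt(2.0 / sigma2) * abs(x - mu)) / math.sqrt(2.0 * sigma2)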

Naturally, our variance estimation uses the same context C_v as the prediction step. Unlike other compression algorithms, however, we do not use this context as a key to retrieve information about earlier prediction errors within the same context. Thus no history or statistical information about occurrences of this context is used for variance estimation; we only use local operations on the values in C_v. We propose to calculate a variance approximation σ² according to equation 3 as a normalized sum of the squared prediction errors in the current context C_v.

\sigma^2 = \frac{1}{|C_v|} \sum_{v_j \in C_v} (v_j - \hat{v})^2 \qquad (3)
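Continuing the sketch from above, the variance estimate of equation 3 is a plain mean of squared deviations over the context (again our own illustration, not the authors' code):

    from typing import Sequence

    def estimate_variance(context: Sequence[float], prediction: float) -> float:
        """Equation 3: normalized sum of the squared deviations of the context
        pixels from the current prediction."""
        return sum((vj - prediction) ** 2 for vj in context) / len(context)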

The prediction errors e = v − v̂ can now be coded efficiently with a statistical entropy coder, using the estimated variance σ² to calculate the probability distribution at the current position. Note that arithmetic coding [3] would attain the theoretical entropy bound of the compression ratio for a given probability model. However, the lack of an exact link between the symbols of the input sequence and the bits of the output binary rational number would interfere with the division of the compressed data into many different data blocks. A Huffman entropy coder [7] generates minimum redundancy codes which have a one-to-one relationship with the input symbols, and it also approximates the entropy bound for a given model. To reduce coding complexity, the Huffman tables are only computed for a specific set of variances, as suggested e.g. in [5].
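The report does not spell out how the variance set and the tables are chosen; the following Python sketch is therefore only one plausible reading: it discretizes the Laplacian error model over a bounded error alphabet, builds one Huffman table per representative variance, and selects the nearest table (in the log domain) at coding time. The variance grid, the error bound of 255, and all names are our assumptions.

    import heapq
    import itertools
    import math
    from typing import Dict, List

    def laplace_probs(sigma2: float, max_err: int) -> Dict[int, float]:
        """Discretized Laplacian error model (equation 2, mu = 0) over the
        bounded error alphabet -max_err .. +max_err."""
        b = math.sqrt(sigma2 / 2.0)                    # Laplace scale parameter
        p = {e: math.exp(-abs(e) / b) for e in range(-max_err, max_err + 1)}
        z = sum(p.values())
        return {e: pe / z for e, pe in p.items()}

    def huffman_code(probs: Dict[int, float]) -> Dict[int, str]:
        """Standard Huffman construction: repeatedly merge the two least likely
        subtrees, prefixing their codes with '0' and '1'."""
        tie = itertools.count()                        # break ties between equal weights
        heap = [(p, next(tie), {sym: ""}) for sym, p in probs.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            p1, _, c1 = heapq.heappop(heap)
            p2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in c1.items()}
            merged.update({s: "1" + c for s, c in c2.items()})
            heapq.heappush(heap, (p1 + p2, next(tie), merged))
        return heap[0][2]

    # one precalculated table per representative variance (assumed grid) ...
    TABLE_VARIANCES: List[float] = [2.0 ** i for i in range(1, 15)]
    TABLES = [huffman_code(laplace_probs(s2, max_err=255)) for s2 in TABLE_VARIANCES]

    def pick_table(sigma2: float) -> Dict[int, str]:
        """... and at coding time use the table whose variance is closest
        (in the log domain) to the current estimate."""
        best = min(range(len(TABLE_VARIANCES)),
                   key=lambda i: abs(math.log(TABLE_VARIANCES[i]) -
                                     math.log(max(sigma2, 1e-9))))
        return TABLES[best]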

Figure 3 shows these processing steps in a flowchart. Prediction and variance estimation are performed on a sliding window over the input sequence, where only local operations on a limited support area around the current picture element are used. The entropy coder which encodes the prediction errors makes use of precalculated tables to save calculation time. The decoding process is the symmetrical inversion of the encoding process. For ease of reference, let us call the described algorithm Hilbert Compression.

Figure 3: Flowchart of Hilbert Compression: Hilbert curve traversal, context window update, variance estimation, and prediction error coding.
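Putting the pieces together, a hedged end-to-end sketch of the encoding loop (reusing the helper functions hilbert_index, predict, estimate_variance and pick_table from the sketches above, and ignoring block boundaries and bit packing) could look as follows; it is our reconstruction of the flow in Figure 3, not the authors' implementation, and the context size k = 6 is an arbitrary choice.

    from collections import deque
    from typing import List

    def hilbert_compress(image: List[List[int]], n: int, k: int = 6) -> str:
        """Sketch of the Hilbert Compression encoding loop: visit the pixels of
        an n x n image in Hilbert order, predict each value from a sliding
        window of the last k pixels, estimate the variance, and emit the
        Huffman code of the prediction error.  Block handling and bit packing
        are omitted; the decoder would maintain the same window and invert
        each step."""
        order = sorted(((x, y) for x in range(n) for y in range(n)),
                       key=lambda p: hilbert_index(n, p[0], p[1]))
        context = deque(maxlen=k)          # sliding window over the pixel sequence
        bits: List[str] = []
        for (x, y) in order:
            v = image[y][x]
            if not context:
                bits.append(format(v, "08b"))          # first pixel stored verbatim
            else:
                v_hat = predict(list(context))
                sigma2 = estimate_variance(list(context), v_hat)
                err = int(round(v - v_hat))
                bits.append(pick_table(sigma2)[err])   # entropy-coded prediction error
            context.append(v)
        return "".join(bits)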