1 The Project
1.1 Project Motivation
As a user of graphics file formats and conversion applications, I have been interested in this
field since my interest in computing began. My own experiences of using graphic images for
course-work have led me to ponder why there are so many formats and methods for storing
these images. This project has given me the opportunity to explore the world of graphics files
and find answers to those questions.
My knowledge of this field at the start of the project was casual. I knew generally about
bitmaps without knowing anything specific about the formats, compression techniques and
overall structure of the graphic images I was using. As this is a field in which I hope to make
my career, measuring the ‘quality’ of images, and how this can be affected by the right or
wrong choice of file format, seemed a natural choice of study which I knew would be both
challenging and interesting.
The learning curve has been considerably steeper than in any previous work I have
undertaken. The software component constitutes my first true software development
culminating in a final product. My previous knowledge of the C language did not extend to the
scale of this work, and my skills in Pascal, as used in Borland Delphi, were only basic.
Through the development I have learnt everything necessary about these languages and how
they can be applied to creating file-conversion software.
From the theory aspect, I have done much research into the principles of image storage and its
related areas, including compression and decompression, colour spaces and conversion between
colour systems, image display, conversion between file formats, and some advanced
techniques used to enhance compression ratios and enable features such as real-time full-motion
video.
1.2 Aims And Objectives
The core objectives which have been designated as fundamental to the project are:
Identify, understand and describe a range of industry-based methods for quantitatively
measuring the quality of an image represented in various graphic file formats.
Information gathered from related industries as well as from other image processing
sources will be described with its relevance to this study.
Suggest methods for measuring an image’s quality in varying graphic file formats.
Using the information gathered as a base, I will build up my own ideas on ways ‘quality’
can be identified and measured fairly between different formats and techniques.
Research, understand and describe current popular static graphic file formats, the
compression methods utilised as well as colour spaces etc.
Emphasis will be on the common compression and decompression techniques used
widely, and how their use impacts the quality of the image representation, not just in
visual terms, but overall efficiency and suitability.
Gain an understanding of relevant advanced algorithm concepts, such as JPEG,
MPEG, and Fractal compression.
Although not covered in great detail, an understanding of these advanced representation
methods is useful in the context of the project.
Research Windows API programming.
Although the software will involve little direct API programming, it is useful to know
about the facilities and restrictions I will be working with.
Learn Borland Delphi and ObjectPascal.
To be learnt specifically for the project.
Use shareware JPEG and GIF encoding/decoding routines to create routines which
allow transfer to and from the Microsoft Windows BMP format.
The BMP format will be used as the central format to which the other supported formats will
be converted and in which images will be manipulated.
Write ZSoft PCX encoding/decoding routines to and from the Microsoft Windows BMP format.
Along with the JPEG, GIF and BMP routines, a 16-bit Dynamic Link Library compatible
with Microsoft Windows 3.1 or greater will be constructed with high-level format
conversion routines accessible to external software.
Design and implement a user-interface with Borland Delphi which makes use of the conversion library.
This will provide a front-end to the graphics library created in the objectives above. This
application will allow the conversion between JPEG, GIF, PCX and BMP formats.
In addition, the advanced aims which are desirable if time permits are:
Implement tools for clipboard transfer of image selections, as well as simple
manipulation tools covering fixed rotation (i.e. 90, 180 or 270 degrees), scaling,
horizontal and vertical axis flipping.
Of these extra utilities the ability to use the clipboard will increase the compatibility of
the application. Therefore, it is more important than magnification, rotation and axis-
flipping, which are not essential, but enhance the functionality of the software.
Construct an online help system within the software package.
Although this will mainly contain procedural information on how to use the application,
it would provide software testers with an instant information source if problems are
encountered using the system.
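The conversion strategy in the objectives above, with BMP as the central format, amounts to a hub-and-spoke design: with N formats, only 2N codec routines are needed rather than N(N−1) direct converters. The sketch below illustrates the idea only; the codec functions are hypothetical placeholders, not the actual library routines:

```python
# Hub-and-spoke conversion: decode the source format into a common in-memory
# representation (playing the role of BMP), then encode it to the target.

def make_converter(decoders, encoders):
    """Build a convert(data, src_fmt, dst_fmt) function from per-format codecs."""
    def convert(data, src_fmt, dst_fmt):
        if src_fmt not in decoders or dst_fmt not in encoders:
            raise ValueError("unsupported conversion: %s -> %s" % (src_fmt, dst_fmt))
        hub_image = decoders[src_fmt](data)   # e.g. GIF bytes -> hub pixels
        return encoders[dst_fmt](hub_image)   # hub pixels -> e.g. PCX bytes
    return convert
```

Adding support for a new format then means writing one decoder and one encoder, rather than a converter for every existing pair.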
1.3 Report Structure
Chapter 2 introduces the major factors which bias the measuring of image quality, as well as
listing the industry sources used to collect information. My opinions on the information
gathered are contained in Chapter 3. File formats are discussed in Chapter 4 in general terms,
with examples from specific formats. In Chapter 5 I follow up the work from the previous chapters
by suggesting methods in which image quality could be measured whilst avoiding the bias
factors mentioned in Chapter 2. Chapter 6 is described below. Finally, in Chapter 7, I
conclude by evaluating the work I have done, the problems I have encountered, the areas of
future work which could be done, and a self-appraisal of my success in attaining the objectives
and aims and overall management of the project.
The technical documentation for the software component of this project is contained in
Chapter 6. This includes the design principles, structure of the application, problems
encountered and details of how they were overcome. Specific details on how to use the
application can be found in the on-line help system available through the software. An
evaluation of my success in writing this software is contained in Chapter 7, as are future
improvements which could be made. Appendix 2 contains the source code of the application
written by myself. The entire source code is not included, as the majority of the low-level
library functions were taken from the previously mentioned shareware packages.
A project plan, in the form of a Gantt chart, can be found in Appendix 1. This outlines the
plan at the outset of the project. The evaluation in Chapter 7 discusses how reality has
matched up to the plan.
2 Description Of Current Image Quality Measures
In order to fully appreciate the requirements of an accurate file format measuring system, it is
important to have details on the following:
Current methods used in industry for performing image quality measurement.
An understanding of the formats available.
Implementation details of the main formats used in representing an image.
The latter two will be covered in the following chapters. This chapter is concerned with
procedures used by organisations in industry which deal with the difficulty of file format
quality measurement.
The first question to ask is: why is it necessary to measure image quality so accurately? Looking at most
images, one can usually tell which provides the best quality just by looking. The clear answer
here relates to perception. One person looking at a set of images in different formats may
think format A to be better than formats B or C because they can see the colour definition
better. Another person may disagree on the grounds that B is of a higher resolution, and is
therefore ‘better’. Yet another person could be colour blind, making the results even less
accurate and reliable. This is the first problem encountered: each individual has his or her own
unique perception. We cannot rely on a method whereby everyone involved could have
differing opinions. This does not help judge formats scientifically and fairly. Many factors
which are beyond our control affect the way we view image representations. Some of the
more distinct ones include:
Equipment – Using a low-quality monitor with a poor graphics card which can only
display, say, 16 colours at 320×200 pixels will place an unfair disadvantage on all the
formats involved in the test. Most importantly, however, will be the effects on
formats which have the test image stored as 24-bit and in a resolution of 1024×768
(format A in this example) or higher. The scaling down of the colours to those
available will give undesirable results and is likely to result in an unsuitable display.
Now if the image was displayed again from a format which can only store 256 colours
at 640×480 pixels (format B), the down-sizing and down-sampling required is less
drastic and hence, the displayed image will be closer to the actual file stored
representation. This example would give the second format B a clear and unfair
advantage. If state-of-the-art equipment were available for the test, the results
would be turned around, with the 24-bit high-resolution format A utilised to
the full and the lower-standard format B exposed as a poor format for high-quality work.
Human Vision – Many people require man-made aids to help their vision nowadays.
As we all are unique, the vision quality we each possess varies widely. This means
that we cannot rely on our own vision to systematically judge image representations.
Many of the formats of today can produce qualities so high that the human eye cannot
appreciate the detail level. As an example, experiments have shown that humans can
discriminate about 200 colours across the spectrum if placed next to each other
(Jackson, MacDonald and Freeman, 1994). This limitation can be exploited by file
formats without decreasing the visual quality to the naked eye. Just looking at an
image will not necessarily reveal differences which, being so insignificant, are not
identified by our visual system.
Environmental Conditions – Lighting is the main factor in this group. Our perception
of an image representation will be swayed to some degree by the lighting in the room
where the viewing is taking place. If it is a bright room and we have entered from a
dark room previously, it is likely our eyes will take a while to adjust to the new
lighting. This will play a big part when looking at the pictures on-screen. Other
factors such as noise and smell could also play a rôle, to a lesser extent, in that they
may affect the concentration of the viewer.
Viewer Bias – For one reason or another, an individual may have pre-conceived ideas
about which format they believe will perform better. This places bias towards that
format before the images have even been seen. Ensuring objectivity would be important,
and difficult, in such tests.
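The equipment scenario above, where a 24-bit image must be scaled down to a small fixed palette, can be illustrated with a naive nearest-colour quantiser. This is a sketch of the general idea only, not any particular display driver's algorithm:

```python
def quantise(pixels, palette):
    """Map each (r, g, b) pixel to the nearest palette entry,
    measured by squared Euclidean distance in RGB space."""
    def nearest(p):
        return min(palette,
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(p, c)))
    return [nearest(p) for p in pixels]
```

In a 16-colour display mode, nearly all of the 16.7 million possible 24-bit colours collapse onto a handful of palette entries, which is precisely the “undesirable result” the Equipment point describes.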
So if a fair method for measuring such quality is to be found, it cannot rely on viewing the
image with the naked eye. A scientific approach is required which filters out the subjective
bias factors described above.
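One scientific approach of this kind compares the decoded image against a reference copy pixel by pixel. As a hedged illustration (PSNR is a standard objective metric, though not one prescribed by the sources surveyed here), a minimal sketch for 8-bit greyscale data:

```python
import math

def mse(original, degraded):
    """Mean squared error between two equal-length 8-bit pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(original, degraded)) / len(original)

def psnr(original, degraded, max_value=255):
    """Peak signal-to-noise ratio in decibels; higher means a closer match."""
    err = mse(original, degraded)
    if err == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_value ** 2 / err)
```

Because the number depends only on the pixel data, not on the viewer, the monitor, or the lighting, it sidesteps every bias factor listed above, although it does not always agree with perceived quality.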
2.2 Information Sources
To get an understanding of how these image-format problems are circumvented in industry,
I have selected a range of relevant companies to approach and request information from. To
gain as wide a viewpoint as possible I have not restricted my information requests to any
particular type of industry. The organisations and individuals I have requested information from are:
ASAP Inc. – Jeffrey Glover
Atlas Image Factory
BBC Television: ‘Sky At Night’ and Weather Centre
Centre of Medical Imaging Research (CoMIR) – University of Leeds – Nick Efford
Imaging Systems Lab; Centre for Imaging and Pharmaceutical Research (CIPR);
Magnetic Resonance Imaging Group (MRIG); Teleradiology & Medical Imaging
Laser-Scan Limited; Visioneering Research Limited (VRL)
NASA Information Services; NASA Jet Propulsion Laboratory (JPL); NASA Remote
Sensing Unit (RSU)
National Remote Sensing Centre (NRSC)
Silicon Graphics, UK
United States Naval Observatory (USNO)
WXP Weather Project; National Center for Atmospheric Research (NCAR)
I have requested information regarding the importance of image quality and of storage using the
more popular formats (or, indeed, any others used) with respect to their application. The
purpose of this is to paint a picture of the current state of the industry, so that I am able to
form my own suggestions as to how quality measurement could be done.
Three responses were received to my information request.
1. Jeffrey Glover, of ASAP Inc., stated that if speed was more important for an
application (for example World Wide Web graphics) then a format is chosen on this
basis. As 24-bit colour at 1600×1200 resolution is rarely required for this application,
its use would bring only disadvantages. The majority of the World Wide Web user-base
would not appreciate the large graphic files and have no requirement for such high
resolutions.
For quality-critical applications, only lossless compression will do. The possibility of
losing some detail, even if it is too small to see with the naked eye, gives rise to
problems if the image is then processed and enhanced by a computer. Disparities and
noise could then be amplified to the extent of affecting the image visually.
2. The National Remote Sensing Centre, a company involved in the production of maps
generated from remote-sensing scans taken by orbiting satellites, stated that user
judgement is predominant. For the majority of their applications, colour plays a major
rôle. The example quoted involves infra-red scans of an area, whereby an experienced
user can map the false colours output by the scan to bands of infra-red intensity
using Erdas Imagine or ER Mapper. No scientific method is utilised to judge the
accuracy of the user’s decisions, or to provide assistance along the way.
The NRSC also provided further information on the file formats they use. As they pointed out,
all formats which can handle 24 bits per pixel of colour information should be on a par
when representing colour, in whatever form. The problem arises, however, when
the file needs to be interpreted by many applications on different platforms. Most, if
not all, applications of this type include their own built-in proprietary image format.
Transferring from one type to another can raise problems. Of course, plug-in filters are
available for most of these which allow the transfer to a common format suitable for all
involved platforms and software. In the experience of the NRSC these often fail to
sustain the quality required, and so are not used. As an aside, I too have noticed this
with certain pieces of software, such as early versions of Microsoft Word, which
include a low-quality GIF import/export filter of little practical use for most
purposes. Instead, the images are stored in the application’s built-in format at the
NRSC which can be guaranteed to maintain the detail. As a consequence of this, if the
image is required in all the varying application-specific formats, the image processing
steps have to be carried out separately on each application. This is not a viable option,
due to the cost and time resources required, so is rarely undertaken.
3. Nick Efford, of the School of Computer Studies at the University of Leeds, is involved
with The Centre of Medical Imaging Research (CoMIR). For storing their image
databases, they use lossless compression formats. They have not concerned themselves
with the issue of file formats, as they feel it is convenient simply to purchase further
storage devices as required. In this case, because all of the images are stored
losslessly, there is little point in analysing the differences between the formats, as the
images will be almost
point that not all lossless formats are capable of storing 24-bit colour, GIF being an
example (see Chapter 4).
They also do research into object tracking. Moving images are stored in lossy form, as it is
more important to gain higher compression than better quality. All the concern in
this application is focused on identifying the objects. Subtle differences in the
quality of the image from frame to frame are not important, as only the object outlines
need to be recognised. The advantage of not requiring a lossless format for this
application is the large savings on storage space, and the use of a simple lossy
compression algorithm ensures the motion can be tracked in real-time.
When dealing specifically with medical images, the quality of the image is not
paramount. He says they are aware of the significance of high quality in this area, but
do little to ensure their images are of the highest quality as they feel it is not necessary.
From their point of view, the medical profession is very wary of imaging in general,
especially in the United States where concerns of lossy compression affecting patient
diagnoses are higher. In England, however, the majority are content with the quality of the
images in use.
3 Personal Opinion On Image Quality
The best way to approach this is to analyse the responses received and described in
Chapter 2. From there, I can build up my own ideas and paint a bigger picture of how I feel
about the factors at work in this constantly developing field.
3.1 ASAP Inc.
I feel that Mr. Glover failed to answer the questions I put to him directly. His response was
somewhat vague and did not take into consideration the necessity for a scientific measuring
system which could be used and justified by any user, expert or otherwise. To some extent, I
agree with his notion that one can sometimes tell which format is most suitable for a certain
image and application. However, this requires knowledge both of the available formats
and of the application in hand. For someone unfamiliar with either or both of these, visual
perception alone is not a satisfactory method.
What is required is an unbiased system, usable by anyone, which allows the various file
formats to be graded against each other for a particular type of application. No knowledge of
file formats or of the application in hand would be necessary, other than the basic
requirements of the system (such as medical, recreational and so on).
We cannot assume that the user will have any knowledge of the formats available, and
therefore most suitable for the application. In fact, the final results may be better if the user
has no understanding of such formats, as the removal of preference for one format may lead to
a better choice being made. Only a scientific, quantitative method can guarantee consistent
results every time. It is important to make the correct decision at an early stage, as it could be too
late if software for the application has been developed with a certain format in mind. If, for
example, it is later discovered that the chosen format will consume too much disk space per
image, or requires more processing power than can be harnessed in real-time, the system will
be severely restricted the longer it is used. Tough decisions would need to be made as to
whether or not it should be redesigned with a new format in mind, better equipped for the
tasks ahead. The system updates could be costly and time-consuming, especially if it has been
redistributed to many customers. None of these problems should ever need to occur.
3.2 National Remote Sensing Centre (NRSC)
In this response the issue lay not so much with ensuring image quality is optimal when
stored, but with converting between the myriad formats efficiently whilst maintaining that
quality. Until better-quality plug-ins are made available which allow easier transference, this is
likely to remain a problem. Due to the wide range of facilities offered by different software
applications, to convert from one to another would usually require a filter to be specifically
built for that purpose. In some cases, it may not be possible to convert at all, if the features of
one package are not supported in another.
This is a fundamental problem in the image storage dilemma which can only practically be
solved through collaboration between the software manufacturers, ensuring file formats are
interchangeable. Currently there are already some alliances between players in this market, but
it is a far cry from the co-operation required to eliminate the problem. While this continues,
the pool of formats grows larger and the possibilities for conversion become endless.
In relation to the quality of an image, the NRSC usually relies on the capacity of its employees
to judge that quality. As the people involved with these remote-sensing images are all well
experienced with the application, there would be little advantage in formalising this
system. Situations like these demonstrate that a quantitative method for grading formats is not
always necessary. In some cases it would only cause problems, as all the relevant employees
would need training under the new system. Clearly, the kind of system I am suggesting is more
suited to scenarios where the users have less knowledge of the overall system, or to those
systems where the quality of an image is paramount and cannot be accurately judged by visual
inspection alone.
3.3 Centre Of Medical Imaging Research (CoMIR)
This case highlights differing viewpoints. On the one hand, Mr. Efford is involved in the
Computer Science aspect, as a member of the academic staff at the University of Leeds.
From this standpoint, the technical aspects of the technology used are of more importance,
along with how it can best be improved and utilised to the full. On the other hand, his links to
medical imaging provide a perspective from the medical point of view, which is more concerned
with the contents of the image rather than how it was captured.
However you look at it, there will be a merging to some degree between them. From the
Computer Science stance, it is important to know what kind of things the images are
representing so suitable hardware and software can be used which copes with the demands.
Conversely, the medical staff must also know of limitations in the hardware and software
which may give rise to noise and artefacts in the images. The definition of this division
depends on the particular example, as it will vary with many factors, but in an ideal world the
computer scientists should be as aware of the application as the medical staff are of the
technology.
With regard to the quality of medical images, the next step depends on what is to happen to
the images after they are acquired. If a doctor is to look at them and make a judgement, the
results would be less erroneous than if a computer was used to post-process and highlight
object outlines. The computer is much more likely to pick up on disparities and noise
generated by the compression technique, especially as it has little or no knowledge of the
human anatomy. In the design of the system this must be accounted for and acted on
accordingly, though the distinction must be made that it can vary widely. With images only
being viewed by the doctor, it is highly likely that lossy compression will suffice. Most of
these techniques are capable of significant compression ratios without affecting visual quality.
This may be all that is required, in which case lossless algorithms, and the storage space they
demand, would be wasted. If the images were passed through an image-processing system to
extract important features, it would be vital that lossless techniques were used up to that
stage; the doctor then views the result, and a lossy method could be used for the final output.
On a different subject, motion tracking enables much of the detail in a set of images to be
disregarded. As we are only interested in the shape of the object, the cheapest possibility
would be binary thresholding. This would allow extensive compression, making the
hardware and software requirements of a motion-tracking system less demanding. All the
processing power could then be handed to the artificial-intelligence engine which processes the
motion. The choice of image format would be far wider in this case, as virtually all formats are
capable of storing binary image data (although some are better suited than others).
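The binary-thresholding idea above can be sketched in a few lines; the threshold level of 128 is an arbitrary assumption and would need tuning per application:

```python
def threshold(pixels, level=128):
    """Reduce greyscale pixels (0-255) to a binary mask: 1 if the pixel is
    at or above the threshold level, else 0. The resulting binary image can
    be compressed very cheaply, e.g. with run-length encoding, which is what
    makes real-time motion tracking over such data tractable."""
    return [1 if p >= level else 0 for p in pixels]
```

Only the object silhouettes survive this step, which is exactly the information a motion-tracking engine needs.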