I would like to present a review of PNG and JPEG optimization without quality loss. By “without quality loss” I mean that there is no visual difference between the original and the optimized image.
So how does this optimization work? It comes down to several techniques, which we will go through in order.
Non-interlaced or Interlaced
There are two ways a browser can display an image while it loads:
Non-interlaced – the browser renders the image sequentially, from top to bottom, as new data arrives from the network.
Interlaced – until the file has fully downloaded, the browser displays the image at low resolution: you first see a poor-quality image, and as more graphical data arrives, the quality gradually improves. Interlaced display reduces the perceived download time and shows users that the image is loading, but it also increases the file size.
At the end of the article I will give a couple of links that cover browser image loading in more detail.
ColorType and BitDepth
ColorType is used to optimize the number of colors in the image. By this criterion, PNG comes in the following formats:
1. Grayscale;
2. Grayscale + alpha;
3. Palette (256 colors);
4. RGB;
5. RGB + alpha.
ColorType optimization chooses the format in which the image weighs less without changing visually. Here is an example of this technique (the images were optimized by the same algorithm):
PNG RGB + alpha — 17 853 bytes.
PNG Palette — 13 446 bytes.
The difference in size is 4,407 bytes (24%), while the images are visually unchanged. If you see different images, it’s an optical illusion.
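To see why switching ColorType saves so much even before compression runs, here is a rough back-of-the-envelope sketch in Python (the 100×100 image size is illustrative, not from the example above): an RGBA image stores 4 bytes per pixel, while an 8-bit Palette image stores 1 index byte per pixel plus a 256-entry PLTE chunk.

```python
def raw_idat_bytes(width, height, bytes_per_pixel, palette_entries=0):
    """Rough uncompressed size: each scanline carries 1 filter-type byte
    plus its pixel data; a palette adds 3 bytes (RGB) per PLTE entry."""
    scanlines = height * (1 + width * bytes_per_pixel)
    return scanlines + palette_entries * 3

# Hypothetical 100x100 image:
rgba_size = raw_idat_bytes(100, 100, 4)           # RGB + alpha
palette_size = raw_idat_bytes(100, 100, 1, 256)   # 8-bit Palette

print(rgba_size, palette_size)
```

Deflate then compresses both streams, so the real savings differ, but the Palette version starts from roughly a quarter of the raw data.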
BitDepth is the number of bits per pixel for indexed-color images, and the number of bits per sample for grayscale and full-color (24-bit) images. In an indexed image, BitDepth can be 1, 2, 4 or 8; in grayscale – 1, 2, 4, 8 or 16. In full-color images, as well as in grayscale images with alpha data, BitDepth can only be 8 or 16.
BitDepth optimization works much like ColorType optimization. Here is an example of this technique (the images were optimized by the same algorithm):
PNG 4-bit — 6 253 bytes.
PNG 8-bit — 5 921 bytes.
The difference in size is 332 bytes (5.3%), while the images are visually unchanged; if you see different images, it’s an optical illusion. Both techniques are supported by almost all image editors that can save PNG, but few people know about them, so the developers of PNG optimization programs have to take care of this themselves.
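As a rough sketch of what BitDepth changes at the byte level: an indexed scanline packs several pixels into one byte at low bit depths (padding the last byte), so the raw data shrinks. Yet, as the example above shows, the 8-bit file can still compress smaller than the 4-bit one, which is why optimizers should try both.

```python
import math

def scanline_bytes(width, bit_depth):
    """Bytes in one indexed-color scanline: 1 filter-type byte plus
    width pixels at bit_depth bits each, padded to a whole byte."""
    return 1 + math.ceil(width * bit_depth / 8)

# A 100-pixel-wide indexed image at different bit depths:
for depth in (1, 2, 4, 8):
    print(depth, scanline_bytes(100, depth))
```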
A PNG file consists of chunks. Rather than describe them, I’ll show them using the TweakPNG program: open any PNG image in it and you will see the entire structure of the PNG.
There are other programs similar to TweakPNG, but this one is the best and most convenient; I’ll cover the other programs at the end of the article.
There are two types of Chunk:
• Critical chunks are present in every PNG image (IHDR, PLTE for PNG Palette, one or more IDAT, and IEND).
• Ancillary chunks are optional; removing some of them reduces the image size, though not by much.
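TweakPNG shows this structure interactively, but the chunk layout is simple enough to walk with a few lines of Python. A minimal sketch using only the standard library (the 1×1 test image is constructed inline purely for illustration):

```python
import struct
import zlib

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def make_chunk(name: bytes, payload: bytes) -> bytes:
    """A chunk is: 4-byte big-endian length, 4-byte name, payload, CRC32."""
    return (struct.pack(">I", len(payload)) + name + payload
            + struct.pack(">I", zlib.crc32(name + payload)))

def list_chunks(data: bytes):
    """Walk a PNG byte string and return its chunk names in order."""
    assert data[:8] == PNG_SIGNATURE, "not a PNG file"
    names, pos = [], 8
    while pos < len(data):
        length, = struct.unpack(">I", data[pos:pos + 4])
        names.append(data[pos + 4:pos + 8].decode("ascii"))
        pos += 12 + length  # length field + name + payload + CRC
    return names

# Build a minimal 1x1 grayscale PNG: IHDR, one IDAT, IEND.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
idat = zlib.compress(b"\x00\x00")  # filter byte + one 8-bit pixel
png = (PNG_SIGNATURE + make_chunk(b"IHDR", ihdr)
       + make_chunk(b"IDAT", idat) + make_chunk(b"IEND", b""))

print(list_chunks(png))  # the critical chunks only
```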
Palette optimization applies only to PNG Palette images: the technique optimizes the PLTE chunk and can reduce the image size, though not by much. In my opinion, this technique is best implemented in Color Quantizer; one of its algorithms was also implemented in TruePNG.
Optimization of alpha channel
I learned about this technique from Sergey Chikuyonok. It is now well developed and widely used, and it gives a significant optimization gain. Its main drawback is that it changes the image data itself (in the IDAT chunks), not just the structure, although the visible image does not change. Two programs serve as examples:
• TruePNG from the author of Color Quantizer;
• CryoPNG – a more advanced optimization technique that requires more time but can increase the compression ratio.
I realize this is hard to grasp in the abstract, so an example works better (the images were optimized by the same algorithm; the first image has an alpha channel, the second does not).
The original image. Size – 214,903 bytes.
CryoPNG (parameter -f0). Size – 107,806 bytes.
CryoPNG (parameter -f1). Size – 105,625 bytes.
CryoPNG (parameter -f2). Size – 107,743 bytes.
CryoPNG (parameter -f3). Size – 114,604 bytes.
CryoPNG (parameter -f4). Size – 109,053 bytes.
CryoPNG’s drawback is that you have to optimize all five variants to find the best result, which takes a lot of time. TruePNG works similarly to CryoPNG -f0, and CryoPNG -f0 is, in my experience, the best mode for optimizing PNG. According to my observations, CryoPNG -f1 and -f4 optimize PNG much better than CryoPNG -f2 and -f3, second only to -f0.
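As I understand the technique, the core idea is that the RGB values of fully transparent pixels are invisible anyway, so forcing them all to a single value makes the data far more repetitive and Deflate-friendly. A minimal sketch in pure Python, with illustrative pixel data (not the actual images above):

```python
import zlib

def clean_transparent(pixels):
    """Zero out the RGB of fully transparent pixels; the visible image is
    unchanged, but the raw data becomes much more compressible."""
    return [(0, 0, 0, 0) if a == 0 else (r, g, b, a)
            for (r, g, b, a) in pixels]

def compressed_size(pixels):
    return len(zlib.compress(bytes(v for px in pixels for v in px), 9))

# Transparent pixels still carrying leftover "junk" colors:
noisy = [(i % 256, (i * 7) % 256, (i * 13) % 256, 0) for i in range(1000)]
cleaned = clean_transparent(noisy)

print(compressed_size(noisy), compressed_size(cleaned))
```

Opaque pixels pass through untouched, so the operation is visually lossless.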
The Deflate compression algorithm + scanline filtering
As mentioned, PNG consists of chunks; here we are interested in the IDAT chunks. To compress them well, we need to take two factors into account: scanline filtering and the Deflate compression algorithm. Let’s look at each in more detail.
PNG filters prepare the data for compression and thereby increase its effectiveness. A filter transforms each scanline so that instead of encoding raw byte values, it encodes the difference between the current value and a previous one; which value counts as “previous” depends on the filter:
• None – no filter;
• Sub – transmits the difference between each byte and the value of the corresponding byte of the pixel to its left;
• Up – like Sub, but uses the pixel immediately above the current pixel, rather than the one to its left, as the predictor;
• Average – uses the average of the two neighboring pixels (left and above) to predict the value of a pixel;
• Paeth – computes a simple linear function of the three neighboring pixels (left, above, upper left), then chooses as predictor the neighboring pixel closest to the computed value.
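The Paeth predictor is short enough to show in full; this follows the description in the PNG specification:

```python
def paeth_predictor(left, above, upper_left):
    """PNG Paeth predictor: pick whichever neighbor is closest to the
    linear estimate left + above - upper_left (ties prefer left, then
    above), as specified for PNG filter type 4."""
    estimate = left + above - upper_left
    d_left = abs(estimate - left)
    d_above = abs(estimate - above)
    d_upper_left = abs(estimate - upper_left)
    if d_left <= d_above and d_left <= d_upper_left:
        return left
    if d_above <= d_upper_left:
        return above
    return upper_left
```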
Generally speaking, there is no universal recommendation for which filter to choose. An intelligent encoder can switch filters from one scanline to the next; the method for choosing the filter is up to the encoder.
There is one more filter mode – Adaptive, which is essentially a mix of the filters above. Almost all PNG optimization programs support it, but I personally know of only two programs with a more advanced system for creating filters:
PNGOut doesn’t create such filters, but the new version has support for built-in filters.
The compression algorithm Deflate
Today there are several libraries based on the Deflate compression algorithm:
• Zlib – fast; thanks to its high speed it can quickly go through many parameter values and choose the best.
• 7-zip and Kzip – slower, but with a higher compression ratio; the settings that are optimal in Zlib are not always optimal for them (only close to optimal), and going through too many parameter values takes a lot of time that almost never justifies the result.
PNGWolf uses both Zlib and 7-zip.
Important: all these programs complement each other and are strongest when combined. The biggest mistake is using them separately and then comparing the results. First use Zlib, then 7-zip and/or Kzip.
Below is a graph of compression ratio versus the time spent optimizing the IDAT chunks.
As you can see from the graph, the greater the degree of compression, the longer it takes.
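The trade-off shown in the graph can be reproduced with Python’s own zlib module, which wraps the same Zlib library: on typical data, higher compression levels produce a result at least as small, but take longer.

```python
import time
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 2000

for level in (1, 6, 9):
    start = time.perf_counter()
    size = len(zlib.compress(data, level))
    elapsed = time.perf_counter() - start
    print(f"level {level}: {size} bytes, {elapsed:.5f} s")
```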
And some more …
Now we are going to talk about two programs:
I recommend running them at the end of PNG optimization, in the order listed above. They can shave a PNG down by a few dozen bytes, and they work very fast.
Now let’s move on to JPEG; here everything is much simpler. But first, note that you cannot re-save a JPEG without losing quality, even at quality level 100 (which is not the best setting, but the mathematical limit of optimization). Consider the following example (the images were optimized by the same algorithm).
Original image – 52 917 bytes.
The new image (saved in Adobe Photoshop CS5, Save for Web 100) – 53 767 bytes.
Let’s build a diff of the two images.
That is how much the pictures have changed, even though it is not visually noticeable. As you can see, the file size has increased; this is due to the specifics of the library that creates the JPEG.
There is only one program that can re-save a JPEG image without losing quality – BetterJPEG (there is also a plugin for Adobe Photoshop). Let’s see how it works.
The new image (to complicate the situation, we add the «HTML» caption).
Let’s build a diff of the two images.
Unlike PNG, a JPEG consists of markers. The most powerful program for studying the structure of a JPEG is JPEGsnoop; I also recommend PhotoME for this. Removing some markers (APP0–APP15, COM) can significantly reduce the size of the image. For that I really like Jhead – the simplest and most convenient tool.
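The marker structure that JPEGsnoop and Jhead operate on is easy to walk by hand. A minimal sketch using only the standard library (the tiny header is hand-built for illustration; real files contain many more segments):

```python
import struct

MARKER_NAMES = {0xE0: "APP0", 0xE1: "APP1", 0xFE: "COM", 0xDB: "DQT",
                0xC0: "SOF0", 0xC4: "DHT", 0xDA: "SOS", 0xD9: "EOI"}

def list_markers(data: bytes):
    """Return the marker names of a JPEG, stopping at SOS or EOI."""
    assert data[:2] == b"\xff\xd8", "not a JPEG file"  # SOI marker
    markers, pos = ["SOI"], 2
    while pos < len(data):
        assert data[pos] == 0xFF, "expected a marker"
        code = data[pos + 1]
        markers.append(MARKER_NAMES.get(code, f"0x{code:02X}"))
        if code in (0xD9, 0xDA):  # EOI has no length; SOS starts image data
            break
        length, = struct.unpack(">H", data[pos + 2:pos + 4])
        pos += 2 + length  # 2 marker bytes + self-inclusive length field
    return markers

# A tiny hand-built JPEG header: SOI, an APP0 (JFIF) segment,
# a COM segment, then EOI (no actual image data).
app0 = b"\xff\xe0" + struct.pack(">H", 16) + b"JFIF\x00" + bytes(9)
com = b"\xff\xfe" + struct.pack(">H", 7) + b"hello"
jpeg = b"\xff\xd8" + app0 + com + b"\xff\xd9"

print(list_markers(jpeg))
```

Stripping the APP0–APP15 and COM segments this parser skips over is exactly how tools like Jhead shrink a file without touching the image data.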
Progressive and Optimized
There are three ways to display images in the browser while loading:
• Standard – almost never used; similar to the Optimized method, but with weaker compression.
• Optimized – creates an improved JPEG file with a smaller size. Browsers load it sequentially, from top to bottom, as new data arrives from the network.
• Progressive – until the file has fully downloaded, the image is displayed as a series of passes of increasing resolution: you first see a poor-quality image, and the quality gradually improves as more graphical data arrives. Internet Explorer, up to and including version 8, does not support progressive JPEG rendering: it shows the image only after the file has fully downloaded, which is very different from a “traditional” JPEG, which is displayed from top to bottom as it loads.
JPEG creation library
And here comes the fun part. Has anyone ever wondered how a JPEG is created? It turns out there are libraries for this, and only a few of them:
• Adobe uses its own libraries – there are several; for example, Adobe Photoshop’s Save for Web is used for saving images for the web.
• LibJPEG – used by almost all programs that can save JPEG, including Adobe Fireworks. LibJPEG comes with a very interesting program – JPEGTran, which optimizes an image without losing quality. It is worth using: if you create a JPEG image in Photoshop or Illustrator and then optimize it with JPEGTran, you get the maximum optimization effect, because in effect two libraries are applied. Unfortunately, I have not managed to get Photoshop to save a JPEG without loss of quality.
Remember how, when we re-saved the JPEG, the new image turned out larger than the original? That is due to the libraries’ specifics.
Which of the two libraries optimizes better is a complicated and ambiguous question, but JPEGTran is so fast that you will hardly notice it working. Just don’t forget to check whether the image size increased – all of this can be automated with a bat file. Remember that JPEGTran can convert a JPEG from Progressive to Optimized and back without changing the image.
Here you can get more information on how to display images in the browser while loading:
To verify that the methods above really optimize without quality loss, see how to build a diff of two images.
Additionally, here is the software I recommend for studying the structure of images and more:
• PhotoME – a very handy tool for studying the structure of PNG and JPEG; not as powerful as those described above, but I use it as my primary tool.
• ExifTool – a very powerful tool for studying the structure of PNG and JPEG. It works from the command line and has an online version – Jeffrey’s Exif Viewer.
• 010 Editor – a hex editor with add-ons (JPEG, PNG + Chunks) that help study the structure of PNG and JPEG; sometimes it is simply irreplaceable.
Finally, I’d like to say a few words about an online image optimization service – PunyPng. I can’t say it is perfect in terms of optimization, but it is the best of those I have come across so far.
To read the original article in Russian, click here.