Number of bits for color processor


Peterfreedom
2009-08-12, 08:56 AM
Hi! Some HDTV manufacturers are advertising 10 bits of processing for each color, which works out to only 1024 levels of each color. Others have 12 bits (4096 levels) or 16 bits (65536 levels). I remember comparing a Pioneer Elite with a Panasonic, both plasmas, a year ago, and I could see the difference in areas with very small differences in shade, like a sky.
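Just to spell out where those numbers come from (each extra bit doubles the count), here is the quick arithmetic in Python:

for bits in (8, 10, 12, 16):
    print(f"{bits} bits -> {2**bits:,} levels per color")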

Is having a lower number of bits a way to increase the speed of processing (240 Hz)? Do any of you see a difference between a 10-bit system and a 16-bit one?

Thank you all.

jvincent
2009-08-12, 09:38 AM
More bits should always be better.

The biggest effect is in the image processing done by the TV: if it doesn't have enough bits, rounding errors in the math accumulate and become visible.

10 bits is certainly the minimum you would want since the native data is carried as 8 bit codes for RGB.

Using a lower number of bits makes the chips that do the processing cheaper to implement.
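If you want to see the accumulation effect for yourself, here is a minimal sketch in Python (just the rounding math, nothing to do with any real TV's pipeline): it applies a gamma tweak and then its exact inverse a few times, quantizing to a given internal bit depth at every step. With exact math the chain is a no-op, so anything that changes in the output is pure rounding error.

import numpy as np

def round_trip(src8, internal_bits, gamma=1.2, cycles=4):
    # Apply gamma then its exact inverse, rounding to the internal depth
    # after every step. Mathematically this cancels out, so any change in
    # the output is accumulated rounding error.
    scale = (1 << internal_bits) - 1
    x = src8 / 255.0
    for _ in range(cycles):
        x = np.round((x ** gamma) * scale) / scale
        x = np.round((x ** (1 / gamma)) * scale) / scale
    return np.round(x * 255).astype(int)

src = np.arange(256)                  # every possible 8-bit code value
for bits in (8, 10, 12, 16):
    err = np.abs(round_trip(src, bits) - src)
    print(f"{bits:2d}-bit internal: {np.count_nonzero(err)} of 256 codes shifted, "
          f"worst by {err.max()}")

The low depths should shift a fair number of codes (the dark end suffers first), while 12 and 16 bits should leave the ramp essentially untouched.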

JamesK
2009-08-12, 10:56 AM
A couple of points here:

1) The law of diminishing returns kicks in, and further increases in bits have no noticeable effect.

2) Processing should be done with more bits than the input and output, to avoid round-off errors.

Peterfreedom
2009-08-12, 02:15 PM
10 bits is certainly the minimum you would want since the native data is carried as 8 bit codes for RGB.

So does that mean a Bell ExpressVu or StarChoice signal has only 8 bits to determine each color level, and that an 18-bit HDTV will not improve anything?

Flummox
2009-08-12, 04:26 PM
HD and Blu-ray are essentially 8-bit. Studios may work with the content at 10 bits or more, but it goes over the air or onto the disc at 8 (there is actually a lot of tech and standards here, e.g. http://en.wikipedia.org/wiki/Rec._709#Primary_chromaticities, but this is a big simplification).

Having internal processing at higher bit depths can help with the rounding errors mentioned above, and with image processing to avoid some annoying effects in dark or similarly coloured scenes (sky, smoke, shadows). But this is "post" processing by your TV or device, and the quality depends more on the chip and the tech it supports than on raw colour bit depth.
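As a rough illustration of the banding thing (a contrived Python sketch, not what any actual TV does): darken a smooth dark-sky gradient, round to the internal depth, then brighten it back. With a coarse intermediate the shades that got merged never come back, and you get wide visible bands.

import numpy as np

sky = np.arange(40, 121)              # a dark 8-bit gradient, codes 40..120

def darken_then_brighten(x8, internal_bits):
    # Darken by 4x, round to the internal depth, then brighten by 4x.
    # With exact math this cancels out; with a coarse intermediate the
    # merged shades are gone for good and the gradient turns into bands.
    scale = (1 << internal_bits) - 1
    x = x8 / 255.0
    x = np.round(x * 0.25 * scale) / scale
    x = np.round(np.clip(x * 4.0, 0.0, 1.0) * scale) / scale
    return np.round(x * 255).astype(int)

for bits in (8, 10, 12):
    out = darken_then_brighten(sky, bits)
    print(f"{bits:2d}-bit internal: {len(np.unique(out))} distinct shades left "
          f"(the source gradient had {len(np.unique(sky))})")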

But there are colour spaces (xvYCC) and bit depths (HDMI "Deep Colour") that would allow more colour depth to be sent/carried - just nothing at the consumer level other than a few video cameras is using them yet. So having "more" bits may future-proof you some. But then again - it may not.

jvincent
2009-08-12, 04:28 PM
Simplifying quite a bit, the final decoded output for HD video is 8 bits each for R, G, B. This is a restriction from the source material.

Strictly speaking anything above 12 bits is overkill.
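Rough back-of-the-envelope for why (again just Python arithmetic, not a claim about any specific chip): compare the worst-case rounding error at a given internal depth against half of one 8-bit output step - that's how far things have to drift before the final 8-bit output can change at all.

out_half_step = (1 / 255) / 2                            # drift needed to move one 8-bit output code
for internal_bits in (10, 12, 16):
    worst_round = (1 / ((1 << internal_bits) - 1)) / 2   # worst-case error per rounding
    print(f"{internal_bits}-bit internal: ~{out_half_step / worst_round:.0f} worst-case "
          f"roundings, all in the same direction, before the 8-bit output shifts")

So at 12 bits you already need a long chain of worst-case roundings all stacking the same way before anything reaches the screen, which is basically the overkill argument.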

Tezster
2009-08-12, 04:49 PM
Simplifying quite a bit, the final decoded output for HD video is 8 bits each for R, G, B. This is a restriction from the source material.

Strictly speaking anything above 12 bits is overkill.
That's how I tend to think of it as well. All your source material is going to be 8-bit for the foreseeable future, so a few bits over that to allow for accurate interpolation/conversion is fine, but I don't see uber-high internal processing at anything over 12 bits as being useful.

I think this article (http://www.audioholics.com/tweaks/calibrate-your-system/hdmi-black-levels-xvycc-rgb) does a good job of explaining some of the concepts.

JohnnyG
2009-08-13, 04:57 PM
This "8-bit" colour is not the same as 8-bit colour depth on, say, you PC. This "8-bit" or "10-bit" is in reference to the sampling size of the analog waveform of each component of the video signal.

Increasing the sample size beyond that of the source material is only - potentially - going to give you a marginal improvement, mostly in fine gradients. I certainly wouldn't choose my TV based on this, as there are many more important factors in image quality and accuracy.