16 bit color imitating 8 bit color

CS 105 Gossip Corner: Discussions related to homework assignments: Ex 1: Digital Representations: 16 bit color imitating 8 bit color
By Powers, Eric C. (02ecp) on Saturday, October 3, 1998 - 02:39 pm:

This confuses me, since I do not fully understand how 8-bit color is made using a palette, or why a certain number of bits is used for each pixel. Basically, I do not see what differences I should be finding between 16-bit mimicking 24-bit and true 24-bit, or between 16-bit mimicking 8-bit and true 8-bit. Does anyone know how to explain this better?


By Tom Murtagh (Admin) on Monday, October 5, 1998 - 12:29 am:

The "palette" is just a list of the colors that actually appear in an image. If this list has 256 or fewer entries, each entry can be uniquely identified by an 8-bit binary number that corresponds to the color's position within the list. Given such a palette/list, each pixel within the picture can be described by giving the position of its color within the palette. If the position of every color can be expressed in only 8 bits, then we can get away with 8 bits per pixel, except....
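[Editor's note: the lookup scheme described above can be sketched in a few lines of Python. The tiny 2x2 image and the particular palette colors here are made up purely for illustration.]

```python
# A minimal sketch of indexed (palette) color for a hypothetical 2x2 image.
# The palette lists every distinct color in the image as (red, green, blue).
palette = [
    (255, 0, 0),    # index 0: red
    (0, 0, 255),    # index 1: blue
]

# Each pixel stores only an index into the palette. Any index fits in
# 8 bits as long as the palette has at most 256 entries.
pixels = [0, 1, 1, 0]  # the 2x2 image, row by row

# Recovering the full color of a pixel is just a table lookup.
colors = [palette[i] for i in pixels]
print(colors[0])  # (255, 0, 0)
```

The point is that the picture itself never repeats the 24-bit color values; it only repeats the short indices.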


We also need to include some representation of the palette itself if we are trying to describe the picture to someone else (i.e. another computer). Each color within the palette must be described in a way that another computer can understand. The easiest thing to do is to fall back on the standard of describing colors by giving values for the amount of red, green, and blue in each color. 24 bits (8 bits each for red, green, and blue) are required to describe a color in this way. So, the palette itself will involve 24 bits (= 3 bytes) for each color.
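[Editor's note: the arithmetic works out as follows. The 640x480 image size is an assumed example, not one from the discussion.]

```python
# Storage comparison sketch, assuming a hypothetical 640x480 image.
width, height = 640, 480

# True 24-bit color: 3 bytes (red, green, blue) per pixel, no palette.
true_color_bytes = width * height * 3

# 8-bit indexed color: 1 byte per pixel, plus the palette itself,
# which holds up to 256 colors at 3 bytes each.
palette_bytes = 256 * 3
indexed_bytes = width * height * 1 + palette_bytes

print(true_color_bytes)  # 921600
print(indexed_bytes)     # 307968
```

So the palette adds only a small fixed cost (at most 768 bytes), while cutting the per-pixel cost by two thirds.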

Hope this helps,

Tom

