Color depth, also known as bit depth, refers to the number of bits used to represent the color of a single pixel. This value determines the maximum number of unique colors that can be displayed in an image.
The relationship between bit depth (n) and the number of available colors is exponential, expressed by the formula: number of colors = 2^n. For example, a 1-bit image can only represent 2 colors (black and white), while an 8-bit image can represent 256 different colors.
True Color typically refers to a 24-bit depth, which allows for over 16 million colors (2^24 = 16,777,216). This provides enough variety to represent photographic images with realistic gradients and shadows that are indistinguishable to the human eye.
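The relationship above can be checked directly. This short sketch prints the color count for the three bit depths mentioned (1-bit, 8-bit, and 24-bit True Color):

```python
# Number of representable colors for a given bit depth: colors = 2 ** n
for bits in (1, 8, 24):
    print(f"{bits}-bit -> {2 ** bits:,} colors")
# 1-bit -> 2 colors
# 8-bit -> 256 colors
# 24-bit -> 16,777,216 colors
```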
The estimated file size of a raw bitmap image is calculated by multiplying the total number of pixels (resolution) by the color depth (bits per pixel). The resulting value in bits is usually converted into larger units like bytes, KiB, or MiB for practical use.
Formula: file size (bits) = image width (pixels) × image height (pixels) × color depth (bits per pixel)
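As a worked example of the formula, here is a sketch for a hypothetical 800 × 600 image at 24-bit depth, including the unit conversions down to MiB (the dimensions are illustrative, not from the text):

```python
# Estimated raw bitmap size: resolution x color depth
width, height, depth = 800, 600, 24     # pixels, pixels, bits per pixel

size_bits = width * height * depth      # 11,520,000 bits
size_bytes = size_bits / 8              # divide by 8 -> 1,440,000 bytes
size_kib = size_bytes / 1024            # 1 KiB = 1024 bytes -> 1406.25 KiB
size_mib = size_kib / 1024              # 1 MiB = 1024 KiB  -> ~1.37 MiB

print(size_bits, size_bytes, size_kib, round(size_mib, 2))
```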
Every bitmap file includes a file header, which is a block of metadata stored at the beginning of the file. This header contains essential information for the computer to render the image correctly, including the file type (e.g., .bmp, .jpg), the file size, the image resolution, and the bit depth.
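To make the header idea concrete, the sketch below hand-builds the first 30 bytes of a BMP file (standard BITMAPFILEHEADER plus the start of a BITMAPINFOHEADER) and then reads back the metadata fields a renderer would use. The image dimensions are invented for illustration:

```python
import struct

width, height, bit_depth = 640, 480, 24

header = struct.pack(
    "<2sIHHI",                                   # BITMAPFILEHEADER (14 bytes)
    b"BM",                                       # file type signature
    14 + 40 + width * height * bit_depth // 8,   # total file size in bytes
    0, 0,                                        # reserved fields
    14 + 40,                                     # offset to pixel data
) + struct.pack(
    "<IiiHH",                                    # first 16 bytes of BITMAPINFOHEADER
    40,                                          # DIB header size
    width, height,                               # image resolution
    1,                                           # color planes
    bit_depth,                                   # bits per pixel
)

# Parse the fields back out, as image-viewing software would:
file_type = header[0:2]
file_size = struct.unpack_from("<I", header, 2)[0]
w = struct.unpack_from("<i", header, 18)[0]
h = struct.unpack_from("<i", header, 22)[0]
bpp = struct.unpack_from("<H", header, 28)[0]
print(file_type, file_size, w, h, bpp)   # b'BM' 921654 640 480 24
```

Note how the resolution and bit depth live at fixed byte offsets: that is what lets any program interpret the pixel data that follows.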
Unit Conversion: Always check if the exam question asks for the file size in bits, bytes, or KiB. Forgetting to divide by 8 to reach bytes is a very common error that results in lost marks.
Color Depth Logic: If a question provides the number of colors needed (e.g., 50 colors), you must find the smallest power of 2 that is greater than or equal to that number (in this case, 2^6 = 64 ≥ 50, so 6 bits are required).
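That "smallest power of 2" step is just a ceiling of a base-2 logarithm. A minimal sketch (the helper name `bits_needed` is my own):

```python
import math

def bits_needed(colors: int) -> int:
    # Smallest n such that 2 ** n >= colors
    return math.ceil(math.log2(colors))

print(bits_needed(50))    # 6  (2**6 = 64 >= 50)
print(bits_needed(256))   # 8  (exact power of 2 needs no rounding up)
```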
Header Awareness: Remember that the calculated file size is only an 'estimate': the actual file will be slightly larger because of the file header and any additional metadata stored alongside the pixel data.
Scaling Effects: Be prepared to explain why a high-resolution image might still look 'pixelated' on a low-density screen or when zoomed in beyond its native resolution.