# How Does Analog Data Get Converted Into Digital?


If you think of data like text files, image files, and sound files, you probably picture sequences of 0s and 1s, where everything is stored as an exact value. But have you noticed that some data sources (like light and sound in the real world) are continuous? These are called analog data. In this article, we’ll find out how analog data can be converted into digital data and stored on computers.

#### Approximation — the Nature of the Conversion

To convert a real number into a discrete value, you approximate it. In general, an arbitrary real number has infinitely many non-repeating decimal places, which means you would need infinite storage to record it precisely. But if you approximate the number and cut off the less significant decimal places, the storage needed becomes finite. This is the key to approximation, and thus to the digitization process.
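As a minimal sketch of this idea, consider storing π: its decimal expansion never ends, but rounding it to a fixed number of decimal places makes it fit in a finite number of digits. (The `approximate` helper below is just for illustration.)

```python
import math

def approximate(value, decimal_places):
    """Cut a real number off after a given number of decimal places."""
    factor = 10 ** decimal_places
    return round(value * factor) / factor

# pi has infinitely many non-repeating decimals, but the
# approximation needs only finitely many digits to store.
print(approximate(math.pi, 4))  # 3.1416
```

The more decimal places you keep, the closer the stored value is to the original, at the cost of more storage per number.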

In analog data sources, the underlying quantities are continuous real values. In a scene we observe in real life, both the positions and the colors at each position vary continuously. In sound, both time and the amplitude of the sound wave are continuous. The approximation process itself is very simple (you just cut the data off at a specific number of “decimal places”), but the details differ according to the file format and the requirements of the file.

#### Approximation at the Sensor

Everything in your computer is processed digitally, so this kind of analog, continuous data exists only in the “outside world”, meaning the conversion happens at the sensor. The sensor picks up the continuous signal at discrete time or space intervals; this process is called sampling. It then approximates each detected value so that it can be sent to the computer digitally; this is called quantization.

What happens at the first step depends on whether you are collecting image or audio data. If you are taking an image, note that a camera contains a light sensor for each pixel. The number of pixels (along the x- and y-axes) depends on the camera’s resolution and determines the dimensions of the final image. As the light sensors pick up photons, electrical signals are generated so that the corresponding signal (and thus the light level) can be recorded. Finally, the data is sent to the computer and encoded into the correct format.

When you are recording sound, the sound detector takes a specific number of measurements every second, recording the amplitude of the signal at each instant. The amplitude, which is continuous, is then converted into a number based on the required bit depth, so that an audio file can be produced from the data.
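The two steps above can be sketched in a few lines. This is a toy model, not a real recording pipeline: the 440 Hz tone, the 8000 Hz sampling rate, and the 16-bit depth are all illustrative choices.

```python
import math

SAMPLE_RATE = 8000                     # samples taken per second
BIT_DEPTH = 16                         # bits allocated per sample
MAX_LEVEL = 2 ** (BIT_DEPTH - 1) - 1   # 32767 for signed 16-bit audio

def record(frequency, duration):
    """Sample a sine tone at discrete times and quantize each amplitude."""
    samples = []
    for n in range(int(SAMPLE_RATE * duration)):
        t = n / SAMPLE_RATE                            # sampling: discrete time
        amplitude = math.sin(2 * math.pi * frequency * t)
        samples.append(round(amplitude * MAX_LEVEL))   # quantization: integer level
    return samples

tone = record(440, 0.01)  # 80 integer samples for 10 ms of a 440 Hz tone
```

Each continuous amplitude in [-1, 1] ends up as one of the 65536 integer levels a 16-bit sample can represent, which is exactly the approximation described above.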

#### Bit Depths, Color/File Formats, and Sampling Rates

During the digitization and file creation process, there are a few requirements that we need to think about in order to produce images and audio with the most desirable quality while conserving computer storage.

Let’s start with bit depth, which applies to both images and sound. The bit depth setting allocates a certain number of bits per unit of measurement, limiting the number of possible values that unit can take on. For example, the bit depth of an image controls the number of colors that can be represented in each pixel. In the RGB color format, a 24-bit color depth is often used, with 8 bits allocated to each of red, green, and blue. This is usually enough for display purposes, as the human eye generally can’t distinguish between adjacent colors in that scheme.
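To make the 24-bit scheme concrete, here is a small sketch of how three 8-bit channels can be packed into one 24-bit value (the `pack_rgb` helper is hypothetical, for illustration only):

```python
def pack_rgb(red, green, blue):
    """Pack three 8-bit channel values (0-255) into one 24-bit color."""
    assert all(0 <= c <= 255 for c in (red, green, blue))
    return (red << 16) | (green << 8) | blue

white = pack_rgb(255, 255, 255)
print(f"{white:06X}")  # FFFFFF, the familiar hex code for white
print(2 ** 24)         # 16777216 representable colors in total
```

With 8 bits per channel, each channel has 256 levels, and the three channels together give 2^24 (about 16.8 million) distinct colors.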

Then there are the color formats. Did you know that RGB is not the only color format widely used in image files today? There is also CMYK (cyan, magenta, yellow, and black), which is often used for printing; the chosen format affects how the data is encoded in the computer. File formats (and the compression applied to them) matter too: a bitmap (BMP) is totally uncompressed, while photos are commonly stored in compressed formats like JPG (lossy) and PNG (lossless), making the format a major factor in the file creation process.

And finally, for audio recordings, there is the sampling rate. This determines how many amplitude measurements the sound detector takes every second. A common sampling rate is 44100 Hz (44100 samples/second). The human ear generally cannot perceive sounds with a frequency higher than 20000 Hz. Since the sampling rate must be more than twice the highest sound frequency (according to the Nyquist–Shannon sampling theorem), it must exceed 40000 Hz for the highest audible frequencies to be recorded correctly. Thus, when converting analog audio into a file, a sampling rate around this value is used.
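The arithmetic behind that choice is quick to verify: half the sampling rate (the Nyquist limit) is the highest frequency that can be captured faithfully, and 44100 Hz comfortably covers the audible range.

```python
HIGHEST_AUDIBLE_HZ = 20000   # approximate upper limit of human hearing
SAMPLE_RATE_HZ = 44100       # the common CD-quality sampling rate

# Frequencies above half the sampling rate cannot be
# reconstructed correctly (they alias to lower frequencies).
nyquist_limit = SAMPLE_RATE_HZ / 2
print(nyquist_limit)                       # 22050.0
print(nyquist_limit > HIGHEST_AUDIBLE_HZ)  # True
```

Any sampling rate above 40000 Hz would satisfy the criterion; 44100 Hz is the value that became standard with the audio CD.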

#### Conclusion

In this article, we’ve seen how analog image and audio data gets converted into digital files by approximation (sampling and quantization), and the concerns and requirements built into these conversion systems (e.g., image sensors and microphones). If you have any questions or suggestions about this article, please leave them in the comments below.