WHY DO ELECTRONIC CIRCUITS USE BINARY CODE?

Although appliances containing electronic circuits can perform very complicated tasks and even appear to think for themselves, they are operated entirely by electrical current. The current cannot “think”, but it can be turned on or off, increased or decreased, or made to change direction by electronic components. The activity in any one part of a circuit therefore depends on whether electrical current is detected or not, and this can be represented by a 1 if a current is detected or a 0 if it is not. Because binary code uses only the digits 0 and 1, it enables an electronic device to perform calculations.
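
To make that idea concrete, here is a minimal Python sketch (the readings and variable names are invented for illustration) that turns a series of “current detected” observations into a string of 1s and 0s, which the device can then treat as an ordinary number:

    # Hypothetical readings: True where current is detected, False where it is not.
    current_detected = [True, True, False, False, False, False, True]

    # Map each reading to the digit 1 or 0.
    bits = "".join("1" if detected else "0" for detected in current_detected)
    print(bits)          # 1100001
    print(int(bits, 2))  # 97 -- the same pattern, read as a number to calculate with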

A binary code represents text, computer processor instructions, or any other data using a two-symbol system. The two-symbol system used is often “0” and “1” from the binary number system. The binary code assigns a pattern of binary digits, also known as bits, to each character, instruction, etc. For example, since each bit can take one of two values, a binary string of eight bits can represent any of 2⁸ = 256 possible values and can, therefore, represent a wide variety of different items.
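
A quick Python check of that count, since each of the eight bits independently takes one of two values:

    from itertools import product

    # Each bit has two possible values, so eight bits give 2**8 combinations.
    print(2 ** 8)  # 256

    # Enumerating every 8-bit pattern confirms the count.
    patterns = ["".join(bits) for bits in product("01", repeat=8)]
    print(len(patterns))              # 256
    print(patterns[0], patterns[-1])  # 00000000 11111111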

In computing and telecommunications, binary codes are used for various methods of encoding data, such as character strings, into bit strings. Those methods may use fixed-width or variable-width strings. In a fixed-width binary code, each letter, digit, or other character is represented by a bit string of the same length; that bit string, interpreted as a binary number, is usually displayed in code tables in octal, decimal, or hexadecimal notation. There are many character sets and many character encodings for them.
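
As an illustration of a fixed-width code table, this Python sketch prints the same eight-bit ASCII pattern for a few characters in each of the notations mentioned above:

    # Each character's 8-bit ASCII code, shown in binary, octal, decimal, and hex.
    for ch in "Abc":
        code = ord(ch)
        print(f"{ch}: bin {code:08b}  oct {code:03o}  dec {code:3d}  hex {code:02X}")

    # A: bin 01000001  oct 101  dec  65  hex 41
    # b: bin 01100010  oct 142  dec  98  hex 62
    # c: bin 01100011  oct 143  dec  99  hex 63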

A bit string, interpreted as a binary number, can be translated into a decimal number. For example, the lower-case a, represented by the bit string 1100001 in the standard seven-bit ASCII code, can also be represented as the decimal number 97.
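
The translation is just positional arithmetic: the digits of 1100001 weight the powers of two 64, 32, 16, 8, 4, 2, and 1, and the 1s pick out 64 + 32 + 1 = 97. In Python:

    # Interpret the 7-bit ASCII pattern for lower-case "a" as a binary number.
    bit_string = "1100001"
    value = int(bit_string, 2)  # 64 + 32 + 1
    print(value)                # 97
    print(chr(value))           # a -- and back from the number to the character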