Exploring the Basics of Computers (Part 01): How 0s and 1s Power the Digital World

A Computer's Feelings: the computation of 0s and 1s


In this contemporary world, everyone knows that computers only understand 0 and 1. Yet whenever someone asks how a computer actually runs, the usual reply is "it uses 0 and 1... the binary system... blah blah", and the curious question ends without any satisfying answer. Isn't that sad, cruel, despotic and evil? Sorry, just kidding, hehe. In this blog, I have gathered the answers to that question mark. Let's get started and dive into it from the very beginning.

We all know that in mathematics there are the digits 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, and numbers keep growing in a certain sequence and order, like 87, 98, 100, 1000 and so on. But have you ever wondered why, out of so many numbers, computers only understand 0s and 1s? More precisely, a computer uses the binary system, so the real question is: why does the binary system use only 0 and 1?

Let me take you to the most fundamental part of a computer. The most basic thing a computer needs in order to run is electricity. At any point in a circuit, the electricity is either flowing or it is not; in simple words, the flow is either ON or OFF. So electricity gives us exactly two states, and by convention these two states are denoted 0 (zero) for OFF and 1 (one) for ON. This is a simple and efficient way to represent information using the two basic states of an electronic circuit, and it is the foundation of all digital technology today.

In an electronic circuit, the flow of electricity can be either on or off, which can be represented by a 1 or a 0, respectively. By using only these two states, the binary system can represent any type of information as a combination of 0s and 1s, also known as "bits". This makes it easy for computers to store and process large amounts of information in a relatively small space, and to do so very quickly and efficiently. By the way, the bit deserves a post of its own; let's leave it for another day.
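To make that concrete, here is a tiny sketch in Python (my choice of language for the examples in this post, purely for illustration): an ordinary number is nothing more than a particular combination of those two states.

```python
# Any ordinary number can be written as a combination of the two states 0 and 1.
number = 13

binary_text = format(number, "b")   # write 13 using only binary digits
print(binary_text)                  # -> "1101"

# And the reverse: read a string of 0s and 1s back as an ordinary number.
print(int("1101", 2))               # -> 13
```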

Coming back to the topic, the binary system is easy to understand and implement, which makes it the natural choice for digital systems. Working with only two symbols also makes error detection, error correction and data compression simpler than they would be with a larger set of digits. (Visualize, back in those days, hunting for a bug in a black-and-white TV by searching for 0s and 1s, hehe.)
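The post only mentions error detection in passing, but here is a minimal sketch of one classic binary trick, the parity bit, just to show how little machinery two symbols need (an illustrative Python example, not something described in the post itself):

```python
def add_parity(bits: str) -> str:
    """Append a parity bit so that the total number of 1s is even."""
    parity = "1" if bits.count("1") % 2 == 1 else "0"
    return bits + parity

def looks_corrupted(bits_with_parity: str) -> bool:
    """If the number of 1s is odd, at least one bit was flipped in transit."""
    return bits_with_parity.count("1") % 2 == 1

sent = add_parity("0100100")     # -> "01001000" (even number of 1s)
print(looks_corrupted(sent))     # -> False: looks intact

flipped = "11001000"             # the first bit got flipped on the way
print(looks_corrupted(flipped))  # -> True: something went wrong
```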

To wrap up the answer: the binary system uses 0 and 1 because it is a simple and efficient way to represent information using the two basic states of an electronic circuit, and it keeps error detection, error correction and data compression manageable.

Now we are clear about the "why" of 0 and 1, but how do those 0s and 1s actually work?

So, we have the answer to "why only 0 and 1". But we still don't know what the computer does with those 0s and 1s. Let me tell you about the inner processes that happen unnoticed, but first I need to introduce ASCII (American Standard Code for Information Interchange). Don't worry about the complicated name.

Just remember that there is something called "ASCII".

ASCII

ASCII stands for American Standard Code for Information Interchange. It is a code that assigns a unique number (or code) to each character used in text-based communication, such as letters, digits and symbols. This makes it easy for computers to store, process and transmit text data.

For example, the letter "A" is assigned code 65, the letter "B" is assigned code 66, and so on. This way, when a computer receives code 65, it knows to display the letter "A" on the screen. ASCII code allows the computer to understand and display text in a way that humans can read.
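Most programming languages let you peek at these codes directly. Here is what it looks like in Python (shown only to illustrate the idea; ord and chr report Unicode code points, which match ASCII for these characters):

```python
print(ord("A"))   # -> 65, the code assigned to "A"
print(ord("B"))   # -> 66, the code assigned to "B"

print(chr(65))    # -> "A": given the code 65, the computer knows which letter to show
```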

ASCII is a very old standard; it has been in use for decades, and it contains only 128 characters, which include capital letters, small letters, digits, punctuation marks and some control characters such as newline and tab.
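If you are curious what those 128 characters look like, a loop like this prints the printable part of the table (a small illustrative snippet; codes 0 to 31 and 127 are the control characters mentioned above):

```python
# Print the printable portion of the 128-character ASCII table.
for code in range(32, 127):      # 0-31 and 127 are control characters (newline, tab, ...)
    print(code, chr(code))
```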

In simple words, ASCII is a way for computers to understand and display text, by assigning a unique number to each character.

ASCII (American Standard Code for Information Interchange) is used because it is a standard way to represent characters, such as letters, numbers, and symbols, in a digital format. It assigns a unique number (code) to each character, making it easy for computers to store and manipulate text data. ASCII is widely supported and is still commonly used today, especially in systems and applications that rely on plain text data.

Twist

Let me tell you something: ASCII is an old-school chart, and in practice it has largely been superseded by Unicode. As you know, ASCII has only 128 characters, whereas Unicode has room for more than a million code points, with well over a hundred thousand characters assigned so far. That is why our smartphones can show so many characters and emojis from every language, along with plenty of symbols we never use. This is the inner machinery I was hinting at before we talked about ASCII.

Unicode

Unicode is a universal character encoding standard that assigns unique numbers, called code points, to every character, symbol, and emoji used in all languages and scripts of the world. It was developed to address the limitations of older character encoding standards, such as ASCII, which could only represent a small number of characters.

Unlike ASCII, which can only represent 128 characters, Unicode can represent well over a hundred thousand different characters and symbols. This includes not only the letters and numbers of various scripts (Latin, Greek, Chinese, Arabic and so on), but also mathematical symbols, emoji, and even historical scripts.

Unicode is designed to be compatible with ASCII, so the characters in the ASCII character set keep the same code points in Unicode. However, Unicode also includes many additional characters and symbols that are not found in ASCII.

Unicode is widely used in computer systems and applications, including operating systems, web browsers, and programming languages, to ensure that text is displayed correctly and consistently across different languages and platforms.

In simple terms, Unicode is a way for computers to understand and display text in any language or script, by assigning a unique number to each character.
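You can see this for yourself with a few lines of Python (purely illustrative): every character, from any script, and even an emoji, is just a number, and "A" keeps its old ASCII value of 65.

```python
# Every character, in any script, has its own Unicode code point.
for ch in ["A", "Ω", "あ", "😀"]:
    print(ch, ord(ch))

# A  65      <- same value as in ASCII, so Unicode stays compatible
# Ω  937     <- Greek capital omega
# あ 12354   <- Japanese hiragana "a"
# 😀 128512  <- an emoji is just another code point
```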

Remember, our question is still unanswered :)

Suppose I want to send a "HI" message to my crush. Let's look those letters up in the Unicode table:

Using the full Unicode code chart is a little harder than the compact ASCII table, but for these letters the values are the same.

You can see:

  • H lies at 72

  • I lies at 73

    So, now you can visualize that whenever I send my crush a "HI" message, what really travels are those decimal values, 72 and 73, converted into 0s and 1s: 01001000 01001001.
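Here is that whole journey in a few lines of Python (an illustration of the idea above, not what a messaging app literally runs):

```python
message = "HI"

codes = [ord(ch) for ch in message]                      # look up each character's code
bits = " ".join(format(code, "08b") for code in codes)   # write each code as 8 bits

print(codes)  # -> [72, 73]
print(bits)   # -> 01001000 01001001
```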

Closing notes :)

We've just scratched the surface of what ASCII and Unicode are and how they work. In this blog post, we've covered the basics of these character encoding standards and why they're important for digital communications. But there's still so much more to learn about these topics, such as how a programming language works and how software is made using that language.

I'll be diving deeper into these topics in upcoming blog posts, so stay tuned! In the meantime, if you have any questions or comments about this post, please don't hesitate to reach out. Thanks for reading! :)