Code With C: The Way to Programming
## Understanding Bits and Bytes: Exploring Data Representation

Hey there, tech-savvy pals! Today, we’re going to unravel the mysteries of data representation by delving into the realm of bits and bytes. 🤓 Let’s break it down like never before!

## Definition of Bits and Bytes

First things first, let’s talk about what bits and bytes actually are. Bits are the smallest units of data in computing, represented as either a 0 or a 1. A byte, on the other hand, consists of 8 bits. Yep, you heard that right: 8 bits cozying up together in a byte!

## Purpose and Importance of Understanding Bits and Bytes

Now, you might be wondering, “Why do I need to wrap my head around these bits and bytes?” Well, buckle up, because understanding them is like holding the key to a secret coding treasure chest! From programming to data storage, bits and bytes are the building blocks of the digital world.

## Data Representation with 8 Bits in a Byte

### Understanding the Binary System

Ah, the binary system, where everything boils down to 0s and 1s. It’s like the yin and yang of the computing world, representing off and on states. With just two digits, the binary system works wonders in storing and processing data efficiently.

## How 8 Bits in a Byte Represent Data

Picture this: each bit in a byte is like a tiny switch that can either be off (0) or on (1). By combining 8 of these switches, we can create unique patterns to represent different characters, numbers, or instructions in a computer.
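To make the switch picture concrete, here’s a tiny Python sketch (the sample values are my own) that prints the 8-bit pattern behind a few numbers:

```python
# Each value below maps to a unique pattern of 8 on/off "switches".
for value in (0, 1, 72, 255):          # 72 happens to be the ASCII code for 'H'
    bits = format(value, "08b")        # pad to 8 bits, e.g. 72 -> '01001000'
    print(value, "->", bits)
```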

## Number of Possible Values with 8 Bits in a Byte

### Understanding the Concept of Combinations and Permutations

Now, let’s dive into the math realm for a bit. To calculate the number of possible values with 8 bits in a byte, we just count the combinations: each bit independently takes one of two values, so 8 bits give 2 multiplied by itself 8 times, i.e. 2^8. It’s all about the art of arranging those 0s and 1s in various ways!

## Calculation of Possible Values with 8 Bits in a Byte

So, the burning question is, how many possible values can we have with the 8 bits snugly nestled in a byte? Drumroll, please! 🥁 It’s a whopping 2^8 = 256 different values! That’s right: every pattern from 00000000 to 11111111.
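A quick sanity check in Python (a sketch, not part of the original article) confirms the count:

```python
# Enumerate every 8-bit pattern and count them.
n_bits = 8
patterns = [format(i, "08b") for i in range(2 ** n_bits)]
print(len(patterns))                 # 256 distinct patterns
print(patterns[0], patterns[-1])     # from all-zeros to all-ones
```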

## Comparison with Other Data Representation Systems

### Comparison with Decimal and Hexadecimal Systems

Let’s shake things up a bit by comparing our trusty 8-bit byte with other data representation systems like decimal and hexadecimal. While decimal uses 10 digits (0-9) and hexadecimal employs 16 characters (0-9, A-F), our byte stands strong with its compact yet powerful 8-bit structure.

## Advantages and Disadvantages of Using 8 Bits in a Byte for Data Representation

Ah, the age-old debate of pros and cons! The 8-bit byte offers a balance between efficiency and complexity. It’s great for representing a wide range of values compactly, but a single byte falls short when values need more range or precision than 256 states can offer, which is why larger multi-byte types exist.

## Applications of Data Representation with 8 Bits in a Byte

### Use of 8 Bits in a Byte in Computer Systems

From processing instructions to storing images and text, the humble 8-bit byte plays a crucial role in the inner workings of computer systems. It’s like the unsung hero silently powering our digital world behind the scenes.

## Impact of Data Representation on Computer Processing and Storage

Imagine a world without efficient data representation—chaos, right? By optimizing how data is represented using 8 bits in a byte, we can enhance computer processing speed, reduce storage requirements, and pave the way for smarter technologies.

Overall, understanding the basics of bits and bytes opens up a treasure trove of possibilities in the vast landscape of computing. So, embrace the binary charm, dive into the world of data representation, and remember: when in doubt, byte it out! 💻✨

Random Fact: Did you know that the term “byte” was coined by Dr. Werner Buchholz in 1956 while working on the IBM Stretch computer? 🤯

In closing, keep coding, keep exploring, and may the bits be ever in your favor! 🚀

## Program Code – Understanding Bits and Bytes: Exploring Data Representation


This program is all about working with the inner bits and bytes, the OG Morse code of computing if y’know what I mean. So grab your coder goggles; it’s about to get bit-y.

First, we’ve got to_binary(). This bad boy takes an integer and spits out a binary string so neat you could tie it with a bow. You can even set the number of bits for extra fun.

Next on deck is to_hex(). Ever need to cook up an integer in hex form with that chef’s kiss ‘0x’ garnish? This function has got your back.

If you’re drowning in a sea of binary, binary_to_int() is the lifeboat that turns that ‘101010’ SOS into a cozy integer island you can land on.

But wait, there’s more! hex_to_int() is like your hex translator, turning those wacky ‘0x’ phrases into plain old integers you can understand.

Let’s talk text messages—old school style with ascii_to_binary() . Give it a string and bam, you have a binary sequence that practically telegraphs your message.

And if you ever need to decode those binary smoke signals back into text, binary_to_ascii() is the rescue chopper bringing you back to civilization aka ASCII land.

The main event is all about showing off these functions like proud parents at a school play. We’ve got number play, binary banter, hex chit-chat, and even a lovely ASCII-bian dramatization.

So in a nutshell, this code is the ultimate translator, bridging the gap between human-friendly and computer-lingo. Creating this and trying to explain it feels like teaching your grandma to Snapchat… but hey, I hope y’all find it as nifty as a new Spotify playlist! Thanks for tuning in, and keep on coding in the free world! ✌️👩💻
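The original listing did not survive the page extraction, so here is a minimal Python sketch of the program described above. The function names come from the text; the bodies are my reconstruction and may differ from the original:

```python
def to_binary(n, bits=8):
    """Integer -> zero-padded binary string; the bit width is settable."""
    return format(n, f"0{bits}b")

def to_hex(n):
    """Integer -> hex string with the '0x' garnish."""
    return hex(n)

def binary_to_int(s):
    """Binary string like '101010' -> integer."""
    return int(s, 2)

def hex_to_int(s):
    """Hex string like '0xff' -> integer."""
    return int(s, 16)

def ascii_to_binary(text):
    """Each character -> its 8-bit pattern, space separated."""
    return " ".join(format(ord(c), "08b") for c in text)

def binary_to_ascii(bits):
    """Space-separated 8-bit groups -> text."""
    return "".join(chr(int(b, 2)) for b in bits.split())

if __name__ == "__main__":
    print(to_binary(42))                          # 00101010
    print(to_hex(255))                            # 0xff
    print(binary_to_int("101010"))                # 42
    print(hex_to_int("0xff"))                     # 255
    print(ascii_to_binary("Hi"))                  # 01001000 01101001
    print(binary_to_ascii("01001000 01101001"))   # Hi
```

Round-tripping a value through these helpers is a nice way to convince yourself the representations really are interchangeable.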


## Data representation

Computers use binary - the digits 0 and 1 - to store data. A binary digit, or bit, is the smallest unit of data in computing. It is represented by a 0 or a 1. Binary numbers are made up of binary digits (bits), eg the binary number 1001.

The circuits in a computer's processor are made up of billions of transistors. A transistor is a tiny switch that is activated by the electronic signals it receives. The digits 1 and 0 used in binary reflect the on and off states of a transistor.

Computer programs are sets of instructions. Each instruction is translated into machine code - simple binary codes that activate the CPU. Programmers write computer code and this is converted by a translator into binary instructions that the processor can execute. All software, music, documents, and any other information that is processed by a computer, is also stored using binary. [1]



## How a file is stored on a computer

## How an image is stored in a computer

## The way in which data is represented in the computer

To include strings, integers, characters and colours. This should include considering the space taken by data, for instance the relation between the hexadecimal representation of colours and the number of colours available [3] .
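As a sketch of the colour point (the colour value here is a made-up example): two hex digits per channel means one byte each for red, green and blue, so 24 bits in total and 2^24 available colours:

```python
# A hex colour code packs three one-byte channels: #RRGGBB.
colour = "#1E90FF"                               # hypothetical example colour
r, g, b = (int(colour[i:i + 2], 16) for i in (1, 3, 5))
print(r, g, b)                                   # each channel is 0..255
print(2 ** 24)                                   # 16,777,216 colours available
```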

This helpful material is used with gratitude from a computer science wiki under a Creative Commons Attribution 3.0 License [4]

## Sound

- Let's look at an oscilloscope
- The BBC has an excellent article on how computers represent sound

## See Also

## Standards

- Outline the way in which data is represented in the computer.

## References

- [1] http://www.bbc.co.uk/education/guides/zwsbwmn/revision/1
- [2] https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/
- [3] IBO Computer Science Guide, First exams 2014
- [4] https://compsci2014.wikispaces.com/2.1.10+Outline+the+way+in+which+data+is+represented+in+the+computer

## Binary Representations in Digital Logic

Binary is a base-2 number system that uses two states, 0 and 1, to represent a number. We can also call them a true state and a false state. A binary number is built the same way as a normal decimal number.

For example, the decimal number 45 can be represented as 4*10^1 + 5*10^0 = 40 + 5 = 45.

Now in binary, 45 is represented as 101101. Just as there are powers of 10 in a decimal number, there are powers of 2 in a binary number. Hence 45, which is 101101 in binary, can be represented as:

1*2^5 + 0*2^4 + 1*2^3 + 1*2^2 + 0*2^1 + 1*2^0 = 32 + 0 + 8 + 4 + 0 + 1 = 45

The binary number is read from left to right, with the leftmost bit carrying the highest power of 2.

Sign and Magnitude representation – There are many ways of representing negative integers; sign-magnitude is one of them. This system uses one bit to indicate the sign. Mathematical numbers are generally made up of a sign and a value: the sign indicates whether the number is positive (+) or negative (–), while the value indicates the size of the number.

For example: 13, +256 or -574. Presenting numbers in this way is called sign-magnitude representation, since the leftmost digit can be used to indicate the sign and the remaining digits the magnitude, or value, of the number.

Sign-magnitude notation is the simplest and one of the most common methods of representing positive and negative numbers. Negative numbers are obtained simply by changing the sign of the corresponding positive number, for example +2 and -2, +10 and -10, etc. In binary, a 1 in the sign position marks the number as negative and a 0 marks it as positive.

For example, 0101101 represents +45 and 1101101 represents -45, where the magnitude occupies six digits and a seventh, leftmost digit represents the sign.

But a problem with the sign-magnitude method is that it can result in the possibility of two different bit patterns having the same binary value. For example, +0 and -0 would be 0000 and 1000 respectively as a signed 4-bit binary number. So using this method there can be two representations for zero, a positive zero 0000 and also a negative zero 1000 which can cause big complications for computers and digital systems.
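A small Python sketch (the function name and 8-bit width are my own choices) shows both the encoding and the double-zero problem:

```python
def sign_magnitude(n, bits=8):
    """Encode n in sign-magnitude: the top bit is the sign, the rest the
    magnitude. Illustration only; assumes abs(n) fits in bits-1 bits."""
    sign = "1" if n < 0 else "0"
    return sign + format(abs(n), f"0{bits - 1}b")

print(sign_magnitude(45))    # 00101101
print(sign_magnitude(-45))   # 10101101
print(sign_magnitude(0))     # 00000000 -- yet '10000000' would also mean -0
```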

The two complement notations used to represent signed binary numbers are:

1. One’s complement – One’s Complement is a method which can be used to represent negative binary numbers in a signed binary number system. In one’s complement, positive numbers remain unchanged as before.

Negative numbers however, are represented by taking the one’s complement of the unsigned positive number. Since positive numbers always start with a 0, the complement will always start with a 1 to indicate a negative number.

The one’s complement of a negative binary number is the complement of its positive counterpart. To take the one’s complement of a binary number, subtract it from a string of 1s of the same length; this amounts to simply interchanging the digits of the number. Thus the one’s complement of 1 is 0 and vice versa.

For example, the one’s complement of 1010100 is 0101011, obtained by interchanging every digit of the binary number.

2. Two’s complement – Two’s Complement is another method like one’s complement form, which we can use to represent negative binary numbers in a signed binary number system. In two’s complement, the positive numbers are exactly the same as before for unsigned binary numbers. A negative number, however, is represented by a binary number, which when added to its corresponding positive equivalent results in zero.

In two’s complement representation, a negative number is the 2’s complement of its positive number. If the subtraction of two numbers is X – Y then it can be represented as X + (2’s complement of Y).

The two’s complement of a binary number is its one’s complement plus 1.

The main advantage of two’s complement over one’s complement is that there is no double-zero problem, and it is a lot easier to generate the two’s complement of a signed binary number. Arithmetic operations are also relatively easy to perform when numbers are represented in two’s complement format.

For example, to represent -27: 27 in binary is 00011011, its one’s complement is 11100100, and adding 1 gives 11100101, which is -27 in two’s complement.
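A quick Python check of the -27 example (the helper is a sketch; masking with 2^bits − 1 exploits Python’s unbounded integers to produce the two’s-complement pattern):

```python
def twos_complement(n, bits=8):
    """Two's-complement bit pattern of n; assumes n fits in the given width."""
    return format(n & ((1 << bits) - 1), f"0{bits}b")

print(twos_complement(27))               # 00011011
print(twos_complement(-27))              # 11100101
# A number plus its negative wraps around to zero in 8 bits:
print((27 + (-27 & 0xFF)) & 0xFF)        # 0
```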

The integer formats above cannot represent fractional values, and extending them to do so becomes impractically long, so real number representations are used instead.

Real number representations – The goal is to represent a number with a fractional point in binary using the form ±mantissa × 2^exponent. The IEEE 754 standard defines how to encode such a real number. This standard offers a way to code a number using 32 bits (as well as 64 bits), and defines three components:

- The sign is represented by one bit, the highest-weighted bit (furthest to the left).
- The exponent is encoded using the next 8 bits (11 bits in the 64-bit representation) immediately after the sign.
- The mantissa (the bits after the binary point) occupies the remaining 23 bits (52 bits in the 64-bit representation).
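A hedged Python sketch using the standard struct module can pull the three components out of a 32-bit float, matching the layout just described:

```python
import struct

def float32_fields(x):
    """Split a float packed as IEEE 754 single precision into its
    sign (1 bit), biased exponent (8 bits) and mantissa (23 bits)."""
    (raw,) = struct.unpack(">I", struct.pack(">f", x))
    sign = raw >> 31
    exponent = (raw >> 23) & 0xFF      # biased by 127
    mantissa = raw & 0x7FFFFF
    return sign, exponent, mantissa

print(float32_fields(1.0))    # (0, 127, 0): exponent 0 stored as 127
print(float32_fields(-2.0))   # (1, 128, 0): negative, exponent 1
```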


## ARCHIVED: What are bits, bytes, and other units of measure for digital information?

Note: The following information is provided in part by the Extreme Science and Engineering Discovery Environment (XSEDE), a National Science Foundation (NSF) project that provides researchers with advanced digital resources and services that facilitate scientific discovery. For more, see the XSEDE website.

A bit is a binary digit, the smallest increment of data on a computer. A bit can hold only one of two values: 0 or 1, corresponding to the electrical values of off or on, respectively.

Because bits are so small, you rarely work with information one bit at a time. Bits are usually assembled into a group of eight to form a byte . A byte contains enough information to store a single ASCII character, like "h".

A kilobyte (KB) is 1,024 bytes, not one thousand bytes as might be expected, because computers use binary (base two) math, instead of a decimal (base ten) system.

Computer storage and memory are often measured in megabytes (MB) and gigabytes (GB). A medium-sized novel contains about 1 MB of information. 1 MB is 1,024 kilobytes, or 1,048,576 (1024x1024) bytes, not one million bytes.

Similarly, 1 GB is 1,024 MB, or 1,073,741,824 (1024x1024x1024) bytes. A terabyte (TB) is 1,024 GB; 1 TB is about the same amount of information as all of the books in a large library, or roughly 1,610 CDs worth of data. A petabyte (PB) is 1,024 TB. 1 PB of data, if written on DVDs, would create roughly 223,100 DVDs, i.e., a stack about 878 feet tall, or a stack of CDs a mile high. Indiana University is now building storage systems capable of holding petabytes of data. An exabyte (EB) is 1,024 PB. A zettabyte (ZB) is 1,024 EB. Finally, a yottabyte (YB) is 1,024 ZB.

Many hard drive manufacturers use a decimal number system to define amounts of storage space. As a result, 1 MB is defined as one million bytes, 1 GB is defined as one billion bytes, and so on. Since your computer uses a binary system as mentioned above, you may notice a discrepancy between your hard drive's published capacity and the capacity acknowledged by your computer. For example, a hard drive that is said to contain 10 GB of storage space using a decimal system is actually capable of storing 10,000,000,000 bytes. However, in a binary system, 10 GB is 10,737,418,240 bytes. As a result, instead of acknowledging 10 GB, your computer will acknowledge 9.31 GB. This is not a malfunction but a matter of different definitions.
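The arithmetic behind that discrepancy, as a quick Python check:

```python
# "10 GB" as a drive manufacturer counts it (decimal) vs. as the OS counts it (binary).
decimal_bytes = 10 * 10 ** 9            # 10,000,000,000 bytes
binary_gb = decimal_bytes / 2 ** 30     # same bytes, measured in binary GB
print(decimal_bytes)
print(round(binary_gb, 2))              # the ~9.31 GB your computer reports
```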

We count in base 10 by powers of 10: 10^0 = 1, 10^1 = 10, 10^2 = 100, 10^3 = 1,000, and so on.

Computers count by base 2: 2^0 = 1, 2^1 = 2, 2^2 = 4, ..., 2^10 = 1,024, 2^20 = 1,048,576, and so on.

So in computer jargon, the following units are used:

- 1 kilobyte (KB) = 2^10 = 1,024 bytes
- 1 megabyte (MB) = 2^20 bytes = 1,024 KB
- 1 gigabyte (GB) = 2^30 bytes = 1,024 MB
- 1 terabyte (TB) = 2^40 bytes = 1,024 GB
- 1 petabyte (PB) = 2^50 bytes = 1,024 TB
- 1 exabyte (EB) = 2^60 bytes = 1,024 PB

Note: The names and abbreviations for numbers of bytes are easily confused with the notations for bits. The abbreviations for numbers of bits use a lower-case "b" instead of an upper-case "B". Since one byte is made up of eight bits, this difference can be significant. For example, if a broadband Internet connection is advertised with a download speed of 3.0 Mbps, its speed is 3.0 megabits per second, or 0.375 megabytes per second (which would be abbreviated as 0.375 MBps). Bits and bit rates (bits over time, as in bits per second [bps]) are most commonly used to describe connection speeds, so pay particular attention when comparing Internet connection providers and services.
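The megabits-to-megabytes arithmetic, as a one-line check:

```python
mbps = 3.0            # advertised speed in megabits per second
mBps = mbps / 8       # one byte is 8 bits
print(mBps)           # megabytes per second
```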

This document was developed with support from National Science Foundation ( NSF ) grants 1053575 and 1548562 . Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.

This is document ackw in the Knowledge Base. Last modified on 2018-01-18 10:52:24 .

## Chapter 2. Binary and Number Representation

## 1 Binary — the basis of computing

## 1 . 1 Binary theory

## 1 . 1 . 1 Introduction

Binary is a base-2 number system that uses two mutually exclusive states to represent information. A binary number is made up of elements called bits where each bit can be in one of the two possible states. Generally, we represent them with the numerals 1 and 0 . We also talk about them being true and false. Electrically, the two states might be represented by high and low voltages or some form of switch turned on or off.

We build binary numbers the same way we build numbers in our traditional base 10 system. However, instead of a one's column, a 10's column, a 100's column (and so on) we have a one's column, a two's column, a four's column, an eight's column, and so on.

For example, to represent the number 203 in base 10, we know we place a 3 in the 1's column, a 0 in the 10's column and a 2 in the 100's column. Expressed with exponents, that is 2 × 10^2 + 0 × 10^1 + 3 × 10^0 = 200 + 0 + 3 = 203.

To represent the same thing in binary, we place a 1 in the 128's, 64's, 8's, 2's and 1's columns. That equates to 2^7 + 2^6 + 2^3 + 2^1 + 2^0 = 128 + 64 + 8 + 2 + 1 = 203.
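The same sum, checked in Python:

```python
# 11001011 in binary equals 203: sum the power of two under each 1 bit.
total = 2**7 + 2**6 + 2**3 + 2**1 + 2**0
print(total)
print(int("11001011", 2))    # Python agrees with the hand conversion
```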

## 1 . 1 . 2 The basis of computing

You may be wondering how a simple number is the basis of all the amazing things a computer can do. Believe it or not, it is! The processor in your computer has a complex but ultimately limited set of instructions it can perform on values such as addition, multiplication, etc. Essentially, each of these instructions is assigned a number so that an entire program (add this to that, multiply by that, divide by this and so on) can be represented by just a stream of numbers. For example, if the processor knows operation 2 is addition, then 252 could mean "add 5 and 2 and store the output somewhere". The reality is of course much more complicated (see Chapter 3, Computer Architecture) but, in a nutshell, this is what a computer is.

In the days of punch-cards, one could see with their own eyes the ones and zeros that make up the program stream by looking at the holes present on the card. Of course this quickly moved to being stored via the polarity of small magnetic particles (tapes, disks), to the point that today we can carry unimaginable amounts of data in our pocket.

Translating these numbers to something useful to humans is what makes a computer so useful. For example, screens are made up of millions of discrete pixels , each too small for the human eye to distinguish but combining to make a complete image. Generally each pixel has a certain red, green and blue component that makes up its display color. Of course, these values can be represented by numbers, which of course can be represented by binary! Thus any image can be broken up into millions of individual dots, each dot represented by a tuple of three values representing the red, green and blue values for the pixel. Thus given a long string of such numbers, formatted correctly, the video hardware in your computer can convert those numbers to electrical signals to turn on and off individual pixels and hence display an image.

As you read on, we will build up the entire modern computing environment from this basic building block; from the bottom-up if you will!

## 1 . 1 . 3 Bits and Bytes

As discussed above, we can essentially choose to represent anything by a number, which can be converted to binary and operated on by the computer. For example, to represent all the letters of the alphabet we would need at least enough different combinations to represent all the lower case letters, the upper case letters, numbers and punctuation, plus a few extras. Adding this up means we need probably around 80 different combinations.

If we have two bits, we can represent four possible unique combinations (00, 01, 10, 11). If we have three bits, we can represent 8 different combinations. In general, with n bits we can represent 2^n unique combinations.

8 bits gives us 2^8 = 256 unique representations, more than enough for our alphabet combinations. We call a group of 8 bits a byte. Guess how big a C char variable is? One byte.

## 1 . 1 . 3 . 1 ASCII

Given that a byte can represent any of the values 0 through 255, anyone could arbitrarily make up a mapping between characters and numbers. For example, a video card manufacturer could decide that 1 represents A , so when value 1 is sent to the video card it displays a capital 'A' on the screen. A printer manufacturer might decide for some obscure reason that 1 represented a lower-case 'z', meaning that complex conversions would be required to display and print the same thing.

To avoid this happening, the American Standard Code for Information Interchange, or ASCII, was invented. This is a 7-bit code, meaning there are 2^7 or 128 available codes.

The range of codes is divided up into two major parts; the non-printable and the printable. Printable characters are things like characters (upper and lower case), numbers and punctuation. Non-printable codes are for control, and do things like make a carriage-return, ring the terminal bell or the special NULL code which represents nothing at all.

127 unique characters is sufficient for American English, but becomes very restrictive when one wants to represent characters common in other languages, especially Asian languages which can have many thousands of unique characters.

To alleviate this, modern systems are moving away from ASCII to Unicode , which can use up to 4 bytes to represent a character, giving much more room!

## 1 . 1 . 3 . 2 Parity

ASCII, being only a 7-bit code, leaves one bit of the byte spare. This can be used to implement parity which is a simple form of error checking. Consider a computer using punch-cards for input, where a hole represents 1 and no hole represents 0. Any inadvertent covering of a hole will cause an incorrect value to be read, causing undefined behaviour.

Parity allows a simple check of the bits of a byte to ensure they were read correctly. We can implement either odd or even parity by using the extra bit as a parity bit .

In odd parity, if the number of 1's in the 7 bits of information is odd, the parity bit is set, otherwise it is not set. Even parity is the opposite; if the number of 1's is even the parity bit is set to 1.

In this way, the flipping of one bit will cause a parity error, which can be detected.
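A Python sketch of the parity rule exactly as stated above (the function is my own illustration; note the text's convention sets the bit when the count of 1s matches the parity's name):

```python
def parity_bit(bits7, odd=True):
    """Parity bit for a 7-bit string, following the rule in the text:
    odd parity sets the bit when the count of 1s is odd,
    even parity sets it when the count is even."""
    ones = bits7.count("1")
    set_bit = (ones % 2 == 1) if odd else (ones % 2 == 0)
    return "1" if set_bit else "0"

print(parity_bit("1000001"))              # ASCII 'A': two 1s (even) -> not set
print(parity_bit("1000001", odd=False))   # even parity -> set
```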


## 1 . 1 . 3 . 3 16, 32 and 64 bit computers

Many numbers do not fit into a single byte; hopefully your bank balance in dollars needs more range than can fit into one byte! Almost all general-purpose architectures are at least 32-bit computers. This means that their internal registers are 32 bits (or 4 bytes) wide, and that operations generally work on 32-bit values. We refer to 4 bytes as a word; this is analogous to language, where letters (bits) make up words in a sentence, except in computing every word has the same size! The size of a C int variable is typically 32 bits. Modern architectures are 64 bits, which doubles the size the processor works with to 8 bytes.

## 1 . 1 . 3 . 4 Kilo, Mega and Giga Bytes

Computers deal with a lot of bytes; that's what makes them so powerful! We need a way to talk about large numbers of bytes, and a natural way is to use the "International System of Units" (SI) prefixes as used in most other scientific areas. So, for example, kilo refers to 10^3 or 1000 units, as a kilogram has 1000 grams.

1000 is a nice round number in base 10, but in binary it is 1111101000, which is not a particularly "round" number. However, 1024 (or 2^10) is a round number (10000000000) and happens to be quite close to the base 10 meaning of "kilo" (1000 as opposed to 1024). Thus 1024 bytes naturally became known as a kilobyte. The next SI unit is "mega" for 10^6, and the prefixes continue upwards by factors of 10^3 (corresponding to the usual grouping of three digits when writing large numbers). As it happens, 2^20 is again close to the SI base 10 definition for mega: 1,048,576 as opposed to 1,000,000. Increasing the base 2 units by powers of ten remains functionally close to the SI base 10 values, although each increasing factor diverges slightly further from the base SI meaning. Thus the SI base 10 prefixes are "close enough" and have become commonly used for base 2 values.

It can be very useful to commit the base 2 factors to memory as an aid to quickly correlate the relationship between number-of-bits and "human" sizes. For example, we can quickly calculate that a 32-bit computer can address up to four gigabytes of memory by noting that 2^32 can be recombined as 2^(2 + 30), or 2^2 × 2^30, which is just 4 × 2^30, where we know 2^30 is a gigabyte. A 64-bit value could similarly address up to 16 exabytes (2^4 × 2^60); you might be interested in working out just how big a number this is. To get a feel for how big that number is, calculate how long it would take to count to 2^64 if you incremented once per second.
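The 2^32 recombination, checked in Python:

```python
# 2^32 = 2^2 * 2^30 = 4 * (one gigabyte), so 32 bits address 4 GB.
print(2 ** 32 == 4 * 2 ** 30)   # the factors recombine exactly
print(2 ** 32)                  # total addressable bytes
```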

## 1 . 1 . 3 . 5 Kilo, Mega and Giga Bits

Apart from the confusion related to the overloading of SI units between binary and base 10, capacities will often be quoted in terms of bits rather than bytes. Generally this happens when talking about networking or storage devices; you may have noticed that your ADSL connection is described as something like 1500 kilobits/second. The calculation is simple: multiply by 1000 (for the kilo), divide by 8 to get bytes, and then divide by 1024 to get kilobytes (so 1500 kilobits/s ≈ 183 kilobytes per second).

The SI standardisation body has recognised these dual uses and has specified unique prefixes for binary usage. Under the standard, 1024 bytes is a kibibyte, short for kilo binary byte (shortened to KiB). The other units have similar binary forms (mebibyte, MiB, for example). Tradition largely prevents use of these terms, but you may see them in some literature.

## 1 . 1 . 3 . 6 Conversion

The easiest way to convert between bases is to use a computer, after all, that's what they're good at! However, it is often useful to know how to do conversions by hand.

The easiest method to convert between bases is repeated division. To convert, repeatedly divide the quotient by the base until the quotient is zero, making note of the remainders at each step. Then, write the remainders in reverse, starting at the bottom and appending to the right each time. An example should illustrate; since we are converting to binary, we use a base of 2.

- 203 ÷ 2 = 101, remainder 1
- 101 ÷ 2 = 50, remainder 1
- 50 ÷ 2 = 25, remainder 0
- 25 ÷ 2 = 12, remainder 1
- 12 ÷ 2 = 6, remainder 0
- 6 ÷ 2 = 3, remainder 0
- 3 ÷ 2 = 1, remainder 1
- 1 ÷ 2 = 0, remainder 1

Reading from the bottom and appending to the right each time gives 11001011, which we saw from the previous example was 203.
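The repeated-division algorithm is only a few lines in Python (a sketch; digits above 9 would need letters, so this version assumes bases up to 10):

```python
def to_base(n, base=2):
    """Convert n by repeated division, collecting remainders in reverse."""
    digits = []
    while n > 0:
        n, r = divmod(n, base)   # quotient carries on; remainder is a digit
        digits.append(str(r))
    return "".join(reversed(digits)) or "0"

print(to_base(203))      # matches the worked example above
print(to_base(45))       # and the earlier 45 = 101101 example
```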

## 1 . 1 . 4 Boolean Operations

George Boole was a mathematician who discovered a whole area of mathematics called Boolean Algebra . Whilst he made his discoveries in the mid 1800's, his mathematics are the fundamentals of all computer science. Boolean algebra is a wide ranging topic, we present here only the bare minimum to get you started.

Boolean operations simply take a particular input and produce a particular output following a rule. For example, the simplest boolean operation, not, simply inverts the value of the input operand. Other operations usually take two inputs and produce a single output.

The fundamental Boolean operations used in computer science are easy to remember and listed below. We represent them below with truth tables ; they simply show all possible inputs and outputs. The term true simply reflects 1 in binary.

## 1 . 1 . 4 . 1 Not

Usually represented by !, not simply inverts the value, so 0 becomes 1 and 1 becomes 0.

| Input | Output |
| --- | --- |
| 0 | 1 |
| 1 | 0 |

## 1 . 1 . 4 . 2 And

To remember how the and operation works, think of it as "if one input and the other are true, the result is true".

| Input 1 | Input 2 | Output |
| --- | --- | --- |
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |

## 1 . 1 . 4 . 3 Or

To remember how the or operation works, think of it as "if one input or the other input is true, the result is true".

| Input 1 | Input 2 | Output |
| --- | --- | --- |
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 1 |

## 1 . 1 . 4 . 4 Exclusive Or (xor)

Exclusive or, written as xor, is a special case of or where the output is true if one, and only one, of the inputs is true. This operation can surprisingly do many interesting tricks, but you will not see a lot of it in the kernel.

| Input 1 | Input 2 | Output |
| --- | --- | --- |
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
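A Python sketch that enumerates all four operations over every input pair (with 1 as true and 0 as false):

```python
# Truth tables for and, or and xor, generated over every input pair.
rows = []
for a in (0, 1):
    for b in (0, 1):
        rows.append((a, b, a & b, a | b, a ^ b))
        print(a, b, "and:", a & b, "or:", a | b, "xor:", a ^ b)
print("not 0 =", 0 ^ 1, " not 1 =", 1 ^ 1)   # xor with 1 inverts a single bit
```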

## 1 . 1 . 5 How computers use boolean operations

Believe it or not, essentially everything your computer does comes back to the above operations. For example, the half adder is a type of circuit made up from boolean operations that can add bits together (it is called a half adder because it does not handle carry bits). Put more half adders together, and you will start to build something that can add together long binary numbers. Add some external memory, and you have a computer.

Electronically, the boolean operations are implemented in gates made by transistors . This is why you might have heard about transistor counts and things like Moore's Law. The more transistors, the more gates, the more things you can add together. To create the modern computer, there are an awful lot of gates, and an awful lot of transistors. Some of the latest Itanium processors have around 460 million transistors.

## 1 . 1 . 6 Working with binary in C

In C we have a direct interface to all of the above operations. The following table describes the operators

We use these operations on variables to modify the bits within the variable. Before we see examples of this, first we must divert to describe hexadecimal notation.

## 1 . 2 Hexadecimal

Hexadecimal refers to a base 16 number system. We use this in computer science for only one reason, it makes it easy for humans to think about binary numbers. Computers only ever deal in binary and hexadecimal is simply a shortcut for us humans trying to work with the computer.

So why base 16? Well, the most natural choice is base 10, since we are used to thinking in base 10 from our every day number system. But base 10 does not work well with binary -- to represent 10 different elements in binary, we need four bits. Four bits, however, gives us sixteen possible combinations. So we can either take the very tricky road of trying to convert between base 10 and binary, or take the easy road and make up a base 16 number system -- hexadecimal!

Hexadecimal uses the standard base 10 numerals, but adds A B C D E F which refer to 10 11 12 13 14 15 (n.b. we start from zero).

Traditionally, any time you see a number prefixed by 0x this will denote a hexadecimal number.

As mentioned, to represent 16 different patterns in binary, we would need exactly four bits. Therefore, each hexadecimal numeral represents exactly four bits. You should consider it an exercise to learn the following table off by heart.

Of course there is no reason not to continue the pattern (say, assign G to the value 16), but 16 values is an excellent trade off between the vagaries of human memory and the number of bits used by a computer (occasionally you will also see base 8 used, for example for file permissions under UNIX). We simply represent larger numbers of bits with more numerals. For example, a sixteen bit variable can be represented by 0xAB12 , and to find it in binary simply take each individual numeral, convert it as per the table and join them all together (so 0xAB12 ends up as the 16-bit binary number 1010101100010010 ). We can use the reverse to convert from binary back to hexadecimal.

We can also use the same repeated division scheme to change the base of a number. For example, to find 203 in hexadecimal

Hence 203 in hexadecimal is 0xCB .

## 1 . 3 Practical Implications

1 . 3 . 1 use of binary in code.

Whilst binary is the underlying language of every computer, it is entirely practical to program a computer in high level languages without knowing the first thing about it. However, for the low level code we are interested in a few fundamental binary principles are used repeatedly.

## 1 . 3 . 2 Masking and Flags

1 . 3 . 2 . 1 masking.

In low level code, it is often important to keep your structures and variables as space efficient as possible. In some cases, this can involve effectively packing two (generally related) variables into one.

Remember each bit represents two states, so if we know a variable only has, say, 16 possible states it can be represented by 4 bits (i.e. 2 4 =16 unique values). But the smallest type we can declare in C is 8 bits (a char ), so we can either waste four bits, or find some way to use those left over bits.

We can easily do this by the process of masking . This uses the rules of logical operations to extract values.

The process is illustrated in the figure below. We can keep two separate 4-bit values "inside" a single 8-bit character. We consider the upper four-bits as one value (blue) and the lower 4-bits (red) as another. To extract the lower four bits, we set our mask to have the lower-4 bits set to 1 ( 0x0F ). Since the logical and operation will only set the bit if both bits are 1 , those bits of the mask set to 0 effectively hide the bits we are not interested in.

To get the top (blue) four bits, we would invert the mask; in other words, set the top 4 bits to 1 and the lower 4-bits to 0 . You will note this gives a result of 1010 0000 (or, in hexadecimal 0xA0 ) when really we want to consider this as a unique 4-bit value 1010 ( 0x0A ). To get the bits into the right position we use the right shift operation 4 times, giving a final value of 0000 1010 .

Setting the bits requires the logical or operation. However, rather than using 1 's as the mask, we use 0 's. You should draw a diagram similar to the above figure and work through setting bits with the logical or operation.

## 1 . 3 . 2 . 2 Flags

Often a program will have a large number of variables that only exist as flags to some condition. For example, a state machine is an algorithm that transitions through a number of different states but may only be in one at a time. Say it has 8 different states; we could easily declare 8 different variables, one for each state. But in many cases it is better to declare one 8 bit variable and assign each bit to flag flag a particular state.

Flags are a special case of masking, but each bit represents a particular boolean state (on or off). An n bit variable can hold n different flags. See the code example below for a typical example of using flags -- you will see variations on this basic code very often.


## Binary and Number Representation


- Patrick McClanahan
- San Joaquin Delta College


## Chapter 2. Binary and Number Representation

## Introduction

Binary is a base-2 number system that uses two mutually exclusive states to represent information. A binary number is made up of elements called bits where each bit can be in one of the two possible states. Generally, we represent them with the numerals 1 and 0 . We also talk about them being true and false. Electrically, the two states might be represented by high and low voltages or some form of switch turned on or off.

We build binary numbers the same way we build numbers in our traditional base 10 system. However, instead of a one's column, a 10's column, a 100's column (and so on) we have a one's column, a two's column, a four's column, an eight's column, and so on, as illustrated below.

For example, to represent the number 203 in base 10, we know we place a 3 in the 1's column, a 0 in the 10's column and a 2 in the 100's column. This is expressed with exponents in the table below.

| 10^2 | 10^1 | 10^0 |
|------|------|------|
| 2    | 0    | 3    |

Or, in other words, 2 × 10^2 + 3 × 10^0 = 200 + 3 = 203. To represent the same thing in binary, we would have the following table.

| 2^7 | 2^6 | 2^5 | 2^4 | 2^3 | 2^2 | 2^1 | 2^0 |
|-----|-----|-----|-----|-----|-----|-----|-----|
| 1   | 1   | 0   | 0   | 1   | 0   | 1   | 1   |

That equates to 2^7 + 2^6 + 2^3 + 2^1 + 2^0 = 128 + 64 + 8 + 2 + 1 = 203.

## The basis of computing

You may be wondering how a simple number is the basis of all the amazing things a computer can do. Believe it or not, it is! The processor in your computer has a complex but ultimately limited set of instructions it can perform on values such as addition, multiplication, etc. Essentially, each of these instructions is assigned a number so that an entire program (add this to that, multiply by that, divide by this and so on) can be represented by just a stream of numbers. For example, if the processor knows operation 2 is addition, then 252 could mean "add 5 and 2 and store the output somewhere". The reality is of course much more complicated (see Chapter 3, Computer Architecture) but, in a nutshell, this is what a computer is.

In the days of punch-cards, one could see the ones and zeros that make up the program stream with the naked eye by looking at the holes present on the card. Of course storage moved rather quickly to the polarity of small magnetic particles (tapes, disks), to the point today that we can carry unimaginable amounts of data in our pocket.

Translating these numbers into something useful to humans is what makes a computer so useful. For example, screens are made up of millions of discrete pixels, each too small for the human eye to distinguish but combining to make a complete image. Generally each pixel has a certain red, green and blue component that makes up its display color. Of course, these values can be represented by numbers, which of course can be represented in binary! Thus any image can be broken up into millions of individual dots, each dot represented by a tuple of three values for the red, green and blue components of the pixel. Given a long string of such numbers, formatted correctly, the video hardware in your computer can convert those numbers to electrical signals to turn on and off individual pixels and hence display an image.

As you read on, we will build up the entire modern computing environment from this basic building block; from the bottom-up if you will!

## Bits and Bytes

As discussed above, we can essentially choose to represent anything by a number, which can be converted to binary and operated on by the computer. For example, to represent all the letters of the alphabet we would need at least enough different combinations to represent all the lower case letters, the upper case letters, numbers and punctuation, plus a few extras. Adding this up means we need probably around 80 different combinations.

If we have two bits, we can represent four possible unique combinations (00 01 10 11). If we have three bits, we can represent 8 different combinations. In general, with n bits we can represent 2^n unique combinations.

8 bits gives us 2^8 = 256 unique representations, more than enough for our alphabet combinations. We call a group of 8 bits a byte. Guess how big a C char variable is? One byte.

Given that a byte can represent any of the values 0 through 255, anyone could arbitrarily make up a mapping between characters and numbers. For example, a video card manufacturer could decide that 1 represents A , so when value 1 is sent to the video card it displays a capital 'A' on the screen. A printer manufacturer might decide for some obscure reason that 1 represented a lower-case 'z', meaning that complex conversions would be required to display and print the same thing.

To avoid this happening, the American Standard Code for Information Interchange or ASCII was invented. This is a 7-bit code, meaning there are 2^7 or 128 available codes.

The range of codes is divided up into two major parts: the non-printable and the printable. Printable characters are things like letters (upper and lower case), numbers and punctuation. Non-printable codes are for control, and do things like make a carriage-return, ring the terminal bell or the special NULL code which represents nothing at all.

128 unique codes is sufficient for American English, but becomes very restrictive when one wants to represent characters common in other languages, especially Asian languages, which can have many thousands of unique characters.

To alleviate this, modern systems are moving away from ASCII to Unicode , which can use up to 4 bytes to represent a character, giving much more room!

ASCII, being only a 7-bit code, leaves one bit of the byte spare. This can be used to implement parity which is a simple form of error checking. Consider a computer using punch-cards for input, where a hole represents 1 and no hole represents 0. Any inadvertent covering of a hole will cause an incorrect value to be read, causing undefined behaviour.

Parity allows a simple check of the bits of a byte to ensure they were read correctly. We can implement either odd or even parity by using the extra bit as a parity bit .

In odd parity, the parity bit is chosen so that the total number of 1's, including the parity bit itself, is odd. Even parity is the opposite: the parity bit makes the total number of 1's even.

In this way, the flipping of one bit will cause a parity error, which can be detected.

XXX more about error correcting

## 16, 32 and 64 bit computers

Many numbers do not fit into a single byte; hopefully your bank balance in dollars needs more range than can fit into one byte! Modern architectures are at least 32-bit computers, meaning they work with 4 bytes at a time when processing and reading or writing to memory. We refer to 4 bytes as a word; this is analogous to language, where letters (bits) make up words in a sentence, except in computing every word has the same size! The size of a C int variable is typically 32 bits. Most modern architectures are 64-bit, which doubles the size the processor works with to 8 bytes.

## Kilo, Mega and Giga Bytes

Computers deal with a lot of bytes; that's what makes them so powerful! We need a way to talk about large numbers of bytes, and a natural way is to use the "International System of Units" (SI) prefixes as used in most other scientific areas. So for example, kilo refers to 10^3 or 1000 units, as in a kilogram has 1000 grams.

1000 is a nice round number in base 10, but in binary it is 1111101000, which is not a particularly "round" number. However, 1024 (or 2^10) is a round number in binary (10000000000) and happens to be quite close to the base 10 meaning of "kilo" (1024 as opposed to 1000). Thus 1024 bytes naturally became known as a kilobyte. The next SI unit is "mega" for 10^6, and the prefixes continue upwards by factors of 10^3 (corresponding to the usual grouping of three digits when writing large numbers). As it happens, 2^20 is again close to the SI base 10 definition for mega (1048576 as opposed to 1000000). Increasing the base 2 units by powers of ten remains functionally close to the SI base 10 values, although each increasing factor diverges slightly further from the base SI meaning. Thus the SI base 10 units are "close enough" and have become commonly used for base 2 values.

SI units compared in base 2 and base 10

| Name          | Base 2 factor     | Base 10 factor |
|---------------|-------------------|----------------|
| kilobyte (KB) | 2^10 = 1,024      | 10^3 = 1,000   |
| megabyte (MB) | 2^20 = 1,048,576  | 10^6           |
| gigabyte (GB) | 2^30              | 10^9           |
| terabyte (TB) | 2^40              | 10^12          |
| petabyte (PB) | 2^50              | 10^15          |
| exabyte (EB)  | 2^60              | 10^18          |

It can be very useful to commit the base 2 factors to memory as an aid to quickly correlate the relationship between number-of-bits and "human" sizes. For example, we can quickly calculate that a 32 bit computer can address up to four gigabytes of memory by noting that 2^32 = 2^2 × 2^30, i.e. 4 × 1 gigabyte. A 64-bit value could similarly address up to 16 exabytes (2^64 = 2^4 × 2^60); you might be interested in working out just how big a number this is. To get a feel for how big that number is, calculate how long it would take to count to 2^64 if you incremented once per second.

## Kilo, Mega and Giga Bits

Apart from the confusion related to the overloading of SI units between binary and base 10, capacities will often be quoted in terms of bits rather than bytes. Generally this happens when talking about networking or storage devices; you may have noticed that your ADSL connection is described as something like 1500 kilobits/second. The calculation is simple: multiply by 1000 (for the kilo), divide by 8 to get bytes and then by 1024 to get kilobytes (so 1500 kilobits/s ≈ 183 kilobytes per second).

The SI standardisation body has recognised these dual uses and has specified unique prefixes for binary usage. Under the standard, 1024 bytes is a kibibyte, short for kilo binary byte (shortened to KiB). The other prefixes have a similar form (mebibyte, MiB, for example). Tradition largely prevents use of these terms, but you may see them in some literature.

## Conversion

The easiest way to convert between bases is to use a computer; after all, that's what they're good at! However, it is often useful to know how to do conversions by hand.

The easiest method to convert between bases is repeated division. To convert, repeatedly divide the quotient by the base until the quotient is zero, making note of the remainders at each step. Then, write the remainders in reverse, starting at the bottom and appending to the right each time. An example should illustrate; since we are converting to binary we use a base of 2.

203 ÷ 2 = 101, remainder 1
101 ÷ 2 = 50, remainder 1
50 ÷ 2 = 25, remainder 0
25 ÷ 2 = 12, remainder 1
12 ÷ 2 = 6, remainder 0
6 ÷ 2 = 3, remainder 0
3 ÷ 2 = 1, remainder 1
1 ÷ 2 = 0, remainder 1

Reading from the bottom and appending to the right each time gives 11001011 , which we saw from the previous example was 203.

## Boolean Operations

George Boole was a mathematician who discovered a whole area of mathematics called Boolean Algebra. Whilst he made his discoveries in the mid 1800's, his mathematics are the fundamentals of all computer science. Boolean algebra is a wide ranging topic; we present here only the bare minimum to get you started.

Boolean operations simply take a particular input and produce a particular output following a rule. For example, the simplest boolean operation, not, simply inverts the value of the input operand. Other operations usually take two inputs, and produce a single output.

The fundamental Boolean operations used in computer science are easy to remember and listed below. We represent them below with truth tables ; they simply show all possible inputs and outputs. The term true simply reflects 1 in binary.

## Not

Usually represented by !, not simply inverts the value, so 0 becomes 1 and 1 becomes 0.

| a | !a |
|---|----|
| 0 | 1  |
| 1 | 0  |

## And

To remember how the and operation works, think of it as "if one input and the other input are true, the result is true".

| a | b | a and b |
|---|---|---------|
| 0 | 0 | 0       |
| 0 | 1 | 0       |
| 1 | 0 | 0       |
| 1 | 1 | 1       |

## Or

To remember how the or operation works, think of it as "if one input or the other input is true, the result is true".

| a | b | a or b |
|---|---|--------|
| 0 | 0 | 0      |
| 0 | 1 | 1      |
| 1 | 0 | 1      |
| 1 | 1 | 1      |

## Exclusive Or (xor)

Exclusive or, written as xor, is a special case of or where the output is true if one, and only one, of the inputs is true. This operation can surprisingly do many interesting tricks, but you will not see a lot of it in the kernel.

| a | b | a xor b |
|---|---|---------|
| 0 | 0 | 0       |
| 0 | 1 | 1       |
| 1 | 0 | 1       |
| 1 | 1 | 0       |

## How computers use boolean operations

Believe it or not, essentially everything your computer does comes back to the above operations. For example, the half adder is a type of circuit made up from boolean operations that can add bits together (it is called a half adder because it does not handle carry bits). Put more half adders together, and you will start to build something that can add together long binary numbers. Add some external memory, and you have a computer.

Electronically, the boolean operations are implemented in gates made by transistors . This is why you might have heard about transistor counts and things like Moore's Law. The more transistors, the more gates, the more things you can add together. To create the modern computer, there are an awful lot of gates, and an awful lot of transistors. Some of the latest Itanium processors have around 460 million transistors.

## Working with binary in C

In C we have a direct interface to all of the above operations. The following table describes the operators.

| Operation   | C operator               |
|-------------|--------------------------|
| not         | ~ (bitwise), ! (logical) |
| and         | &                        |
| or          | \|                       |
| xor         | ^                        |
| shift left  | <<                       |
| shift right | >>                       |

We use these operations on variables to modify the bits within the variable. Before we see examples of this, first we must divert to describe hexadecimal notation.

## Hexadecimal

Hexadecimal refers to a base 16 number system. We use this in computer science for only one reason: it makes it easy for humans to think about binary numbers. Computers only ever deal in binary, and hexadecimal is simply a shortcut for us humans trying to work with the computer.

So why base 16? Well, the most natural choice is base 10, since we are used to thinking in base 10 from our everyday number system. But base 10 does not work well with binary -- to represent 10 different elements in binary, we need four bits. Four bits, however, gives us sixteen possible combinations. So we can either take the very tricky road of trying to convert between base 10 and binary, or take the easy road and make up a base 16 number system -- hexadecimal!

Hexadecimal uses the standard base 10 numerals, but adds A, B, C, D, E and F, which refer to 10, 11, 12, 13, 14 and 15 (n.b. we start from zero).

Traditionally, any time you see a number prefixed by 0x this will denote a hexadecimal number.

As mentioned, to represent 16 different patterns in binary, we would need exactly four bits. Therefore, each hexadecimal numeral represents exactly four bits. You should consider it an exercise to learn the following table off by heart.

| Hex | Binary | Decimal |
|-----|--------|---------|
| 0   | 0000   | 0       |
| 1   | 0001   | 1       |
| 2   | 0010   | 2       |
| 3   | 0011   | 3       |
| 4   | 0100   | 4       |
| 5   | 0101   | 5       |
| 6   | 0110   | 6       |
| 7   | 0111   | 7       |
| 8   | 1000   | 8       |
| 9   | 1001   | 9       |
| A   | 1010   | 10      |
| B   | 1011   | 11      |
| C   | 1100   | 12      |
| D   | 1101   | 13      |
| E   | 1110   | 14      |
| F   | 1111   | 15      |

Of course there is no reason not to continue the pattern (say, assign G to the value 16), but 16 values is an excellent trade off between the vagaries of human memory and the number of bits used by a computer (occasionally you will also see base 8 used, for example for file permissions under UNIX). We simply represent larger numbers of bits with more numerals. For example, a sixteen bit variable can be represented by 0xAB12, and to find it in binary simply take each individual numeral, convert it as per the table and join them all together (so 0xAB12 ends up as the 16-bit binary number 1010101100010010 ). We can use the reverse to convert from binary back to hexadecimal.

We can also use the same repeated division scheme to change the base of a number. For example, to find 203 in hexadecimal:

203 ÷ 16 = 12, remainder 11 (B)
12 ÷ 16 = 0, remainder 12 (C)

Hence 203 in hexadecimal is 0xCB .

## Practical Implications

## Use of binary in code

Whilst binary is the underlying language of every computer, it is entirely practical to program a computer in high level languages without knowing the first thing about it. However, for the low level code we are interested in, a few fundamental binary principles are used repeatedly.

## Masking and Flags

In low level code, it is often important to keep your structures and variables as space efficient as possible. In some cases, this can involve effectively packing two (generally related) variables into one.

Remember each bit represents two states, so if we know a variable only has, say, 16 possible states it can be represented by 4 bits (i.e. 2^4 = 16 unique values). But the smallest type we can declare in C is 8 bits (a char), so we can either waste four bits, or find some way to use those left over bits.

We can easily do this by the process of masking . This uses the rules of logical operations to extract values.

The process is illustrated in the figure below. We can keep two separate 4-bit values "inside" a single 8-bit character. We consider the upper four bits as one value (blue) and the lower four bits (red) as another. To extract the lower four bits, we set our mask to have the lower four bits set to 1 (0x0F). Since the logical and operation will only set a bit if both input bits are 1, those bits of the mask set to 0 effectively hide the bits we are not interested in.

To get the top (blue) four bits, we would invert the mask; in other words, set the top four bits to 1 and the lower four bits to 0 (0xF0). You will note this gives a result of 1010 0000 (or, in hexadecimal, 0xA0) when really we want to consider this as a unique 4-bit value 1010 (0x0A). To get the bits into the right position, we shift the value right by four bits, giving a final value of 0000 1010.

Setting bits requires the logical or operation. This time the mask works the other way around: a 1 in the mask sets the corresponding bit in the result, while a 0 leaves it unchanged. You should draw a diagram similar to the above figure and work through setting bits with the logical or operation.

## Flags

Often a program will have a large number of variables that only exist as flags to some condition. For example, a state machine is an algorithm that transitions through a number of different states but may only be in one at a time. Say it has 8 different states; we could easily declare 8 different variables, one for each state. But in many cases it is better to declare one 8-bit variable and assign each bit to flag a particular state.

Flags are a special case of masking, where each bit represents a particular boolean state (on or off). An n-bit variable can hold n different flags. See the code example below for a typical example of using flags -- you will see variations on this basic code very often.
