Number Systems and Computer Science

Compare the various formats of representing numbers in computers. Discuss briefly with the help of examples. Mention which is the most popular number representation and why you think so.

A number system is a way to represent numbers. We are used to using the decimal number system (base-10). Other common number systems include hexadecimal (base-16), octal (base-8), and binary (base-2). As far as computer systems are concerned, number systems can be classified into four categories:

  • Decimal Number System
  • Binary Number System
  • Octal Number System
  • Hexadecimal Number System

Decimal Number System

The term decimal is derived from the Latin prefix 'deci', which means ten.

The decimal number system has ten digits, ranging from 0 to 9. Because it has ten digits, it is also called the base-ten or denary number system. Strictly, a decimal number should be written with a subscript 10, e.g. X10. But since this is the most widely used number system in the world, the subscript is usually understood and omitted in written work.


However, when several number systems are considered together, the subscript must always be written so as to differentiate them.

Binary Number System

It uses two digits, 0 and 1, to represent numbers. Unlike the decimal system, where place values go up in factors of ten, in the binary system the place values increase by factors of 2. Binary numbers are written as X2. Consider a binary number such as 10112: the rightmost digit has a place value of 1×2⁰, while the leftmost has a place value of 1×2³.
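The place-value expansion above can be checked with a short Python sketch; `int(s, 2)` is Python's built-in base conversion.

```python
# Expand 1011 in base 2 digit by digit: 1*2^3 + 0*2^2 + 1*2^1 + 1*2^0
digits = "1011"
value = sum(int(d) * 2 ** i for i, d in enumerate(reversed(digits)))
print(value)           # 11
print(int("1011", 2))  # 11, the same result via the built-in conversion
```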

Octal Number System

The octal number system consists of eight digits, ranging from 0 to 7. The place value of octal numbers goes up in factors of eight from right to left.
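As a quick illustration (the number 127 here is an arbitrary choice), the octal place values work out as follows in Python:

```python
# 127 in base 8: 1*8^2 + 2*8^1 + 7*8^0 = 64 + 16 + 7 = 87
print(int("127", 8))  # 87
print(oct(87))        # 0o127, converting back to octal notation
```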

Hexadecimal Number System

This is a base-16 number system that consists of sixteen digits: 0-9 and the letters A-F, where A is equivalent to 10, B to 11, and so on up to F, which is equivalent to 15 in the base-ten system. The place value of hexadecimal numbers goes up in factors of sixteen.

A hexadecimal number can be denoted using 16 as a subscript or a capital letter H to the right of the number. For example, 94B can be written as 94B16 or 94BH.
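The 94B example above can be verified in Python, where `int(s, 16)` applies the base-16 place values:

```python
# 94B in base 16: 9*16^2 + 4*16^1 + 11*16^0 = 2304 + 64 + 11 = 2379
print(int("94B", 16))     # 2379
print(format(2379, "X"))  # 94B, converting back to hexadecimal
```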

Most Popular Number Representation

In my view, the most popular number representation among the above four is the decimal number system. This is because it is understood by most of us, so unlike the other number systems it does not require us to write it with its subscript 10, e.g. X10.

What do you mean by precision of representing a floating point number in the IEEE standard? Explain with small example

The IEEE standardized the computer representation for binary floating-point numbers in IEEE 754 (a.k.a. IEC 60559) in 1985. A new version, IEEE 754-2008, was published in August 2008, following a seven-year revision process, chaired by Dan Zuras and edited by Mike Cowlishaw. It replaced both IEEE 754-1985 (binary floating-point arithmetic) and IEEE 854-1987, the Standard for Radix-Independent Floating-Point Arithmetic. The current version, IEEE 754-2019, published in July 2019, is derived from and replaces IEEE 754-2008, following a revision process started in September 2015, chaired by David G. Hough and edited by Mike Cowlishaw. It incorporates mainly clarifications and errata, but also includes some new recommended operations. The standard provides for many closely related formats; three of these are especially widely used in computer hardware and languages:

Single Precision

Single precision is usually used to represent the "float" type in the C language family (though this is not guaranteed). This is a binary format that occupies 32 bits (4 bytes), and its significand has a precision of 24 bits (about 7 decimal digits).

Double Precision

Double precision is usually used to represent the "double" type in the C language family (though this is not guaranteed). This is a binary format that occupies 64 bits (8 bytes), and its significand has a precision of 53 bits (about 16 decimal digits).
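Python's `float` is an IEEE 754 double on virtually all platforms, so the 53-bit significand can be inspected directly, and its limits demonstrated with the classic 0.1 + 0.2 example:

```python
import sys

print(sys.float_info.mant_dig)  # 53 significand bits
print(sys.float_info.dig)       # 15 decimal digits always preserved
print(0.1 + 0.2 == 0.3)         # False: 0.1 and 0.2 are not exact in binary
print(0.1 + 0.2)                # 0.30000000000000004
```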

Double Extended

Double extended is also called the "extended precision" format. This is a binary format that occupies at least 79 bits (80 if the hidden/implicit bit rule is not used), and its significand has a precision of at least 64 bits (about 19 decimal digits). A format satisfying the minimal requirements (64-bit precision, 15-bit exponent, thus fitting on 80 bits) is provided by the x86 architecture. In general on such processors, this format can be used with "long double" in the C language family (the C99 and C11 standards, in the "IEC 60559 floating-point arithmetic" extension, Annex F, recommend that the 80-bit extended format be provided as "long double" when available).

For example, if b = 10, p = 7 and emax = 96, then emin = −95, the significand satisfies 0 ≤ c ≤ 9,999,999, and the exponent satisfies −101 ≤ q ≤ 90. Consequently, the smallest non-zero positive number that can be represented is 1×10⁻¹⁰¹, and the largest is 9999999×10⁹⁰ (9.999999×10⁹⁶), so the full range of numbers is −9.999999×10⁹⁶ through 9.999999×10⁹⁶. The numbers −b^(1−emax) and b^(1−emax) (here, −1×10⁻⁹⁵ and 1×10⁻⁹⁵) are the smallest (in magnitude) normal numbers; non-zero numbers between these smallest numbers are called subnormal numbers.
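The 7-significant-digit behaviour of this b = 10, p = 7 format can be imitated with Python's standard `decimal` module by setting the context precision to 7 (the module's exponent range is wider than the example's, so only the precision is being modelled here):

```python
from decimal import Decimal, getcontext

# Model the p = 7 precision from the example above:
# every arithmetic result is rounded to 7 significant digits.
getcontext().prec = 7
print(Decimal(1) / Decimal(3))    # 0.3333333

# The largest value of the example format, written out directly.
print(Decimal("9.999999E+96"))    # 9.999999E+96
```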

Computer Science

Applications in Modern day, Where CS/SE is heading to

Computer Science is the study of computers and computational systems. Unlike electrical and computer engineers, computer scientists deal mostly with software and software systems; this includes their theory, design, development, and application. Principal areas of study within Computer Science include artificial intelligence, computer systems and networks, security, database systems, human-computer interaction, vision and graphics, numerical analysis, programming languages, software engineering, bioinformatics, and theory of computing.

Although knowing how to program is essential to the study of computer science, it is only one element of the field. Computer scientists design and analyze algorithms to solve problems and study the performance of computer hardware and software. The problems that computer scientists encounter range from the abstract (determining what problems can be solved with computers and the complexity of the algorithms that solve them) to the tangible (designing applications that perform well on handheld devices, are easy to use, and uphold security measures). Its fields can be divided into theoretical and practical disciplines. Computational complexity theory is highly abstract, while computer graphics emphasizes real-world applications.

Difference between CS/SE/CE

Computer Engineering (CE)

It deals with designing, developing, and operating computer systems.  At its core, Computer Engineering concentrates on digital hardware devices and computers, and the software that controls them. Computer Engineering emphasizes solving problems in digital hardware and at the hardware-software interface.

Software Engineering (SE)

It deals with building and maintaining software systems. It is more software-oriented and has a greater emphasis on large software applications than Computer Engineering. It is more applied than Computer Science, placing greater emphasis on the entire software development process, from idea to final product.

Computer Science (CS)

It focuses on understanding, designing, and developing programs and computers. At its core, Computer Science concentrates on data, data transformation, and algorithms. Advanced courses present specialized programming techniques and specific application domains. The CS program is less structured than the CE and SE programs, giving students more flexibility to build depth or breadth in a variety of application domains or in the fundamentals of Computer Science.

Surfing the Web

When you type words into a search engine's web page, it gives you back the results almost instantly. (Search algorithms, parallel computing)

Playing Computer Games

  • Modern games look astonishing, with all of their cool 3-D effects, and it is all rendered in real-time as you play and constantly change the in-game environment. (Computer graphics)
  • The in-game enemies seem to be 'smart' and able to learn from your actions. (Artificial intelligence)
  • It is possible for us and dozens of other players to play online simultaneously and still have the game feel responsive most of the time. (Networking, client-server architecture)

Downloading media (legally)

  • File-sharing programs like BitTorrent can perform so much faster than simply downloading from a website. (Networking, distributed algorithms)
  • Isn't it astounding that when you download a file, it always arrives at your computer intact in pristine condition, even though it had to travel through thousands of miles of unreliable copper wires? (Reliable networking protocols, error detection and correction)
  • High-quality photos, audio, and video can be compressed so much (to 1/10 to 1/100 of the original size) without losing much quality. (Lossy compression algorithms)

Shopping online

  • You can be reasonably confident that nobody will steal your credit card number while you are shopping online. (Network security, cryptography)
  • The retailer can keep track of what items are in stock and report the results in real-time on their website. (Databases, web programming)
  • Some other applications of computer science are: using our cellphones, neurotically updating our Facebook, Instagram, and Twitter pages and browsing other people's profiles, and travelling on an airplane.

Future of Computer Science

The future of Computer Science may not be too bright.  Computers have become so pervasive a technology that I think the study of computing may soon be subsumed by other academic subjects and CS may lose its independence as an academic subject.  It wouldn't surprise me if in 20 years CS departments were to die off. Already computing has spawned several academic departments such as Information Technology, Software Engineering, and Computer Engineering, which are seldom integrated with a Computer Science department's curriculum.  Other computing subdisciplines have also recently spun off, such as scientific computing / computational science, management science, digital graphical arts, and computer gaming / virtual reality.

Updated: Apr 12, 2021

Number Systems and Computer Science. (2019, Dec 19). Retrieved from https://studymoose.com/number-systems-and-computer-science-essay
