Analysis and Design of Algorithms: A Focus on Big O Notation

Introduction

In mathematics, the logarithm is the inverse of exponentiation: the logarithm of a number is defined, relative to a base, as the exponent to which the base must be raised to produce that number. For instance, in base 10 the logarithm of 1000 is 3, since 1000 = 10 x 10 x 10 = 10^3. In general, if x = b^y, then the logarithm of x for base b is y, mathematically represented by the relationship log_b x = y.

Ideas related to logarithms were known to Arab mathematicians, and in the early seventeenth century the scientist John Napier introduced the concept of logarithms to mathematics as a way to simplify calculations, so that navigators, scientists, engineers, astronomers and others could carry out their computations more easily using logarithms and logarithmic tables.

The word algorithm itself goes back to the Arab scholar al-Khwarizmi: the English term algorithm derives from Algoritmi, the Latinized form of his name.

They also took advantage of the properties of logarithms, replacing multiplications with additions: the logarithm of a product of two numbers can be found from the identity log_b(xy) = log_b(x) + log_b(y).
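
As a small illustration, the sketch below checks this identity numerically in C++ using base-10 logarithms from the standard <cmath> header (the particular values 50 and 20 are chosen only for the example):

#include <cmath>
#include <iostream>
using namespace std;

int main() {
    double x = 50.0, y = 20.0;            // arbitrary positive numbers
    double lhs = log10(x * y);            // log10 of the product
    double rhs = log10(x) + log10(y);     // sum of the individual logarithms
    cout << lhs << " == " << rhs << endl; // both values are 3, since 50 * 20 = 1000
    return 0;
}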

In the eighteenth century, Leonhard Euler connected the concept of logarithms to that of the exponential function, broadening the notion of logarithms. The logarithmic scale also makes it possible to compress the graphic representation of quantities that span very wide ranges.

Algorithm Definition

An algorithm is a finite set of well-defined instructions for solving a problem or performing a task.

Originating from the work of 9th-century scholar Abu Jaafar Muhammad Ibn Musa Al-Khwarizmi, the term algorithm has evolved to encompass any step-by-step computational procedure. A good algorithm must have a specified input, a specified output, clarity, effectiveness, and finiteness.

Properties of Algorithms:

  1. Input: An algorithm should have 0 or more well-defined inputs.
  2. Output: At least one well-defined output must be produced by the algorithm.
  3. Definiteness: Each step of the algorithm must be clear and unambiguous.
  4. Finiteness: The algorithm must terminate after a finite number of steps.
  5. Effectiveness: All operations to be performed must be sufficiently basic that they can be done exactly and in a finite length of time.
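
For instance, the following short sketch of an algorithm for finding the largest element of an array satisfies all five properties: it has a well-defined input (a non-empty array), a well-defined output (its maximum), unambiguous steps, and it terminates after a finite number of comparisons:

#include <iostream>
#include <vector>
using namespace std;

// Input: a non-empty vector of integers.  Output: its largest element.
int findMax(const vector<int>& a) {
    int best = a[0];                        // start with the first element
    for (size_t i = 1; i < a.size(); i++)   // examine each remaining element once
        if (a[i] > best)
            best = a[i];
    return best;                            // terminates after a.size() - 1 comparisons
}

int main() {
    vector<int> data = {4, 17, 9, 2};
    cout << findMax(data) << endl;          // prints 17
    return 0;
}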

Time Complexity and Space Complexity

Time complexity is the number of operations an algorithm performs to complete its task as a function of the input size (assuming each operation takes the same amount of time). The algorithm that performs the task in the fewest operations is considered the most efficient. Input size: the total number of elements present in the input. For a given problem, we characterize the input size by an appropriate number n.

The time taken by an algorithm also depends on the computing speed of the system you are using, but we ignore such external factors and are concerned only with the number of times a particular statement is executed in relation to the size of the input. For example, if executing one statement takes 1 second, then executing n such statements takes about n seconds.
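
The small sketch below makes the counting concrete: the statement inside the loop executes exactly n times, so the work grows linearly with n regardless of how fast the machine is (the value n = 10 is arbitrary):

#include <iostream>
using namespace std;

int main() {
    int n = 10;
    long long count = 0;
    for (int i = 0; i < n; i++)  // the loop body runs once per value of i
        count++;                 // this statement executes exactly n times
    cout << count << endl;       // prints 10, i.e. n
    return 0;
}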

Space complexity of an algorithm denotes the total space used or needed by the algorithm for its working, for various input sizes. For example:

vector<int> myVec(n);        // allocates storage for n integers
for (int i = 0; i < n; i++)
    cin >> myVec[i];         // read each element into the vector

In the example above we create a vector of size n. Thus the space complexity of the above code is of order n, i.e., O(n): if n increases, the space requirement increases accordingly.

Even when you create a single variable, you need a certain amount of space to run your algorithm. All of the space used by the algorithm is collectively called the algorithm's space complexity.

NOTE: In typical programming settings you are allowed about 256 MB of memory for a given problem, so you cannot create an array with much more than 10 ^ 8 elements in total. Inside a function you cannot create an array of more than about 10 ^ 6 elements, because the stack space allotted to a function is only a few megabytes (about 4 MB). So, declare very large arrays globally instead.
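
A minimal sketch of this advice is shown below (the exact limits depend on the system or judge, and the array size here is just an illustrative figure):

#include <iostream>
using namespace std;

int bigArray[10000000];    // about 40 MB of ints in static storage: fine as a global

int main() {
    // int local[10000000]; // the same array on the function stack would likely overflow it
    bigArray[0] = 42;
    cout << bigArray[0] << endl;
    return 0;
}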

Constant time: an algorithm runs in constant time when its running time does not depend on the input size. If an algorithm performs 1 operation (or any other fixed number of operations) for an input of size N, this is constant time, because the number of operations does not change with the size of the input.

Logarithmic time: T(n) = O(log n). Since log_a n and log_b n are linked by a constant multiplier, and such a multiplier is irrelevant to the big-O classification, the standard usage for logarithmic-time algorithms is O(log n) regardless of the logarithm base that appears in the expression of T.

Linear time: T(n) = O(n). The number of operations grows in direct proportion to the input size, so doubling the input roughly doubles the work.
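
The sketch below illustrates these three growth rates with common operations; binary search stands in for a logarithmic-time routine and a single pass over the data for a linear-time one:

#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<int> a = {1, 3, 5, 7, 9, 11};

    // Constant time, O(1): indexing does the same work for any n.
    int first = a[0];

    // Logarithmic time, O(log n): binary search halves the remaining range each step.
    bool found = binary_search(a.begin(), a.end(), 7);

    // Linear time, O(n): the sum visits every element exactly once.
    long long sum = 0;
    for (int x : a) sum += x;

    cout << first << " " << found << " " << sum << endl;
    return 0;
}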

Linearithmic time: T(n) = O(n log n). The running time grows proportionally to n log n; efficient comparison-based sorting algorithms fall into this class.
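
Taking linearithmic to mean O(n log n), a typical example is comparison-based sorting; the sketch below uses std::sort, which performs O(n log n) comparisons:

#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<int> a = {9, 2, 7, 1, 5};
    sort(a.begin(), a.end());          // comparison-based sort: O(n log n) comparisons
    for (int x : a) cout << x << " ";  // prints 1 2 5 7 9
    cout << endl;
    return 0;
}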

Exponential: an algorithm is exponential time if T(n) is upper bounded by 2^poly(n), where poly(n) is some polynomial in n. More formally, an algorithm is exponential time if T(n) is bounded by O(2^(n^k)) for some constant k.
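
For instance, enumerating every subset of an n-element set takes 2^n iterations, a simple illustration of exponential growth (n = 4 here is arbitrary):

#include <iostream>
using namespace std;

int main() {
    int n = 4;
    long long subsets = 0;
    for (int mask = 0; mask < (1 << n); mask++)  // one bitmask per subset: 2^n of them
        subsets++;
    cout << subsets << endl;                     // prints 16, i.e. 2^4
    return 0;
}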

Cubic: T(n) = O(n^3). The running time grows with the cube of the input size; the classic example is the straightforward algorithm for multiplying two n x n matrices, which performs on the order of n^3 multiplications.
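
The textbook triple-loop matrix multiplication below performs roughly n * n * n multiplications, which is why it is the standard example of a cubic-time algorithm (the small 3 x 3 matrices are just for illustration):

#include <iostream>
#include <vector>
using namespace std;

int main() {
    int n = 3;
    vector<vector<int>> A(n, vector<int>(n, 1)), B(n, vector<int>(n, 2));
    vector<vector<int>> C(n, vector<int>(n, 0));

    for (int i = 0; i < n; i++)          // three nested loops over n values each:
        for (int j = 0; j < n; j++)      // about n * n * n = n^3 multiplications in total
            for (int k = 0; k < n; k++)
                C[i][j] += A[i][k] * B[k][j];

    cout << C[0][0] << endl;             // prints 6 (1*2 summed over k = 0, 1, 2)
    return 0;
}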

Asymptotic notations: asymptotic analysis considers the worst, average and best cases of an algorithm. The key idea of asymptotic analysis is to provide a measure of algorithm efficiency that does not rely on machine-specific constants and does not require implementing the algorithms and timing the programs in order to compare them. Asymptotic notations are mathematical tools that express the time complexity of algorithms in asymptotic analysis. The following three asymptotic notations are most commonly used to represent algorithm time complexity: Θ (Theta), O (Big O) and Ω (Omega).

A simple way to get Theta notation of an expression is to drop low order terms and ignore leading constants. For example, consider the following expression.

3n^3 + 6n^2 + 6000 = Θ(n^3)

Dropping lower order terms is always fine because there will always be an n0 after which Θ(n^3) has higher values than Θ(n^2), irrespective of the constants involved.
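
One way to see this is to compare the full expression with its leading term as n grows; in the small check below the ratio settles near the leading constant 3, so the lower-order terms stop mattering:

#include <iostream>
using namespace std;

int main() {
    for (long long n = 10; n <= 100000; n *= 10) {
        double full = 3.0 * n * n * n + 6.0 * n * n + 6000.0;       // 3n^3 + 6n^2 + 6000
        double lead = 1.0 * n * n * n;                              // n^3
        cout << "n = " << n << "  ratio = " << full / lead << endl; // approaches 3
    }
    return 0;
}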

Algorithm time complexity calculation using big O notation and special cases:

To calculate the complexity of an algorithm - the time required for the algorithm to run - we need to adopt a measure that meets the following criteria:

  1. It has nothing to do with factors external to the algorithm (programming language used, device speed, compiler, etc.).
  2. It's related to the nature of the work of the algorithm.

By the first condition we mean that the measured complexity stays the same regardless of the hardware and tools available to different developers. The second condition means we have to find a way to measure complexity that depends on the nature of the algorithm's work. Each algorithm has a particular goal and achieves that goal through a sequence of steps or operations. For example, an algorithm that sorts an array of numbers works differently from an algorithm that finds a cycle in a graph: the first compares numbers, while the second traverses the graph.

Conclusion

Big O notation is fundamental in the analysis and design of algorithms, providing a framework for comparing the efficiency of different algorithms. It allows developers to predict the behavior of algorithms in terms of execution time and space requirements as the input size grows. Understanding and applying Big O notation is crucial for creating efficient and scalable algorithms.
