O(n) notation

#1
Software optimization is something that's never really taught in plain English, and it's hard to learn outside of computer science courses. Ninety percent of what you'll need, you can easily pick up without school. But optimization is one of the things that isn't easily taught outside of it.

In computer science, we generally use a notation known as Big O (written O(n) and pronounced "big O of n") to describe how fast an algorithm is. But when we're calculating an algorithm's Big O, we don't time anything. We don't take any benchmarks. We don't find average values, or anything else. We care about one thing: the worst case scenario.

You could have an algorithm that is generally lightning fast, but if there exists even one case where the algorithm is extremely slow (however rare), it's considered a slow algorithm in Big O notation. But why do we do this?

We use the worst case scenario because Big O concerns itself with the algorithm's inherent efficiency: how the number of steps grows with the size of the input. We don't care how fast it runs on any particular hardware. We care how many steps it tries to take.
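
To make "counting steps" concrete, here's a minimal Python sketch (the function and its step counter are mine, purely for illustration): a linear search that counts comparisons instead of measuring time. The worst case is when the target isn't in the list at all, because then the loop has to touch every one of the n items.

Code:
def linear_search_with_count(items, target):
    # Returns (index, steps); counts one step per comparison.
    steps = 0
    for i, value in enumerate(items):
        steps += 1
        if value == target:
            return i, steps
    return -1, steps  # worst case: we compared against all n items

# The target is missing, so the step count equals the list size.
print(linear_search_with_count([4, 8, 15, 16, 23, 42], 99))  # prints (-1, 6)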

What's "N"?

N is, in short, the size of your data set. It could be the number of bytes you're trying to compress, or the number of items in a list you're trying to sort, or the number of pixels in an image. It's the number of items you have to run your algorithm on.

What's O(n)? One way to build the intuition is to benchmark the worst possible case at some size N (we'll say 10). Let's suppose your worst case runs the algorithm on 10 items in one minute. Benchmark it again at size 20. How much longer did it take?

In plain English: let's say we're going to run a product rater on 10 products. It returns a list of 1-5 star ratings generated by some all-knowing algorithm, and tells you, in order, which ones you should buy first. We're gonna benchmark it. Then we're gonna run it again on 20 items and compare (see the sketch after this list).
  • It ran in the same time. Congratulations, you have an O(1) algorithm. Your algorithm runs in constant time, no matter how many items are in your list. 
  • It took exactly twice as long (two minutes). Congratulations! You have an efficient algorithm, and it's O(n). This means that if you double your data size, your algorithm ONLY takes twice as long. 
  • It took four times as long (four minutes). You have an O(n squared) algorithm or worse. You doubled your data size, and your time went up by the square of that: 2² = 4. These algorithms are common. They're often unavoidable, but we try to do better than this when we can. 
  • Your algorithm took more than two minutes but less than four. Here, we have something that's in between O(n) and O(n^2). These algorithms are usually O(n * log n) algorithms (about 2.6 minutes in this example), but not always. These algorithms are very common out in the wild. 
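
Here's that comparison as a quick back-of-the-envelope Python sketch (the growth formulas are the standard textbook ones; nothing here is a real benchmark). It just asks: how much longer should each class take when n doubles from 10 to 20?

Code:
import math

n1, n2 = 10, 20  # we double the data size
growth = {
    "O(1)":       lambda n: 1,
    "O(n)":       lambda n: n,
    "O(n log n)": lambda n: n * math.log2(n),
    "O(n^2)":     lambda n: n ** 2,
}
for name, f in growth.items():
    print(f"{name}: {f(n2) / f(n1):.1f}x slower")
# O(1): 1.0x, O(n): 2.0x, O(n log n): 2.6x, O(n^2): 4.0x
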
This is the intuition behind Big O notation, but we don't actually calculate it like this. We do benchmark our code as programmers, but when we're working out the Big O, we use code analysis instead. Benchmarks are too fallible for this.

So if we use code analysis, how does this work? Basically, we're concerned with loops. I'm going to write a little Python for an O(1) algorithm. This is one that runs in constant time, no matter what.

Code:
print("Hello World. I have some gigantic list. I'm going to print the very first thing in it: ");
print(giganticList[0]); # Just prints the first element in some huge list

Most implementations will run this in constant time. No matter how big the list is, it always takes the same amount of time to get the first element.

O(n)

Code:
my_counter = 0
while my_counter < 1000:    # n is 1000 here; the loop body runs n times
    my_counter += 1
    print(my_counter)

Here, we're quite literally just wasting CPU cycles: counting up to 1,000 and printing the count every time we go up. If we make it count to 2,000, it will take twice as long. This is an O(n) algorithm.

O(n squared):

This happens whenever we have a loop INSIDE of another loop, with both running over the same data. This is often necessary, but when we do it, we increase the amount of time required pretty significantly.

Code:
my_counter = 0
my_limit = 1000

while my_counter < my_limit:            # outer loop runs n times
    my_counter += 1
    my_other_counter = 0

    while my_other_counter < my_limit:  # inner loop runs n times per outer pass
        my_other_counter += 1
        print("My other counter is:", my_other_counter)

This is an improved CPU waster. It counts to 1,000 exactly 1,000 times. It prints its result every single time it counts up, but once it finishes a pass, it redoes it, and redoes it again, until it has done it 1,000 times and counted a total of 1,000,000 times.

This is an O(n squared) algorithm. If we increase the size we're counting to, it will take much, much longer very quickly: doubling the limit quadruples the run time.

What about O(n log n)? Check out the heap sort algorithm to see a real world example of one of these.
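
If you'd rather poke at an O(n log n) algorithm directly, here's a minimal heap sort sketch using Python's standard heapq module: n pushes and n pops, each costing O(log n), for O(n log n) overall.

Code:
import heapq

def heap_sort(items):
    # n pushes and n pops, each O(log n) -> O(n log n) total
    heap = []
    for item in items:
        heapq.heappush(heap, item)  # each push is O(log n)
    return [heapq.heappop(heap) for _ in range(len(items))]  # each pop is O(log n)

print(heap_sort([5, 1, 4, 2, 3]))  # prints [1, 2, 3, 4, 5]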

What about O(n^4)? I made one! It's highly optimized for average case scenarios, but for the worst case scenario, it's absolutely abysmal. And thus, it's not considered a very good algorithm in Big O notation.
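
That algorithm isn't shown here, but to get a feel for how fast O(n^4) blows up, here's a contrived illustration (not the rater algorithm itself): four nested loops over the same data, so the innermost line runs n * n * n * n times. At n = 100, that's already 100,000,000 iterations.

Code:
def count_zero_quadruples(items):
    # Contrived O(n^4) example: count index quadruples whose values sum to zero.
    n = len(items)
    count = 0
    for a in range(n):
        for b in range(n):
            for c in range(n):
                for d in range(n):  # the body runs n**4 times
                    if items[a] + items[b] + items[c] + items[d] == 0:
                        count += 1
    return count

print(count_zero_quadruples([-2, -1, 0, 1, 2]))  # small n keeps this tolerable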
