Sorting and Efficiency

In the past two weeks we talked about some sorting algorithms and their efficiency. We learned how to express the time complexity of algorithms, upper and lower bounds, big Oh and big Theta. I will try to talk a little bit about the sorting algorithms and their efficiency.

Sorting algorithms take some kind of information and put it in a certain order, like sorting a random list into decreasing order, and they can also help extract precise information from big databases. Sorting has always been a human necessity, and because it is so useful it has attracted a lot of research in computer science. Sorting shows up in everyday tasks, like ordering products by lowest price, or by name, location, and so on. The ones we looked at in class are list-sorting algorithms, like Quick Sort, Merge Sort, Counting Sort and Selection Sort.

To measure the efficiency of a sorting algorithm you analyse it for a list of size n (an arbitrary natural number) and count how many steps it takes to complete. For example, in this code:

This code finds a number in a list:

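The original code screenshot didn't survive, so here's a sketch of the kind of linear search the analysis below is counting. The function name `find` and the `-1` failure value are my reconstruction; the line numbers in the comments are there to match the discussion:

```python
def find(lst, y):
    i = 0                    # line 1: runs once
    while i < len(lst):      # line 3: condition checked on every pass
        if lst[i] == y:      # line 4: one check per pass
            return i         # line 5: found it
        i = i + 1            # line 6: one assignment per pass
    return -1                # line 7: runs once, returns a failure
```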

For this simple code we can see that the while loop repeats as many times as the size of the list (I will call it n) if the element y is not in the list; that is the worst case. For a tight bound on the worst case: line #1 executes 1 time; lines #3 to #6 execute 3n times (on each pass, 1 step for the condition check on line #3, 1 for the if, and 1 for the assignment to i), plus 1 more step when the while condition on line #3 finally fails; and line #7 executes 1 time to return a failure. That gives us 3n + 3 steps, so the algorithm is O(n). The most important part is the 3n term, because it shows how the runtime grows with the size n of the list; the constants don't matter in big Oh.

Let’s do it with a sorting algorithm now:

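The screenshot is gone here too, so this is a plausible reconstruction of the Selection Sort being analysed. The use of `min` and `index` is my guess based on the description; the numbers in the comments match the line-by-line count below:

```python
def selection_sort(lst):                 # line 7: def, runs once
    for i in range(len(lst)):            # line 8: loops n times
        smallest = min(lst[i:])          # line 9: scans the rest of the list
        j = lst.index(smallest, i)       # line 10: index of the smallest item
        lst[i], lst[j] = lst[j], lst[i]  # line 11: swap items i and j
    return lst                           # line 12: runs once
```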

Line #7: it runs 1 time.

Line #8: n times (n = size of the list).

Line #9: it calls a function that looks for the minimum of the remaining list, which has to go through all of it, so roughly n steps per loop (this call could be replaced by a for loop).

Line #10: 1 time per loop, just to get the index of the smallest item in the list.

Line #11: 1 time per loop, to swap the item at position i with the item at position j (the smallest one found before).

Line #12: 1 time.

In the end, for Selection Sort we get 1 + n(n + 2) + 1 = n^2 + 2n + 2 steps, so we can say it has O(n^2) complexity. With Selection Sort it doesn't matter if the list is already sorted, random, or sorted in reverse order: it always performs in quadratic time, O(n^2). Did you understand everything so far?

Most algorithms can show different runtimes on a list of size n: quadratic in the worst case, like Quick Sort; linearithmic (n log n), like Merge Sort; or linear, cubic, exponential (2^n), or even worse (n^n). It always depends on the input list: whether it's already sorted, totally random, or in decreasing order. Each sorting algorithm behaves differently on each type of input. In the last lab we compared the runtimes of the sorting codes in Python (excluding Python's built-in sort): the most effective on random lists was Quick Sort, and on both sorted and reverse-sorted lists it was Merge Sort.

So you must be asking yourself: what's the difference between the algorithms, if all they do is sort a list? Well, the difference is in how they do it.

Quick Sort: it chooses a pivot, then moves all the numbers bigger than the pivot to its right and all the smaller ones to its left. At that point we know the pivot is in its final place, so it calls Quick Sort on each of the two parts until the list is fully sorted. The choice of the pivot is really important, because a badly picked pivot can make your runtime much bigger.
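To make the idea concrete, here's a minimal sketch (not the lecture code; it just uses the first element as the pivot, which is exactly the kind of naive choice that behaves badly on already-sorted lists):

```python
def quick_sort(lst):
    if len(lst) <= 1:          # a list of 0 or 1 items is already sorted
        return lst
    pivot = lst[0]
    smaller = [x for x in lst[1:] if x < pivot]   # goes to the left of the pivot
    bigger = [x for x in lst[1:] if x >= pivot]   # goes to the right of it
    # the pivot lands in its final position between the two sorted parts
    return quick_sort(smaller) + [pivot] + quick_sort(bigger)
```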

Merge Sort: divides the list in half, sorts the halves, and merges them back together.
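In code, the idea could look like this (again a sketch, not the lecture version):

```python
def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    return result + left[i:] + right[j:]  # one side may have leftovers

def merge_sort(lst):
    if len(lst) <= 1:
        return lst
    mid = len(lst) // 2
    return merge(merge_sort(lst[:mid]), merge_sort(lst[mid:]))
```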

Selection Sort: this one is the simplest; you repeatedly scan the unsorted part of the list for the smallest item and put it in its right place.

Bubble Sort: the idea is to go through the list a number of times, and on each pass bring the biggest element to the end of the list by comparing adjacent elements and putting them in order. If it goes one whole pass without swapping anything, it knows the list is sorted, so it always needs that one extra check.
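A sketch of that "keep passing until no swaps happen" idea:

```python
def bubble_sort(lst):
    while True:
        swapped = False
        for i in range(len(lst) - 1):
            if lst[i] > lst[i + 1]:                      # adjacent pair out of order
                lst[i], lst[i + 1] = lst[i + 1], lst[i]  # swap them
                swapped = True
        if not swapped:      # a full pass with no swaps: the list is sorted
            return lst
```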

Here is a very interesting way to learn about sorting algorithms: a playlist with 6 short videos:

Assignment 2-2 and Exercise 3

Exercise 3 was not so hard. The best way to do it was writing some cases on paper and checking the patterns: go through a couple of examples and only after that begin writing your code. The key in this exercise, as in any recursive problem, is to build strong base cases and work the code piece by piece. I just wish the auto-tester had run a couple more times on the last day, at least.

As for Assignment 2-2, it was kind of hard, but three people could do it in a day, I guess. The assignment was all about recursion, and some of the function code looks a lot alike. We had some trouble with the doctest for all_regex_permutations, because the result in my code would always come out in a random order in the set. Except for that, it was not as hard as the first part, because this time we knew exactly what we were supposed to do.

Assignment 02

Well, this week was all about Assignment 02. My partner and I had a difficult time trying to figure out what the assignment really wanted us to do; we spent 8 hours or so doing something that we later discovered to be the second part: how to transform a regex string into a RegexTree. We dug up some info on the course forum and then finally discovered how to begin. We didn't manage to finish it, but we sure did our best. Stack Overflow and the lecture notes were a great help as well. My advice is: whenever you feel stuck on something, take a 10-minute break, then come back, read everything again, and look at the lecture notes; you will definitely have some insight and find a way to move on.

Week 7 – Recursion, again? Yeah

Hi there again, this week we have a long post. Let's talk about recursion: what it is, where it comes from, what it feeds on, and so on.

First of all I would like to start with a picture to give you an example of recursion, totally unrelated to computer science.

Here we have an example of infinite recursion. So, can you already tell me what recursion is?

Ok, let's try another example. Imagine yourself tired of studying. You say to yourself, "Ok, time for a break, let me watch an episode of How I Met Your Mother on Netflix." After you finish the episode you say, "Ok, just one more," and you go on doing the same thing until you reach the break point (no more episodes), and then you return to your studies. So, can you guess what the recursion is here? No? Ok, I'll explain a little more. You are doing the same thing over and over: watching an episode, finishing it, and watching another. You could say you are recursively watching, until you hit the break point of the recursion: no more episodes. And then you realise that the break point (the base case, in programming terms) is the most important thing in recursion; if the episodes were infinite, you would never study again.

Now that you've got a grasp of what recursion is, let's discuss it as programmers, or programmer wannabes. One of the easiest ways to understand recursion is through a comparison with an iterative program. Let me show you two pieces of code that do the same thing, one iteratively and the other recursively.
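The screenshot of the two functions is gone, so here is my reconstruction of what they must have looked like, based on the traces that follow (the names `sum_it` and `sum_rec` come from the text; the bodies are my best guess):

```python
def sum_it(n):
    """Sum the first n natural numbers, iteratively."""
    result = n
    for i in range(1, n):
        result += i
    return result

def sum_rec(n):
    """Sum the first n natural numbers, recursively."""
    if n == 0:        # the break point (base case): we already know this answer
        return 0
    return n + sum_rec(n - 1)
```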

The first one is the iterative version: result receives the value of n, then a for loop sums the variable i into result at each step. So the trace of the function sum_it(3) would be:

– result = 3 –> result = 3 + 1 = 4 –> result = 4 + 2 = 6

The recursive function first finds a case where it already knows the answer, then works its way back to give you the final result. The trace of sum_rec(3) would be:

– result = 3 + sum_rec(2) –> result = 3 + (2 + sum_rec(1)) –> result = 3 + (2 + (1 + sum_rec(0))) –> result = 6

Do you know what happened here? No? Ok, let me try to explain.
The only thing we are sure of is that sum_rec(0) is 0, ok? Because if you sum the first 0 natural numbers you get zero, duuuuuh. Now that we know sum_rec(0) equals zero, we know that sum_rec(1) = 1 + sum_rec(0) = 1 + 0 = 1. Then sum_rec(2) = 2 + sum_rec(1) = 2 + 1 = 3. Now we can do sum_rec(3), the one we want: sum_rec(3) = 3 + sum_rec(2) = 3 + 3 = 6. We went from the easiest case to the one we want; that's recursion.

So, why use recursion? Pros and Cons

Well, recursive code is more elegant, simpler to write, and easier to understand most of the time. But for some problems recursion ends up using more memory on the call stack and running slower than an iterative function that does the same thing. Here is a link to a performance comparison that I've found. Some programmers advise avoiding recursion, because it can easily cause a stack overflow; others say writing code recursively is a good approach, because it's easier to understand and to write. In the end, it always depends on the problem you have.

Hope you guys could understand a bit more of recursion with this post.

Cya o/

Week #6

Hi there everyone, this week was assignment week. It was really fun trying to solve the Tower of Anne Hoy; last year I did a similar program for the Tower of Hanoi in Java. This time I got stuck on the recursive part, the choosing of the "n" to be more exact, in the algorithm given by the handout: my stack wouldn't stop giving me "pop from empty stack" errors. I spent a couple of hours trying to make it work, but I didn't succeed. I did manage to make the function do the whole job in 2^n – 1 moves, though.
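For reference, the part I did get working is the classic 3-peg Tower of Hanoi recursion (not the 4-stool algorithm from the handout), which solves n cheeses in 2^n – 1 moves. A sketch, with names of my own choosing:

```python
def move_cheeses(n, source, spare, target, moves):
    """Move n cheeses from source to target using spare,
    recording each move as a (from, to) pair."""
    if n == 1:
        moves.append((source, target))
    else:
        move_cheeses(n - 1, source, target, spare, moves)  # clear the top n-1
        moves.append((source, target))                     # move the biggest one
        move_cheeses(n - 1, spare, source, target, moves)  # pile them back on top

moves = []
move_cheeses(3, 'A', 'B', 'C', moves)
# len(moves) == 2 ** 3 - 1 == 7
```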

Recursion

This week we discussed recursion and Assignment 01. I already have a little background in recursion, so it was not that difficult to get, and the turtle example reminded me of a lot of things. Recursive programs are not that hard to write; the challenge most of the time is finding the break point and deciding what to do so it works. This week I'm going to start working on my assignment. The labs are going pretty smoothly; the only one that made me lose some sleep was the second one, where we had to come up with a solution to improve the time of the queue's pop operation. I'm still not sure how to do it, but maybe during reading week I can try a little more.

Intro and Object Oriented Programming

Hi, I'm a Brazilian student at the University of Toronto, and this is my blog for CSC148, Introduction to Computer Science. My program in Brazil is not Computer Science but Digital Media and Systems; I am part of a major scholarship program from my country. I think that's enough for introductions.

Object Oriented Programming –

This is not my first time with an object-oriented language: I took a course back in Brazil called Object Oriented Programming, where we learned some of it with Java. Here I'm seeing Python for the first time, and it's a little different. One really nice way to learn a new language is codeacademy.com: it's simple, fast and easy.

Object Oriented Programming differs from procedural programming in certain aspects: while in procedural programming we focus on writing functions that work on data, in OOP we focus on creating objects that contain both data and functions. OOP has powerful tools to make a programmer's life easier, like inheritance and code reuse. OOP is more like the real world, where an object has parts (data, attributes) and does things (methods), and those attributes can be changed dynamically while the program runs.

A class is like a blueprint, an instruction sheet, and with it we can create objects that have the same parts but slight differences. For example, a Car class tells me a car has wheels, a colour, an engine, a transmission, and a list of extra features (like an MP3 player, Bluetooth, etc.). We can make a bunch of different cars from the same blueprint just by changing the values of the attributes, and that is one of the big things OOP brought.

Another really nice thing is inheritance, where a class can inherit methods and data from another class, like Car inheriting from Vehicle, and we can override the inherited behaviour we want to change. Take a Car and a Plane as examples: both have an engine, wheels, and a colour, and both move in some way (that would be a method). All of this could be written in the class Vehicle, and then, to reduce rewriting, we could make a class Car and a class Plane that inherit from Vehicle. Each new class can rewrite the move method: Plane.move() would be "Fly" and Car.move() would be "Drive". The same method signature with a different body is called a method override; there is also overloading, where we change the parameters the method receives.
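The Vehicle/Car/Plane example above could look like this in Python (a sketch with made-up attributes):

```python
class Vehicle:
    def __init__(self, colour, wheels):
        self.colour = colour     # data every vehicle has
        self.wheels = wheels

    def move(self):
        return "Moving"          # generic behaviour

class Car(Vehicle):
    def move(self):              # overrides Vehicle.move: same signature,
        return "Drive"           # different body

class Plane(Vehicle):
    def move(self):
        return "Fly"
```

One caveat: Python doesn't have Java-style overloading (two methods with the same name but different parameter lists); the usual substitute is default parameter values.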

Data protection is another thing worth talking about: in OOP we can encapsulate attributes and methods to make sure that only the right "person" or function is able to change them. In Java we have 4 levels of protection:

  • Protected
  • Private
  • Public
  • Default

Each one has its own properties. Public members can be accessed by anyone: classes, subclasses, and other packages in the same application. Protected members can be accessed by classes in the same package and by subclasses, even ones in other packages. Default (package-private) members are visible only within the same package. Private members can only be accessed by the class itself, so you have to create methods to expose that information and to modify it from other places. I don't really know if Python works the same way, but I will definitely try to learn about it.
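From what I've seen so far, Python doesn't have these keywords at all: everything is public, and "protection" is done by naming convention (a leading underscore means "please treat this as private"). A sketch of the convention, with a made-up Account class:

```python
class Account:
    def __init__(self, owner, balance):
        self.owner = owner        # public: anyone can read or change it
        self._balance = balance   # "private" by convention only

    def get_balance(self):        # accessor method, like a Java getter
        return self._balance

acc = Account("Ana", 100)
# acc._balance is still reachable from outside -- Python trusts the programmer
```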

These three things that I talked about here, inheritance, abstraction (classes, override, overload), and encapsulation, are in my opinion the pillars of Object Oriented Programming.