Wednesday 5 December 2012

Last Week of Classes


This is the last week of classes for the first semester, so let me finish it off with an interesting question!

Source: http://www.math.osu.edu/~maharry.1/ProblemsOfTheMonth/ProblemsOfTheMonthJanuary2006.html

Find the smallest number (other than 1) in the sequence {1 , 11 , 111 , 1111 , 11111, ...} that is a perfect square (like 25 or 36 or 144) or prove that none of them are perfect squares.

11 is not a perfect square
111 is not a perfect square
1111 is not a perfect square
11111 is not a perfect square
111111 is not a perfect square
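To back up that list, here is a quick brute-force check (my own sketch in Python, not part of the original problem), using `math.isqrt` to test squareness exactly:

```python
from math import isqrt

def is_perfect_square(n):
    """True iff n is a perfect square (exact integer test)."""
    r = isqrt(n)
    return r * r == n

# Check the first 30 repunits after 1: 11, 111, 1111, ...
repunit = 1
for _ in range(30):
    repunit = repunit * 10 + 1
    assert not is_perfect_square(repunit)
print("no perfect squares among the first 30 repunits after 1")
```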

Intuition:
                I think none of them are perfect squares.

                The recursive definition of the sequence (f(i) gives the (i + 1)th term, with 1 as the first term):
                f(i) =  1                       if i = 0
                        f(i-1) + 10^i           if i > 0

Each term is a finite geometric series, 1 + 10 + 10^2 + ... + 10^i, so the closed form is:

                f(i) = (10^(i+1) - 1) / 9

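As a sanity check (my own code, not part of the solution), the closed form (10^(i+1) - 1)/9 of the geometric series agrees with the recursive definition of the sequence:

```python
def f(i):
    """Recursive definition: f(0) = 1, f(i) = f(i-1) + 10**i."""
    return 1 if i == 0 else f(i - 1) + 10**i

def closed(i):
    """Closed form of the geometric series 1 + 10 + ... + 10**i."""
    return (10**(i + 1) - 1) // 9

for i in range(10):
    assert f(i) == closed(i)
print("closed form matches the recurrence for i = 0..9")
```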
Now we need to know what perfect squares are.
We have two cases, odd and even.
For odd integer:
Let n be an arbitrary odd integer, where n = 2m+1
(2m+1)^2 = 4m^2 + 4m + 1
         = 4(m^2 + m) + 1
         = 4k + 1                # where k = m^2 + m

For even integer:
Let n be an arbitrary even integer, where n = 2m
(2m)^2 = 4m^2
       = 4k                      # where k = m^2



So perfect squares can be written in the form 4k or 4k + 1, where k is an integer greater than or equal to zero.
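The two cases can also be checked exhaustively for small n (a throwaway check I added, assuming nothing beyond the algebra above):

```python
# (2m)^2 = 4m^2 and (2m+1)^2 = 4(m^2 + m) + 1,
# so every perfect square is 0 or 1 mod 4.
for n in range(1000):
    assert (n * n) % 4 in (0, 1)
print("n^2 mod 4 is always 0 or 1")
```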


Proof (by contradiction):
                Assume some term f(i) of the sequence, with i >= 1, is a perfect square. By the cases above, f(i) can then be written in the form 4k or 4k + 1. But every term after the first ends in the digits 11, so f(i) = 100q + 11 = 4(25q + 2) + 3 for some integer q >= 0, which puts f(i) in the form 4k + 3, a contradiction! So the sequence f(i) contains no perfect squares other than 1.
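The key fact in the contradiction, that every term after the first is 3 mod 4, can be checked numerically too (again my own sketch):

```python
# Every repunit with at least two digits ends in ...11,
# so it is 100*q + 11 = 4*(25*q + 2) + 3, i.e. 3 mod 4.
repunit = 11
for _ in range(20):
    assert repunit % 4 == 3   # never 0 or 1, so never a square
    repunit = repunit * 10 + 1
print("every repunit after 1 is 3 mod 4")
```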


I am not sure if my explanation is correct, but I gave this question my best shot without anyone's help. Feel free to comment on my solution.

P.S. Happy studying for finals everyone~

Sunday 25 November 2012

Assignment 3

Assignment 3 is quite interesting. Reading it over, I think that question 1 is actually harder than the rest. I listed out all the binary strings of length 4 as the hints suggested but still couldn't really get the intuition for how to answer it correctly. Question 2 should not be difficult since it is just proving termination, but the loop's termination condition is y == 0, and it is not easy to see that y will be 0 at the last iteration. For question 3, drawing the machine should not be difficult, but proving it correct will be more challenging since we haven't really done much proving with DFSAs. Last but not least, question 4: just by looking at it, I can see that it will be very time-consuming. Designing the algorithm can take some time and the proof can be very long. I should get started soon since I only have one week left.

Sunday 18 November 2012

DFSA

So last week we did some deterministic finite state automata.
I personally think that these are much easier to understand than the material we did before. Drawing machines that accept a language is mostly logical if you put some thought into it.
However, it is only the start and I am expecting more challenge this week.

Sunday 11 November 2012

This Week...

This has been a busy week for me because of another csc course, csc207. I didn't have much time for anything csc236-related since I had a lot of work to do on the csc207 assignment. I worked on it every day this week and even stayed in the Bahen Centre until 1 am on Friday with my group members. After I got home, I worked until 5:30 am, slept for 4 hours, and then continued working. I hope the following week will be better for me. Luckily, tomorrow is a break. I can finally get a good sleep.

Sorry if this is not csc236 related, but this week I've been occupied by other courses.

Sunday 4 November 2012

Midterm tomorrow

Another midterm exam is coming up tomorrow. Last time I did a pretty awful job. Reviewing my old exam, I found out that I could've done so much better. Silly mistakes killed me. This time I will be more careful and try not to miss anything that should be written. Sorry that this post is so short, but I really have to get back to studying.
See you guys again next week~

Sunday 28 October 2012

Master Theorem

So for the past week, we've been doing the Master Theorem. It was quite confusing at the beginning, since it is hard to believe that the runtime of complicated recursive functions can be analysed so easily with it. Here's the "formula": for a recurrence of the form T(n) = a T(n/b) + Θ(n^d),

T(n) = Θ(n^d)             if a < b^d
       Θ(n^d log n)       if a = b^d
       Θ(n^(log_b a))     if a > b^d

I found some exercises online and confirmed that it actually works. It is a shortcut, like a cheat sheet that skips the long process of unwinding to get the answer. This is definitely one of the most interesting theorems I've ever encountered.
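To check exercises quickly, I sketched a tiny helper (my own code with made-up names, not from the course) that classifies a recurrence T(n) = a·T(n/b) + Θ(n^d) into the three cases:

```python
from math import log

def master_theorem(a, b, d):
    """Classify T(n) = a*T(n/b) + Theta(n^d) by the Master Theorem.

    Assumes a >= 1, b > 1, d >= 0; returns Theta of the solution.
    """
    if a < b**d:
        return f"Theta(n^{d})"
    if a == b**d:
        return f"Theta(n^{d} log n)"
    return f"Theta(n^{log(a, b):.3f})"   # exponent is log_b(a)

# Merge sort: T(n) = 2T(n/2) + Theta(n), so a = 2, b = 2, d = 1
print(master_theorem(2, 2, 1))   # the a == b^d case: Theta(n^1 log n)
```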

Wednesday 17 October 2012

Unwinding...

So in today's lecture we went through the complexity of merge sort. Before proving it, most of us already knew that the worst case is O(n lg n), but I was amazed by the unwinding process of breaking the code into pieces and analysing it to reach the point that shows it's n lg n. Here is the code:


def MergeSort(A, b, e):
    if b >= e: return            # zero or one element: already sorted
    m = (b + e) // 2
    MergeSort(A, b, m)
    MergeSort(A, m+1, e)
    # merge sorted A[b..m] and A[m+1..e] back into A[b..e]
    B = [0] * len(A)             # auxiliary array
    for i in range(b, e+1): B[i] = A[i]
    c = b
    d = m + 1
    for i in range(b, e+1):
        if d > e or (c <= m and B[c] < B[d]):
            A[i] = B[c]
            c = c + 1
        else:  # d <= e and (c > m or B[c] >= B[d])
            A[i] = B[d]
            d = d + 1

In lecture, we treated all the lines that don't depend on n as constant 1 (which makes things simpler), and lines with a loop as n. In the code, you can see that it calls itself (recursion) on lines 4 and 5, each on a list of half the original length. So we can give those calls time complexities T(⌈n/2⌉) and T(⌊n/2⌋). In the end, the recurrence for the time complexity is:

    T(n) = 1                                   if n = 1
           T(⌈n/2⌉) + T(⌊n/2⌋) + n + 1         if n > 1

By unwinding, we take n = 2^k (so the ceilings and floors disappear). So,
    T(2^k) = 2 T(2^(k-1)) + 2^k + 1
           = 2 (2 T(2^(k-2)) + 2^(k-1) + 1) + 2^k + 1
           = 2^2 T(2^(k-2)) + 2(2^k) + 3
           = 2^2 (2 T(2^(k-3)) + 2^(k-2) + 1) + 2(2^k) + 3
           = 2^3 T(2^(k-3)) + 3(2^k) + 7
           ...
           = 2^i T(2^(k-i)) + i(2^k) + 2^i - 1
           ...
           = 2^k T(2^(k-k)) + k(2^k) + 2^k - 1
           = n + n lg n + n - 1        (since 2^k = n and T(1) = 1)
           = n lg n + 2n - 1
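To convince myself the algebra above is right, here is a quick check of the recurrence against the final formula (my own sketch, for n a power of 2):

```python
from math import log2

def T(n):
    """The merge sort cost recurrence from lecture (n a power of 2)."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n + 1

for k in range(11):
    n = 2**k
    assert T(n) == n * log2(n) + 2*n - 1
print("T(n) = n lg n + 2n - 1 for n = 1, 2, 4, ..., 1024")
```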

So we are convinced that merge sort runs in Θ(n lg n) time.
I personally think that this is a very useful way to analyse recursive code, so I will explore more kinds of recursive sorting algorithms to really understand the unwinding method.
I'll blog again next week.
- Brian out.