This seems like a great service:
https://ting.com
Refreshing pricing plans.
Saturday, January 26, 2013
Thursday, January 24, 2013
Fender
A big animal (dog?), sadly, leaped over the median this evening on our way home and collided with our car. I think it struck a glancing blow, so I hope everything turns out all right...it was not a place to stop, unfortunately, and in the dark of night I did not get a good sense of what fully happened. I only remember something leaping over a median as we drove uphill...thinking, ah, I hope that represents a garbage bag...seeing the leaping outstretched legs, and a glancing thud.
Unfortunately for our car, the animal shattered the driver-side turn-signal cover and dented and bent the driver-side fender. The driver-side door will not open, so I will likely need to get a sense of how to repair it in the near future. The headlamp I can manage easily enough on my own.
UPDATE: visited Pick 'n Pull today and purchased a replacement driver-side turn signal assembly. Some other car repairs today: replaced the driver-side main bulb; swapped out wiper blades; re-installed cabin kick panels; replaced worn clutch pedal cover; sprayed lithium grease on the door hinges and automatic seat belt tracks; cleaned the oxidized plastic of the headlamp assemblies; added door ajar switch cover to the passenger side door. I also straightened out the driver-side fender near the door so I could open the door again. Still have a salad-plate sized dent in the driver-side fender, but I can wait on that for a moment until I have some time to take it in to a body shop. I briefly considered visiting all four local Pick 'n Pull locations today, but decided I did not want to spend the several hours it would take to do so.
Also: the car passed 225,000 miles today : o )
Reminder: double-check the last time I changed the timing belt : o |
UPDATE: the nice thing about keeping an AUTO binder of all repair work: I last changed the timing belt in July 2010, at 188,000 miles. So, I will have to change it again when the car gets to about 250,000 miles.
Thursday, January 17, 2013
Introduction to Algorithms, 3rd edition - Chapter 2
Switching to the third edition, just because.
Exercises
2.1-1 Using Figure 2.2 as a model, illustrate the operation of INSERTION-SORT on the array A = < 31, 41, 59, 26, 41, 58 >
Note: solution produced via free tool LucidChart
2.1-2 Rewrite the INSERTION-SORT procedure to sort into nonincreasing instead of nondecreasing order.
INSERTION-SORT(A)
1. for j = 2 to A.length
2.     key = A[j]
3.     // Insert A[j] into the sorted sequence A[1..j - 1].
4.     i = j - 1
5.     while i > 0 and A[i] < key
6.         A[i + 1] = A[i]
7.         i = i - 1
8.     A[i + 1] = key
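If you want to run this, here is a rough Python translation of the same nonincreasing insertion sort (my own sketch, not the book's; 0-based indexing, and the function name is mine):

def insertion_sort_nonincreasing(a):
    # Sort list a in place into nonincreasing order.
    for j in range(1, len(a)):
        key = a[j]
        # Insert a[j] into the already-sorted prefix a[0..j-1].
        i = j - 1
        while i >= 0 and a[i] < key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a

# insertion_sort_nonincreasing([31, 41, 59, 26, 41, 58]) -> [59, 58, 41, 41, 31, 26]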
2.1-3 Consider the searching problem:
Input: A sequence of n numbers A = < a1, a2, ..., an > and a value v.
Output: An index i such that v = A[i] or the special value NIL if v does not appear in A.
Write pseudocode for linear search, which scans through the sequence, looking for v. Using a loop invariant, prove your algorithm is correct.
LINEAR-SEARCH(A, v)
1. i = 1
2. while i <= A.length
3.     if v == A[i]
4.         return i
5.     i = i + 1
6. return NIL
Loop invariant: At the start of each iteration of the while loop of lines 2-5, the algorithm has yet to find value v in subarray A[1..i - 1], which represents the searched elements of array A.
Initialization: We start by showing the loop invariant holds before the first loop iteration, when index i = 1. Since the algorithm has yet to begin searching for value v, subarray A[1..i - 1] correctly consists of zero elements and therefore cannot contain value v. Therefore, the loop invariant holds prior to the first iteration of the loop.
Maintenance: Next, we tackle the second property: showing each iteration maintains the loop invariant. The body of the while loop tests whether value v equals array element A[i] (line 3). If so, it exits with an output of index i. Otherwise, incrementing i for the next iteration of the while loop preserves the loop invariant, as the algorithm will now have fruitlessly searched subarray A[1..i - 1] for value v.
Termination: Finally, we examine what happens when the loop terminates. Condition i > A.length = n causes the while loop to terminate. Since each loop iteration increases i by 1, we must have i = n + 1 at that time. Substituting n + 1 for i in the wording of the loop invariant, we have the subarray A[1..n] consisting of the searched elements of array A. Observing subarray A[1..n] represents the entire array, we conclude v does not exist in A. At this point, the algorithm returns the special value NIL. Hence, the algorithm is correct.
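For completeness, a small Python translation of the LINEAR-SEARCH pseudocode above (my sketch; 0-based indexing, with None standing in for NIL):

def linear_search(a, v):
    # Scan a left to right; return the 0-based index of the first match.
    i = 0
    while i < len(a):
        if a[i] == v:
            return i
        i += 1
    return None  # None plays the role of NIL

# linear_search([31, 41, 59, 26], 59) -> 2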
2.1-4 Consider the problem of adding two n-bit binary integers, stored in two n-element arrays A and B. The sum of the two integers should be stored in binary form in an (n + 1)-element array C. State the problem formally and write pseudocode for adding the two integers.
Input: A sequence of n binary numbers A = < a1, a2, ..., an > and B = < b1, b2, ..., bn >, representing binary integers A and B, with binary numbers a1 and b1 representing the least-significant bits of each sequence, respectively
Output: A sequence of n + 1 binary numbers C = < c1, c2, ..., cn+1 > representing the sum of binary integers A + B, with binary number c1 representing the least-significant bit
BINARY-SUM(A, B, C)
1. carry = 0
2. for i = 1 to A.length
3.     if (A[i] + B[i] + carry) == 3
4.         carry = 1
5.         C[i] = 1
6.     elseif (A[i] + B[i] + carry) == 2
7.         carry = 1
8.         C[i] = 0
9.     elseif (A[i] + B[i] + carry) == 1
10.        carry = 0
11.        C[i] = 1
12.    else
13.        carry = 0
14.        C[i] = 0
15. C[A.length + 1] = carry  // the final carry becomes the most-significant bit
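A quick Python sketch of the same idea (mine, not the pseudocode above); it collapses the four if/elseif cases into modulo and integer division, which should produce the same bits:

def binary_sum(a, b):
    # a and b are equal-length lists of bits, least-significant bit first.
    # Returns a list of len(a) + 1 bits, least-significant bit first.
    n = len(a)
    c = [0] * (n + 1)
    carry = 0
    for i in range(n):
        total = a[i] + b[i] + carry
        c[i] = total % 2
        carry = total // 2
    c[n] = carry  # final carry becomes the most-significant bit
    return c

# binary_sum([1, 1, 1], [1, 0, 0]) -> [0, 0, 0, 1]   (7 + 1 = 8)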
2.2-1 Express the function n^3 / 1000 - 100n^2 - 100n + 3 in terms of Θ-notation
Θ(n^3)
2.2-2 Consider sorting n numbers stored in array A by first finding the smallest element of A and exchanging it with the element in A[1]. Then find the second smallest element of A, and exchange it with A[2]. Continue in this manner for the first n - 1 elements of A. Write pseudocode for this algorithm, which is known as selection sort. What loop invariant does this algorithm maintain? Why does it need to run for only the first n - 1 elements, rather than for all n elements? Give the best-case and worst-case running times of selection sort in Θ-notation.
SELECTION-SORT(A)
1. for i = 1 to A.length - 1
2.     min_index = i
3.     // Find the index of the smallest element in the unsorted subarray A[i..n]
4.     for j = i + 1 to A.length
5.         if A[j] < A[min_index]
6.             min_index = j
7.     // Exchange the current element with the smallest unsorted element
8.     key = A[i]
9.     A[i] = A[min_index]
10.    A[min_index] = key
Loop invariant: At the start of each iteration of the for loop of lines 1-10, the subarray A[1..i - 1] consists of the i - 1 smallest elements of A, in sorted ascending order.
It only needs to run for the first n - 1 elements of A because upon termination, index i will equal (A.length - 1) + 1 = n. The loop invariant guarantees subarray A[1..n - 1] will consist of the n - 1 smallest elements of A in sorted ascending order, which implies the remaining original element not only resides in A[n], but it also represents the nth smallest element of A.
Best and worst case: SELECTION-SORT(A) performs the same, in terms of Θ-notation, regardless of input. For example, the algorithm executes the same number of steps with a sorted or reverse-sorted array as input.
The algorithm runs in Θ(n^2): the outer loop takes n - 1 iterations, and on the i-th of those the inner loop takes

n - i

iterations. Together, the outer and inner loops take

c((n - 1) + (n - 2) + ... + 1) = c * n(n - 1)/2

steps, where c equals the number of incidental, non-comment lines of code in SELECTION-SORT. This simplifies to

(c/2)n^2 - (c/2)n

which equals a running time of Θ(n^2).
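A rough Python translation of the SELECTION-SORT pseudocode above (my sketch; 0-based indexing, with a tuple swap instead of the key temporary):

def selection_sort(a):
    # Sort list a in place into nondecreasing order.
    n = len(a)
    for i in range(n - 1):
        # Find the index of the smallest element in the unsorted suffix a[i..n-1].
        min_index = i
        for j in range(i + 1, n):
            if a[j] < a[min_index]:
                min_index = j
        # Exchange the current element with the smallest unsorted element.
        a[i], a[min_index] = a[min_index], a[i]
    return a

# selection_sort([31, 41, 59, 26, 41, 58]) -> [26, 31, 41, 41, 58, 59]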
2.2-3 Consider linear search again (see 2.1-3). How many elements of the input sequence need to be checked on the average, assuming the element being searched for is equally likely to be any element in the array? How about in the worst case? What are the average-case and worst-case running times of linear search in Θ-notation? Justify your answers.
On average, assuming the element being searched for is equally likely to be any element in the array, LINEAR-SEARCH(A, v) will check the i-th element of the input sequence

A.length - (i - 1)

times. For example, if A.length == 100 and we call LINEAR-SEARCH(A, v) 100 times, once with v matching each of the 100 elements, it will check element #1 100 times, element #2 99 times, element #3 98 times, and so forth, checking element #100 only 1 time. So, on average, each call checks

(A.length + (A.length - 1) + (A.length - 2) + ... + 1) / A.length

or

(A.length * (A.length + 1) / 2) / A.length = (A.length + 1) / 2

So, on average, assuming the element being searched for is equally likely to be any element in the array, LINEAR-SEARCH(A, v) will check

(n + 1) / 2

elements of the input sequence, or a little over one-half of all elements.
In the worst case, when value v matches the last element in A (or matches no elements in A), LINEAR-SEARCH(A, v) will check all elements in A.
Therefore, both on average and in the worst case, LINEAR-SEARCH(A, v) runs in Θ(n) time. In each case the number of checks is linear in n ((n + 1)/2 on average, n in the worst case), and that linear term dominates the order of growth.
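As a sanity check on the (A.length + 1)/2 figure, here is a small Python experiment (mine, not from the original post) that searches a 100-element array once for each possible position of v and averages the number of elements checked:

def comparisons(a, v):
    # Count how many elements linear search examines before finding v.
    for i, x in enumerate(a, start=1):
        if x == v:
            return i
    return len(a)

a = list(range(1, 101))           # 100 distinct elements
checks = [comparisons(a, v) for v in a]
print(sum(checks) / len(checks))  # 50.5, i.e. (100 + 1) / 2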
2.2-4 How can we modify almost any algorithm to have a good best-case running time?
We can modify almost any algorithm to have a good best-case running time by adding an up-front special case: check for one particular, easily recognized input of each size and return a precomputed answer for it in constant time. Since the best case is the minimum running time over all inputs of a given size, that single fast path determines the best case.
2.3-1
Using Figure 2.4 as a model, illustrate the operation of merge sort on the array A = <3, 41, 52, 26, 38, 57, 9, 49>
Note: solution produced via free tool LucidChart
2.3-2
Rewrite the MERGE procedure so it does not use sentinels, instead stopping once either array L or R has had all its elements copied back to A and then copying the remainder of the other array back into A.
MERGE(A, p, q, r)
1. nsub1 = q - p + 1
2. nsub2 = r - q
3. let L[1..nsub1] and R[1..nsub2] be new arrays
4. for i = 1 to nsub1
5.     L[i] = A[p + i - 1]
6. for j = 1 to nsub2
7.     R[j] = A[q + j]
8. i = 1
9. j = 1
10. k = p
11. while nsub1 > 0 and nsub2 > 0
12.     if L[i] <= R[j]
13.         A[k] = L[i]
14.         i = i + 1
15.         k = k + 1
16.         nsub1 = nsub1 - 1
17.     else A[k] = R[j]
18.         j = j + 1
19.         k = k + 1
20.         nsub2 = nsub2 - 1
21. if nsub1 > 0
22.     repeat
23.         A[k] = L[i]
24.         k = k + 1
25.         i = i + 1
26.         nsub1 = nsub1 - 1
27.     until nsub1 == 0
28. else
29.     repeat
30.         A[k] = R[j]
31.         k = k + 1
32.         j = j + 1
33.         nsub2 = nsub2 - 1
34.     until nsub2 == 0
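Here is a rough Python version of this sentinel-free merge (my own sketch; 0-based, inclusive subarray bounds). The final slice assignment copies whichever side still has leftover elements:

def merge(a, p, q, r):
    # Merge sorted subarrays a[p..q] and a[q+1..r] (0-based, inclusive)
    # without sentinels, copying any leftover elements at the end.
    left = a[p:q + 1]
    right = a[q + 1:r + 1]
    i = j = 0
    k = p
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            a[k] = left[i]
            i += 1
        else:
            a[k] = right[j]
            j += 1
        k += 1
    # Exactly one of the two lists still has elements; copy the remainder.
    a[k:r + 1] = left[i:] if i < len(left) else right[j:]
    return a

# merge([2, 4, 6, 1, 3, 5], 0, 2, 5) -> [1, 2, 3, 4, 5, 6]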
2.3-3
Use mathematical induction to show that when n is an exact power of 2, the solution of the recurrence

T(n) = 2 if n = 2
T(n) = 2T(n/2) + n if n = 2^k, for k > 1

is T(n) = n lg n (where lg denotes log base 2).
Base case: Let n = 2. Then T(n) = 2, which equals T(n) = n lg n, as 2*log2(2) = 2*1 = 2.
Inductive step: Assuming T(n) = n lg n holds for some n = 2^k with k >= 1, we must show T(2n) holds, as 2n = 2^(k+1) represents the next valid input. In this case, the recurrence gives T(2n) = 2T(n) + 2n.
We want to show
T(2n) = 2n lg(2n)
which, by the recurrence, is the same as showing
2T(n) + 2n = 2n lg(2n)
Since we assume T(n) = n lg n holds, we substitute n lg n for T(n), which results in
2n lg n + 2n = 2n lg(2n)
Via the associative and distributive properties, the left side simplifies, giving
2n(lg n + 1) = 2n lg(2n)
Our proof only concerns cases in which n represents an exact power of 2, specifically n = 2^k. Substituting 2^(k+1) for 2n and 2^k for n allows us to rewrite the equation as
2^(k+1) * (lg(2^k) + 1) = 2^(k+1) * lg(2^(k+1))
Simplifying via the logarithmic identity lg(2^k) = k reduces both sides to
2^(k+1) * (k + 1)
thereby showing the inductive step T(2n) holds.
Since both the base case and the inductive step have been proved, we have therefore proved T(n) = n lg n holds for all n where n = 2^k and k >= 1. Q.E.D.
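As an informal cross-check of the proof (not a substitute for it), this small Python snippet evaluates the recurrence directly and compares it with n lg n for the first several powers of 2:

from math import log2

def t(n):
    # Evaluate the recurrence T(2) = 2, T(n) = 2*T(n/2) + n for n a power of 2.
    if n == 2:
        return 2
    return 2 * t(n // 2) + n

for k in range(1, 11):
    n = 2 ** k
    assert t(n) == n * log2(n)  # exact here, since lg n = k for n = 2^k
print("recurrence matches n lg n for n = 2 .. 1024")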
2.3-4
We can express insertion sort as a recursive procedure as follows. In order to sort A[1..n], we recursively sort A[1..n - 1] and then insert A[n] into the sorted array A[1..n - 1]. Write a recurrence for the running time of this recursive version of insertion sort.
INSERTION-SORT(A, n)
1. if n > 1
2.     INSERTION-SORT(A, n - 1)
3.     INSERT(A, n - 1, A[n])
INSERT(A, sorted_end, el)
1. // Insert el into the sorted sequence A[1..sorted_end].
2. // Assume the total buffer equals A[1..sorted_end + 1], with
3. // el initially at A[sorted_end + 1]
4. i = sorted_end
5. while i > 0 and A[i] > el
6.     A[i + 1] = A[i]
7.     i = i - 1
8. A[i + 1] = el
In the worst case the running time satisfies the recurrence T(n) = T(n - 1) + Θ(n) (sort the first n - 1 elements, then shift up to n - 1 of them to insert A[n]), which solves to Θ(n^2). As with the non-recursive insertion sort, a reverse-sorted array represents the worst-case scenario, while an already-sorted array represents the best-case scenario.
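A minimal Python sketch of the recursive insertion sort above (mine; 0-based indexing, so the element being inserted sits at index n - 1 rather than A[n]):

def insertion_sort_recursive(a, n=None):
    # Sort a[0..n-1] by recursively sorting a[0..n-2] and inserting a[n-1].
    if n is None:
        n = len(a)
    if n > 1:
        insertion_sort_recursive(a, n - 1)
        insert(a, n - 1, a[n - 1])
    return a

def insert(a, sorted_end, el):
    # Insert el into the sorted prefix a[0..sorted_end-1];
    # el starts out at a[sorted_end].
    i = sorted_end - 1
    while i >= 0 and a[i] > el:
        a[i + 1] = a[i]
        i -= 1
    a[i + 1] = el

# insertion_sort_recursive([31, 41, 59, 26, 41, 58]) -> [26, 31, 41, 41, 58, 59]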
2.3-5
Referring back to the searching problem (see 2.1-3), observe that if the sequence A is sorted, we can check the midpoint of the sequence against v and eliminate half of the sequence from further consideration. The binary search algorithm repeats this procedure, halving the size of the remaining portion of the sequence each time. Write pseudocode, either iterative or recursive, for binary search. Argue that Θ(lg n) represents the worst-case running time of binary search.
The iterative case:
BINARY-SEARCH(A, v)
1. l = 1
2. r = A.length
3. while l <= r
4.     mid = l + FLOOR((r - l) / 2)
5.     if v == A[mid]
6.         return mid
7.     elseif v > A[mid]
8.         l = mid + 1
9.     else
10.        r = mid - 1
11. return NIL
The recursive case:
BINARY-SEARCH(A, v)
1. return SEARCH(A, v, 1, A.length)

SEARCH(A, v, l, r)
1. if l > r
2.     return NIL
3. else
4.     mid = l + FLOOR((r - l) / 2)
5.     if v == A[mid]
6.         return mid
7.     elseif v > A[mid]
8.         return SEARCH(A, v, mid + 1, r)
9.     else
10.        return SEARCH(A, v, l, mid - 1)
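For reference, a runnable Python version of the iterative binary search above (my sketch; 0-based indexing, None in place of NIL):

def binary_search(a, v):
    # Search the sorted list a for v; return an index or None.
    l, r = 0, len(a) - 1
    while l <= r:
        mid = l + (r - l) // 2
        if a[mid] == v:
            return mid
        elif v > a[mid]:
            l = mid + 1
        else:
            r = mid - 1
    return None  # None plays the role of NIL

# binary_search([26, 31, 41, 41, 58, 59], 58) -> 4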
We can express the recursive case with the recurrence

T(n) = c if n = 1
T(n) = T(n/2) + c if n > 1

where the constant c represents the time required to solve a problem of size 1 (and the constant work done at each halving step).
Note: solution produced via free tool LucidChart
We can construct a recursion tree to see why BINARY-SEARCH runs in worst-case lg n time. In (a) and (b) of the figure above, we see T(n) progressively expanding. In (c), we see the fully expanded tree, which has lg n + 1 levels, and each level contributes a total cost of c. The total cost, therefore, is c(lg n + 1), which is Θ(lg n).
2.3-6
Observe that the while loop of lines 5-7 of the INSERTION-SORT procedure in Section 2.1 uses a linear search to scan (backward) through the sorted subarray A[1..j - 1]. Can we use a binary search (see Exercise 2.3-5) instead to improve the overall worst-case running time of insertion sort to Θ(n lg n)?
INSERTION-SORT(A)
1. for j = 2 to A.length
2.     key = A[j]
3.     // Insert A[j] into the sorted sequence A[1..j - 1].
4.     i = j - 1
5.     while i > 0 and A[i] > key
6.         A[i + 1] = A[i]
7.         i = i - 1
8.     A[i + 1] = key

No. Binary search would cut the number of comparisons needed to find the insertion point on line 5 from O(j) to O(lg j), but the while loop of lines 5-7 still has to shift up to j - 1 elements to make room for the key. Each insertion therefore remains Θ(j) in the worst case, and the overall worst-case running time stays Θ(n^2).
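To make the point concrete, here is a small Python sketch (mine, not part of the original post) of insertion sort that finds each insertion point with the standard-library bisect module; the comparisons drop to O(lg j) per insertion, but the slice assignment that makes room for the key still moves Θ(j) elements in the worst case:

from bisect import bisect_right

def binary_insertion_sort(a):
    # Insertion sort that locates each key's position with binary search.
    for j in range(1, len(a)):
        key = a[j]
        pos = bisect_right(a, key, 0, j)  # binary search in the sorted prefix
        a[pos + 1:j + 1] = a[pos:j]       # shifting is still Θ(j) in the worst case
        a[pos] = key
    return a

# binary_insertion_sort([31, 41, 59, 26, 41, 58]) -> [26, 31, 41, 41, 58, 59]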
Previous - Chapter one
Next - Chapter three
Saturday, January 12, 2013
Free digital copy of movie after viewing at the theater
Why not offer people a free digital copy of the movie they just watched at the theater?
Radio songs
Heard on KPFA 94.1 FM out of Berkeley, CA on Saturday, Jan 5:
Julia Holter - Marienbad
http://www.youtube.com/watch?v=QukVgY8I_nA
ORQUESTA LA MODERNA TRADICION's "Juarez" from their album Goza Con Migo on the Outman Records label.
Heard on KDVS 90.3 FM out of Davis, CA on Wednesday, Jan 9:
Julia Holter - Marienbad
http://www.youtube.com/watch?v=QukVgY8I_nA
Dash camera
Thinking about getting one, if for no other reason than the beautiful sights of driving in California.
Wednesday, January 09, 2013
Installing Ubuntu to Dell Inspiron 15r
Lots of fun (ha ha, joking) this evening installing a copy of Ubuntu onto a Dell Inspiron 15R via USB:
- USB partition must be FAT16 to avoid boot failure
- Use a 1GB (of 8GB) partition to avoid a weird "out of disk space" error when writing the ISO to the USB
- Have to reinstall GRUB after the install
Oy.