4. A certain computer algorithm used to solve very complicated differential equations uses an iterative method. That is, the algorithm solves the problem the first time very approximately, and then uses that first solution to help it solve the problem a second time just a little bit better, and then uses that second solution to help it solve the problem a third time just a little bit better, and so on. Unfortunately, each iteration (each new problem solved by using the previous solution) takes a progressively longer amount of time. In fact, the amount of time it takes to process the k-th iteration is given by T(k) = 1.2k seconds.

A. Use a definite integral to approximate the time (in hours) it will take the computer algorithm to run through 60 iterations. (Note that T(k) is the amount of time it takes to process just the k-th iteration.) Explain your reasoning.

B. The maximum error in the computer's solution after k iterations is given by Error = 2k^-2. Approximately how long (in hours) will it take the computer to process enough iterations to reduce the maximum error to below 0.0001?
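
To make the setup concrete, here is a small illustrative sketch of the two given functions, the per-iteration time and the error bound (the function names are my own, not part of the problem):

```python
# Illustration of the formulas given in the problem:
#   T(k)     = 1.2k      seconds to process the k-th iteration
#   Error(k) = 2k^(-2)   maximum error after k iterations

def iteration_time(k):
    return 1.2 * k

def max_error(k):
    return 2 * k ** -2

for k in (1, 2, 3, 10, 60):
    print(k, iteration_time(k), max_error(k))
```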

2 answers

A: 1/3600 ∫[0,60] T(k) dk
see what you can do with that
time for 60 iterations
≈ ∫ 1.2k dk from 0 to 60
= [0.6k^2] from 0 to 60
= 0.6(60^2) - 0 = 2160 seconds
= 2160/3600 = 0.6 hours
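
One way to see why the integral is a reasonable approximation is to compare it with the exact sum of the 60 per-iteration times. A minimal sketch (variable names are my own):

```python
# Compare the integral approximation with the exact sum of iteration times,
# where the k-th iteration takes 1.2k seconds.

n = 60

# Exact total: add up the time of each of the 60 iterations.
exact_seconds = sum(1.2 * k for k in range(1, n + 1))

# Integral approximation: integral of 1.2k dk from 0 to n = 0.6*n^2.
approx_seconds = 0.6 * n ** 2

print(exact_seconds, exact_seconds / 3600)    # ~2196 s, ~0.61 h
print(approx_seconds, approx_seconds / 3600)  # 2160 s, 0.6 h
```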

B: 2k^-2 < 0.0001, where k is the number of iterations (not seconds)
k^-2 < 0.00005
k^2 > 1/0.00005
k^2 > 20,000
k > 141.42, so the algorithm needs 142 iterations

check: if k = 141, error = 2(141^-2) = 0.0001006..., which is > 0.0001
if k = 142, error = 2(142^-2) = 0.0000992..., which is < 0.0001

time for 142 iterations ≈ ∫ 1.2k dk from 0 to 142 = 0.6(142^2) = 12098.4 seconds ≈ 3.4 hours
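
The same reasoning can be checked numerically: find the first iteration count whose error bound drops below 0.0001, then plug it into the time estimate. A minimal sketch under the same assumptions:

```python
# Find the smallest k with 2*k^(-2) < 0.0001, then estimate the running time
# for that many iterations using the integral approximation 0.6*k^2 seconds.

target = 0.0001

k = 1
while 2 * k ** -2 >= target:
    k += 1

total_seconds = 0.6 * k ** 2
print(k)                     # 142 iterations
print(total_seconds / 3600)  # ~3.36 hours
```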