# Anybody else do a math project for a science fair?

Registered User Posts: 321 Member
I have an idea for a math project to do for a science fair, but I feel like math projects are difficult to do because it's hard to do something original or have an experiment in place. I already have an idea in mind but I'm afraid it'll be too dry or boring. I'm not really sure of any practical applications of my idea either. I'd really like to hear from anybody who did a math project for a science fair, ideally one that's not focused on coding or programming. How did you make it interesting? And how did you keep it original?

## Replies to: Anybody else do a math project for a science fair?

• Registered User Posts: 4,747 Senior Member
edited December 2015
@DogsAndMath23 I did a math project in 11th grade, but yes, as you said, coming up with original results is hard because a lot is already known. However, there are still plenty of open problems accessible to HS students, such as the twin prime conjecture (there are infinitely many pairs of primes differing by 2) or the Hadwiger conjecture (any n-dimensional convex body can be covered by 2^n smaller bodies homothetic to the original), so you might at least be able to research recently proven results.
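Neither conjecture is part of the original project idea, but open problems like these are easy to poke at numerically. A minimal Python sketch (trial division only, so suitable just for small bounds) that lists the twin prime pairs below a limit:

```python
def is_prime(n):
    """Trial division; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def twin_primes_below(limit):
    """Return all pairs (p, p + 2) with both entries prime and p + 2 < limit."""
    return [(p, p + 2) for p in range(2, limit - 2)
            if is_prime(p) and is_prime(p + 2)]

# Six pairs below 50: (3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43)
print(twin_primes_below(50))
```

Tabulating how the count of twin pairs grows as the limit increases is the kind of empirical exploration that fits a science fair writeup, even though it proves nothing about the conjecture itself.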

Just curious, what idea are you thinking of?
• Registered User Posts: 321 Member
I've been working on a way of approximating functions under the condition that the derivative is easier to compute than the function itself. It's very much like linear approximation, except, well, not linear. Mostly I've been working on applying this to the natural logarithm, though I'm also working on extensions, like applying it to the inverse trig functions and antilogs and similar things.

Strangely, I can't find anything resembling my method online; that is, nothing that's just as simple and as accurate. With my method, I can approximate the natural log of any number from 1 to 1140 with an error of 0.0031 or less, just by using a simple formula. If I apply various log properties and use a lot of digits in the process (which technically wouldn't require a calculator, but probably would need one to stay free of arithmetic errors), then I can typically get the error below 0.0005 (5x10^-4).

You do need to memorize a few values for my method, or use a chart, but it's significantly less than trying to memorize log tables. It's only 14 values to memorize/refer to for 1-1140 (fewer for a smaller range), but you could also hypothetically memorize only two and derive the others-- and if you do much math you're likely to know one or both of those two already. Of course, if you can calculate the natural log then you can calculate logs of any base, although I'm still looking into the accuracy of that.

I'm pretty interested in taking this further so I feel like it would be nice to formalize it a bit and make it a science fair project, although I still fear that it'll be too dull.
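For context (this is not the poster's method, which is never spelled out in the thread): the ordinary linear approximation mentioned above, f(x) ≈ f(a) + f'(a)(x - a), applied to the natural log gives ln(x) ≈ ln(a) + (x - a)/a, since the derivative 1/x is trivial to compute. A minimal sketch using a small, hypothetical table of memorized anchor values:

```python
import math

def ln_linear_approx(x, anchors):
    """Tangent-line approximation ln(x) ~= ln(a) + (x - a)/a,
    where a is the nearest memorized anchor.
    `anchors` maps a -> ln(a) (the 'few values to memorize')."""
    a = min(anchors, key=lambda a: abs(x - a))
    return anchors[a] + (x - a) / a

# Hypothetical anchor table: powers of e, whose logs are trivial to recall.
anchors = {math.e ** k: float(k) for k in range(8)}

x = 100.0
approx = ln_linear_approx(x, anchors)
print(approx, math.log(x), abs(approx - math.log(x)))
```

For x = 100 this crude tangent-line version is off by roughly 0.23, far worse than the 0.0031 bound the poster reports, which is presumably what makes a nonlinear refinement interesting.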
• Registered User Posts: 4,747 Senior Member
@DogsAndMath23 wait, but we have the series ln (x+1) = x - x^2/2 + x^3/3 - ... where -1 < x <= 1. So to compute ln x for some large x, I feel like we could compute ln 2, store the value of ln 2 somewhere, and then keep dividing x by 2 until we obtain a number k <= 1. Am I missing something, or do you have a better/simpler solution in mind? (Here you only need to store ln 2, a counter holding the number of times we divided, and the value of ln k.)

However this method doesn't work for most other functions since we took advantage of log properties.

I guess one project idea you could try with a different method is: given your solution to approximate ln x (or some other function) to n significant places, how fast do both solutions converge (as a function of n)? Does one overtake the other? How much memory is required in each?
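As a starting point for that comparison, here is a minimal sketch (one possible setup, not from the thread) that measures the Taylor series' error in ln 2 as a function of the number of terms n:

```python
import math

def taylor_ln2(n_terms):
    # Partial sum of ln(1+x) = x - x^2/2 + x^3/3 - ... evaluated at x = 1.
    return sum((-1) ** (i + 1) / i for i in range(1, n_terms + 1))

for n in (10, 100, 1000):
    err = abs(taylor_ln2(n) - math.log(2))
    print(n, err)
```

At x = 1 the alternating-series error after n terms is bounded by the next term, 1/(n+1), so the error shrinks only linearly in n; that is the slow convergence discussed later in the thread.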

• Registered User Posts: 321 Member
My method does not use the Taylor series (at least, I don't think so; it doesn't really resemble that series, although there are similarities). I've been researching all the methods I could find online, although I haven't gone into much depth with them yet. The Taylor series is slow to converge, isn't it? And as you said, it only works for a pretty small range of x values, though log properties can be used to bring numbers into the range. I'm not sure I understand your entire approach to it, though, or how you would extend it to other numbers.

I didn't bother to use numbers below 1 because I figured log properties could be used to get it into the right form.

My method isn't a series, so it seems wrong to say it's "quickly converging," but the amount of work in evaluating mine seems far less than the number of Taylor-series terms you'd have to go through, plus the conversions you'd have to do, to get the same level of accuracy.

I can PM you more details if you'd like. (I've mentioned some of the details on a different website and don't want anybody from there to link up my accounts from here and there, I'm a bit paranoid in that way)
• Registered User Posts: 4,747 Senior Member
> I'm not sure I understand your entire approach to it, though, or how you would extend it to other numbers.
@DogsAndMath23 Here is the method I suggest for computing ln x, where x is some large number:
1. Compute ln 2 to some degree of precision using the Taylor series (the larger x is, the more precision you want).
2. Find an integer d and a number k <= 2 such that x = 2^d * k. Basically, we are dividing x by 2 a total of d times.
3. Compute ln k.
4. Since ln x = d ln 2 + ln k, we can find ln x.
• Registered User Posts: 4,747 Senior Member
In case you're interested, here is the Python code corresponding to the above solution:

```python
import math

def ln(x):
    # Reduce x into (0, 2] by repeated halving; each halving adds ln 2.
    if x <= 2.0:
        return taylor(x - 1)
    else:
        return ln2 + ln(x / 2.0)

def taylor(x):
    # Partial sum of ln(1+x) = x - x^2/2 + x^3/3 - ...
    partialSum = 0
    nTerms = 1000
    for i in range(1, nTerms):
        partialSum += ((-1) ** (i + 1)) * (x ** i) / float(i)
    return partialSum

ln2 = ln(2.0)  # computed once (via the Taylor branch) and reused above
```
This converged pretty slowly for me - ln(math.e) returned 1.000500249999877 and ln(math.e**5) returned 5.003501749999138. You could try experimenting with nTerms and determining how fast it converges, perhaps...
• Registered User Posts: 321 Member
Thanks, I think I understand what you're doing now. Does "nTerms = 1000" mean that it used 1000 terms of the Taylor series? I appreciate the code but I don't know how to code (I do intend to learn eventually though).
• Registered User Posts: 4,747 Senior Member
Does "nTerms = 1000" mean that it used 1000 terms of the Taylor series?
Yes - although technically 999 because Python range(m,n) makes the list m,m+1,...,n-1. Maybe I should've used range(1,nTerms+1)...

However, the advantage here is that if we store the value of ln 2, we only need to compute it once, and we only need to compute ln k once. The rest of the operations are just division by 2, and some additions.
• Registered User Posts: 71 Junior Member
I did a math project for a science fair.
This discussion has been closed.