Does .999...=1?

<p>Quote:
“.999…IS a real number, because it is equal to 1, which is a real number.”</p>

<p>eeer…isn’t it a…mm…sort of an assumption?
i mean, i don’t believe that .999… is equal to 1, so for me the above post makes no sense :stuck_out_tongue: (sry, River, i told ya im dumb ;))</p>

<p>ok…let’s say .999… is equal to 1
that means (and you said it) that .999… and 1 are “both representations of the same real number”
that gives me the thought that if there are two representations of one real number, there might be 3, 4, 10, 100, “infinity” representations of the same real number (1, in this case)…which makes me think that it could lead to some crazy things like 1=0 or something like that…he-he</p>

<p>Is there a proof that ANY real number can only have two representations?</p>

<p>Actually, I believe there is a whole branch of mathematics that deals with topics such as 1.00000…1 == 1.0 == .9999…</p>

<p>There is nothing to believe about .999… == 1.0. There is a definitive proof available.</p>

<p>Read my post on the other thread.</p>

<p>Each real number has one or more decimal representations.</p>

<p>.999… = 1 = 1.0 = 1.00 = 1.000</p>

<p>limit yes, number no, everyone is putting too much thought into this</p>

<p>Mborohovski, on the other thread you said “correct me if I’m wrong”. Yes, you are wrong, or at least in this case what you said doesn’t prove anything. 0.999… is defined as a limit, and that’s all there is to it. 0.999… = 1 for the same reason that 0.333… = 1/3, and it follows directly from the definition. You will fully understand if you are able to read through and understand the entire thread I linked. It’s somewhat interesting, but once you understand it, it becomes so trivial that it isn’t even worth mentioning.</p>

<p>2bad4u, what is it that you are trying to say? That 0.999… is a limit but not a number? If so, then you are wrong. 0.999… is a real number and represents the same real number as 1. I might as well say that 1 is not a number and it is just a mark on a page that’s sort of straight and has a little dash on the bottom.</p>

<p>So does 1.0000…0001 = 1?</p>

<p>Durran, there is no such number in the real number system. First define the number 1.000…1. You could define it as a pair (1.0, 1), which means 1.0 followed by infinitely many 0s, followed by 1. On the other hand, 0.999… is a well defined real number and is the same real number as 1. You might say that 1.000…1 = 1, but only because 1.000…1 is not a number: 1.0 followed by infinitely many 0s is 1. It never stops, so there is no place to put a one on the end. Unless, as I say, you extend the real number system, in which case you can call a number whatever you want; then it may be defined as a transfinite number which is greater than 1 and less than any other real number which is greater than one. That would not help you to generalize about the real number 1, however.</p>

<p>I should say that infinite decimals are well defined. 0.000…1 is not defined, because the notation … implies an infinite number of digits, so there can be no 1 at the end. But we could define it and use it to extend the real number system. In that case, certainly it would be equivalent to 0. Just as in my example above, where we define 0.000…1 as a number that is greater than 0 but less than every other positive number, this is a useless definition, as I stated.</p>

<p>I will bite also… To help you decide whether this is worth paying attention to, I’ll state my qualifications: I’m a junior and a pure math major at Caltech; I’ve taken three courses in advanced real and complex analysis, in which questions of this sort are treated.</p>

<p>Anyway, usually questions like this that come up on non-mathematical forums are nonsense, but this one is actually not. The answer is yes.</p>

<p>There is a formal definition of decimal expansions. If w is the integer part of the number, and d_k is the kth digit after the decimal point in base 10, then an unterminating decimal is defined in standard analysis to be</p>

<p>w + \sum_{k=1}^\infty d_k / 10^k.</p>

<p>For a decimal expansion to be well-defined, there must be a function from the natural numbers {1,2,3,…} to the digits {0,1,2,3,4,5,6,7,8,9} assigning to each place k a digit d_k. </p>

<p>A decimal expansion is said to terminate if there is an integer K such that for all k > K, d_k=0.</p>

<p>When w=0 and d_k = 9 for all k, easy calculus shows that this is a geometric series equal to 1. </p>
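To make the geometric-series claim concrete, here is a small Python sketch (an illustration, not a proof) that computes the partial sums with exact rational arithmetic and checks them against the closed form 1 - 10^(-n):

```python
from fractions import Fraction

def partial_sum(n):
    """Exact partial sum of sum_{k=1}^{n} 9/10^k, i.e. 0.9, 0.99, 0.999, ...
    computed with rational (not floating-point) arithmetic."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

# The closed form of the geometric partial sum is 1 - 10^(-n), so the
# difference from 1 shrinks by a factor of 10 with each extra digit.
for n in (1, 5, 20):
    s = partial_sum(n)
    assert s == 1 - Fraction(1, 10**n)
    print(n, s)
```

Each partial sum falls short of 1 by exactly 10^(-n), and that shortfall tends to 0, which is why the infinite series sums to 1.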

<p>As an aside, it is pretty easy to prove that for any set {d_k}, the sum is well-defined. (All non-terminating decimals converge.) However, it is also true that there are real numbers that have multiple decimal expansions in base 10. The multiplicative identity is equivalently represented by .999… and by 1. No real number has more than two equivalent decimal expansions. There is a proof, which is easy to construct (a good exercise) but longer than one or two lines… feel free to ask if you want to see it. </p>

<p>In fact, it turns out that there are two expansions if and only if one of them terminates; thus, the set of real numbers with two decimal expansions has measure zero on the real line, which means in some sense that it is very small.</p>
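The “two expansions iff one terminates” fact can be illustrated with a toy Python sketch (my own helper, not anything from the posts): given the digits of a terminating expansion, it produces the equivalent trailing-9 twin by decrementing the last nonzero digit.

```python
def nine_twin(digits):
    """Given the digits after the decimal point of a terminating, nonzero
    expansion (e.g. [2, 5] for 0.25), return (prefix, 9): the digits of the
    equivalent expansion that ends in repeating 9s. 0.25 -> 0.24999..."""
    d = list(digits)
    while d and d[-1] == 0:   # drop trailing zeros: 0.250 is the same as 0.25
        d.pop()
    d[-1] -= 1                # decrement last nonzero digit, then 9s repeat
    return d, 9               # (prefix digits, repeating digit)

print(nine_twin([2, 5]))      # 0.25  has the twin 0.24999...
print(nine_twin([1, 0, 0]))   # 0.100 has the twin 0.09999...
```

The construction only works when some digit is nonzero and the expansion terminates, which is exactly the content of the claim above.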

<p>And to answer Hriundeli’s worry, the existence of two decimal representations for some number does not create any paradoxical results, so no need to worry.</p>

<p>To answer this nonsense about .0000 with a 1 infinitely far away: that decimal expansion is not well-defined (see above for the definition), since there is no way to make it into a function from the natural numbers {1,2,3,…} to the digits {0,1,2,3,4,5,6,7,8,9}. I will ask you, what is the natural number in the domain which gives the image 1, and you will not be able to answer, showing that your decimal expansion does not have 1 in any place at all. You cannot say “the infinitely large natural number” is the element of the domain that maps to 1, because there is no maximal natural number – this follows directly from Peano’s axioms.</p>

<p>So, it is true that there is no need to form “beliefs” about this, since in the standard mathematical framework of real analysis there is a clear provable answer, namely: yes, 1.00 does equal .9999999…</p>

<p>If someone has alternative definitions for infinite decimal expansions different from the standard ones, please cite at least one textbook or scholarly article using that definition.</p>

<p>Hooray! I win! I am right… actually, so were most of you. </p>

<p>Anyway, Ben, I read through your post pretty quickly, so I wanted to clear this up: What you are saying is that you can’t have .00…1 because you can’t write it using infinite series of natural numbers?</p>

<p>If so, that means I have to think about stuff all over again.</p>

<p>first of all, .999… is a hyperreal number, not a real one</p>

<p>and then… assume .999…=1.0000000…
that means that 1.9999…=2.0000000… and so on</p>

<p>so…3.9999…/1.999999… will be equal to 2.0000000
we can go further and say that
119.999…/4.99999…/3.99999…/2.9999…/1.9999…=1.000…</p>

<p>take an even bigger number…you see, each time you divide, the gap between n and (n-1).9999999… becomes bigger and bigger</p>

<p>my point is that we ASSUME .999…=1 because, as the number of 9’s increases, the distinction between those two numbers becomes indistinguishable, BUT it never reaches 1.0000…</p>

<p>as it was said above, .9999… will reach 1.0000… at infinity, but since there is no such thing in real life as infinity, .999… will never “hit” 1.0000…</p>

<p>infinity is an assumption, that’s why .999…=1.000… is also an assumption</p>

<p>Ben, the number 1.000…1 is definable, it is just not a real number in that it doesn’t have a standard decimal expansion. I think this is all you are saying - but I could easily define a field in which the numbers 1.000…1, 1.000…2, etc. make sense and are well-defined.</p>

<p>EDIT: I will give an example just as a curiosity:</p>

<p>Let (a,b) be the number a followed by infinitely many 0’s and then the number b. For example, 1.000…1 = (1, 1) and 38.12000…32 = (38.12, 32), where arithmetic operations are defined in the traditional way on ordered pairs. This makes perfect sense mathematically, although it is only a curiosity.</p>

<p>so if you were to define 1.000…1 = 1 and 1.000…2 = 1, then how would you deal with the question is 1.000…1 greater than 1.000…2?</p>

<p>See my edited post. I could do this in any number of ways. The way which makes sense to me is to first compare a1 and a2, and then to compare b1 and b2, if we have (a1,b1) and (a2,b2). Or we could use magnitudes. It doesn’t much matter, but the former makes the most sense if we are to stay close to the normal metric which is used for real numbers.</p>

<p>So 1.0000…2 > 1.0000…1
But 2.0000…1 > 1.0000…2</p>
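The toy ordered-pair system described above can be sketched as a tiny Python class (purely a curiosity, as the posts say), comparing the a-parts first and then the b-parts, which reproduces the inequalities just stated:

```python
from functools import total_ordering

@total_ordering
class PairNum:
    """Toy extension element (a, b): 'a followed by infinitely many 0s,
    then b', ordered lexicographically (compare a first, then b)."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

    def __lt__(self, other):
        return (self.a, self.b) < (other.a, other.b)

    def __repr__(self):
        return f"{self.a}000...{self.b}"

assert PairNum(1.0, 2) > PairNum(1.0, 1)   # 1.0000...2 > 1.0000...1
assert PairNum(2.0, 1) > PairNum(1.0, 2)   # 2.0000...1 > 1.0000...2
```

`total_ordering` fills in the remaining comparison operators from `__eq__` and `__lt__`; any other ordering (e.g. by magnitude) could be swapped in, as the post notes.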

<p>sorry Ben</p>

<p>began writing this post before you posted yours, but yeah, </p>

<p>a) i am interested in the proof about 2 representations and
b) i still think that if it weren’t for infinity, .999… would never reach 1.000… (i said i’m dumb lol)</p>

<p>btw, could you send me the proof at:</p>

<p><a href="mailto:hriundel88■■■■■■.com">hriundel88■■■■■■.com</a>
thanks in advance</p>

<p>Infinity is just a conceptual thing; it doesn’t matter to the problem at all. If I am measuring the length of something, the value will be some sort of infinite decimal (since lengths are exact). This does not mean that for that length to make sense, we require some little math gnome to actually enumerate all of the infinitely many digits. Likewise, 0.999… is not just 0.9 with a math gnome adding a 9 on to it infinity times; 0.999… is just 0.999…, and we can express it directly mathematically in multiple ways.</p>

<p>We do not write 0.99999999 and keep drawing 9’s forever; we use the ellipsis ‘…’, which merely expresses a concept.</p>

<p>exactly
and one example would be
.999…=.9+.09+.009+… which for me is just keep adding 9’s :P</p>

<p>Hriundeli, you are wrong. There is no assumption. Infinity is not an assumption. Infinity is not a real number. It does not matter to this problem that there is no such thing as an infinitely large apple, for example. 0.9999… = 1 is provable directly from the definition of 0.9999…</p>

<p>(largely paraphrased from the thread I linked at the beginning, which everybody should have read)
You may have heard about hyperreals (and therefore presumably non-standard analysis), and perhaps about the completeness property and the archimedean property, but you do not understand the meaning of 0.999… .
The definition of 0.999… is \sum_{n=1}^\infty 9/10^n. How can you claim that this convergent series is not real?</p>

<p>The idea that [0.999… is a hyperreal number, not real] sounds appealing, but only because hardly anyone besides us severe math wonks has ever heard of hyperreals. </p>

<p>The idea is that hyperreal numbers extend the reals by including “infinitely small” numbers. That’s it! We say: 1 - 0.999… is one of these infinitely small numbers, rather than 0! </p>

<p>Sorry, but it ain’t so. By definition, 0.999… is the limit as n→∞ of 0.999…9 (n 9s). Each element of the sequence is a real number, and by the definition of limits in the reals, the limit of a convergent sequence of real numbers is a real number. So 0.999… is real, not hyperreal. </p>
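The limit definition can be made tangible with a short Python sketch (my own helper, using exact rationals): for any positive epsilon, it finds a finite n past which the sequence 0.9, 0.99, 0.999, … stays within epsilon of 1, which is precisely what “the limit is 1” means.

```python
from fractions import Fraction

def n_for_epsilon(eps):
    """Smallest n such that 1 - 0.999...9 (n nines) = 10^-n < eps,
    i.e. the witness demanded by the epsilon definition of the limit."""
    n = 1
    while Fraction(1, 10**n) >= eps:
        n += 1
    return n

# No matter how small epsilon is, a finite n always suffices.
for eps in (Fraction(1, 2), Fraction(1, 1000), Fraction(1, 10**12)):
    n = n_for_epsilon(eps)
    assert 1 - Fraction(10**n - 1, 10**n) < eps
    print(eps, "->", n)
```

No appeal to a completed “infinity” is needed: the definition only quantifies over finite n and ordinary positive epsilons.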

<p>No matter what larger set of numbers we decide to work in, decimal notation is defined for real numbers, and it is in the real numbers alone that this matter is settled.</p>