
0.999... == 1: why is it disturbing?



jpkotta
January 31st, 2009, 08:21 PM
This is a common "debate" on internet forums. I tried to see if it was on these forums, but I couldn't find it. Anyway, what always surprises me is how much it bothers people, especially because equations like 0.333... == 1/3 don't seem to. I want to know why that is.

My theory is that people learn about repeating decimals in grade school. Repeating decimals come out of the division algorithm they're taught. That algorithm "is division" and whatever it produces is the right answer. The algorithm works for numbers like 1/3, but doesn't work for 1/1 (it terminates immediately instead of generating a repeating decimal). Since the algorithm fails to produce 0.999... from 1/1, the equation seems wrong and unnatural.

What do you think?

zmjjmz
January 31st, 2009, 08:39 PM
Use the sum of infinite series to find the fractional value of .999....
.999... is equal to .9 + .09 + .009 + .0009...
This can be represented as a geometric sequence, where a1 = .9 and the ratio = .1
Using the formula for infinite series (a1/(1-r)), we can see that it comes out to be (.9/(1-.1)), the denominator is 1-.1 which is .9, and .9/.9 = 1.
Happy?
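
If you want to watch the partial sums close in on the formula's answer, here is a minimal Python sketch (exact fractions, purely for illustration):

from fractions import Fraction

# Partial sums of 9/10 + 9/100 + ... compared with the closed form
# a1/(1 - r) for a1 = 9/10, r = 1/10.
a1, r = Fraction(9, 10), Fraction(1, 10)
print("a1/(1-r) =", a1 / (1 - r))   # prints 1

s = Fraction(0)
for n in range(1, 8):
    s += a1 * r**(n - 1)            # add the n-th term, 9/10^n
    print(n, s, 1 - s)              # the gap to 1 is exactly 1/10^n

After 7 terms the gap is 1/10^7; it shrinks below any positive bound, which is exactly what the infinite sum being 1 means.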

jespdj
January 31st, 2009, 08:41 PM
I don't see why this should be disturbing. It's just mathematics. The trick is in the fact that the numbers repeat infinitely; that makes 0.99999... mean something different from what people think.

Nepherte
January 31st, 2009, 08:41 PM
In the limit, 0.999... equals 1; it feels rather natural to me.

jpkotta
January 31st, 2009, 08:45 PM
Use the sum of infinite series to find the fractional value of .999....
.999... is equal to .9 + .09 + .009 + .0009...
This can be represented as a geometric sequence, where a1 = .9 and the ratio = .1
Using the formula for infinite series (a1/(1-r)), we can see that it comes out to be (.9/(1-.1)), the denominator is 1-.1 which is .9, and .9/.9 = 1.
Happy?

I know it's true. But many people vehemently believe that's false, and it's surprising to me. I guess I'm kind of looking for a meta-discussion here.

smartboyathome
January 31st, 2009, 08:51 PM
I know it's true. But many people vehemently believe that's false, and it's surprising to me. I guess I'm kind of looking for a meta-discussion here.

So, here is why it makes sense to me. 1/infinity equals .000...1, right? And .999... is 1 - (1/infinity). Since 1/infinity is basically zero (it is the smallest number possible), you are basically doing 1 - 0, which is one.

The reason people believe it is false is because they haven't gotten far enough in math. They haven't learned about limits and such, which help in understanding this.

Nepherte
January 31st, 2009, 08:51 PM
I assume it's just the mathematical education someone received.

zmjjmz
January 31st, 2009, 08:56 PM
The reason people believe it is false is because they haven't gotten far enough in math. They haven't learned about limits and such, which help in understanding this.

I don't really know anything about limits :P
I just learned the infinite series thing in trig.

smartboyathome
January 31st, 2009, 08:59 PM
I don't really know anything about limits :P
I just learned the infinite series thing in trig.

In trig, you get an introduction to limits, so you learned enough to know about this. That infinite series will help you learn limits in the future. :)

Montblanc_Kupo
January 31st, 2009, 09:00 PM
I have the answer.

42.

jpkotta
January 31st, 2009, 09:15 PM
The reason people believe it is false is because they haven't gotten far enough in math. They haven't learned about limits and such, which help in understanding this.

Maybe. I work with a guy who admittedly hates math, but he managed to get through an electrical engineering undergrad program. He has supposedly had enough math to properly understand the equation, but I cannot convince him it's true. (You don't need to give me arguments to convince him, it's a lost cause.)

Every time I see this in a web forum, there are several people who remain unconvinced, whether through ignorance, denial, trollishness, or any number of reasons.

There is something about this equation that polarizes people, even people who have had some calculus. You don't see people screaming that sum(2^-n, n, 0, infty) == 2 is false, even though it is exactly the same thing.
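
For the curious, that claim is easy to check numerically; a minimal Python sketch:

from fractions import Fraction

# Partial sums of sum(2^-n, n, 0, infty): after the terms n = 0..9,
# the running total is 1023/512 and the gap to 2 is exactly 2^-9.
s = sum(Fraction(1, 2**n) for n in range(10))
print(s, 2 - s)   # prints: 1023/512 1/512

The gap halves with each extra term, so the limit, and hence the sum, is 2.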

-grubby
January 31st, 2009, 09:17 PM
1/3 == 0.333...
0.333... * 3 == 0.999...
0.999... == 1

One-third of one is 0.333... Multiplying that by 3 gives 0.999...; therefore 0.999... and 1 are equivalent. Doesn't seem hard to believe to me.

grappler
January 31st, 2009, 09:19 PM
I have taught in math departments for many years. In my first lecture in Calculus 101 I ask the students to vote on whether 1 = 0.9999... Typically the vast majority vote "no". I then give three "proofs" that they are in fact equal, and at the end of the class have another vote. Still the majority vote "no". One proof has essentially been mentioned already (summing the infinite series); here are two others:

1. Call x = 0.99999..., and multiply it by 10. Everyone knows that 10x = 9.9999... Now subtract 0.9999... and we get 9x = 10x - x = 9. Solving for x, we obtain x = 1.


2. Not really a proof, but something that gets the students (hopefully) to question their stance. If indeed x is not equal to 1, then there must be some number between x and 1, e.g. (1+x)/2. What is its decimal expansion?

This non-uniqueness of representation is typical of decimal-type expansions. So, for example, in binary 1= 0.11111.....
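
The binary claim is easy to check with partial sums; a quick Python sketch (exact fractions, just for illustration):

from fractions import Fraction

# Partial sums of 0.111... in base 2, i.e. 1/2 + 1/4 + 1/8 + ...
# The gap to 1 after n terms is exactly 2^-n.
s = Fraction(0)
for n in range(1, 11):
    s += Fraction(1, 2**n)
    print(n, s, 1 - s)

The same non-uniqueness argument as in base 10 applies: nothing fits between the limit of these partial sums and 1.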

Bölvağur
January 31st, 2009, 09:30 PM
In base three:

three = 10

x = 1 / 10

( x = 0.1 )

answer = three · x
(answer = 1)


In base pi we get very interesting numbers.


MaxIBoy
January 31st, 2009, 09:51 PM
Both of these are true, given an infinite number of decimal places.

One can derive it this way:

For any block of digits, divide it by the same number of nines (so 1/9, 10/99, 675/999, 1234567/9999999, etc.)
You will get that number to be a repeating decimal (so 823/999 = 0.823823823823...)
This is even true for 9/9, which is obviously equal to 1 as well as 0.999999999999...
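
Here's a quick way to check the nines pattern by long division; a Python sketch (decimal_digits is just a made-up helper name):

from fractions import Fraction

def decimal_digits(frac, n):
    # First n decimal digits of a fraction in [0, 1), by long division.
    rem, digits = frac.numerator, []
    for _ in range(n):
        rem *= 10
        digits.append(str(rem // frac.denominator))
        rem %= frac.denominator
    return "0." + "".join(digits)

for num, den in [(1, 9), (10, 99), (823, 999)]:
    print(num, "/", den, "=", decimal_digits(Fraction(num, den), 12), "...")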



Now, can anyone tell me why n/0 doesn't equal ∞?

MaxIBoy
January 31st, 2009, 09:57 PM
So, here is why it makes sense to me. 1/infinity equals .000...1, right?

Hey, you need to talk to a certain math teacher I know...

smartboyathome
January 31st, 2009, 10:08 PM
Now, can anyone tell me why n/0 doesn't equal ∞?

Because mathematicians have declared that you can't divide something into 0 equal parts. For example, you can't divide 1 whole pie into 0 parts, can you? At least, it makes sense to me.


Hey, you need to talk to a certain math teacher I know...

I know that 1/infinity could be zero, but technically infinity is just a symbolic representation of the largest number possible, so that would mean that 1/infinity is another representation of the smallest number possible greater than zero (also known as the limit of x as x approaches zero from the right).

GeneralZod
January 31st, 2009, 10:11 PM
I know it's true. But many people vehemently believe that's false, and it's surprising to me. I guess I'm kind of looking for a meta-discussion here.

There's a pretty good discussion about people's rejection of the fact here:

http://en.wikipedia.org/wiki/0.999#Skepticism_in_education

MaxIBoy
January 31st, 2009, 10:18 PM
Because mathematicians have declared that you can't divide something into 0 equal parts. For example, you can't divide 1 whole pie into 0 parts, can you? At least, it makes sense to me.

How do you divide something into 0.25 equal parts? 1/0.25 = 4, though. Math doesn't have to be intuitive.




I know that 1/infinity could be zero, but technically infinity is just a symbolic representation of the largest number possible, so that would mean that 1/infinity is another representation of the smallest number possible greater than zero (also known as the limit of x as x approaches zero from the right).

The word for any number over infinity is infinitesimal.

There are two ways of thinking of infinity. One way of thinking about it is like some kind of disease-- introduce it into a formula, and it permeates everything. (Try finding the trajectory of a 9-pound projectile with an infinite starting velocity at 45° above the horizontal and normal earth gravity. Pretty soon, every single number you're working with will be infinite.)

Another way of thinking of it is as the inverse of zero, which is, in fact, the largest number possible (just as zero has the smallest possible absolute value.) For the same reason that ∞/12 = ∞, 0/12 = 0.

gn2
January 31st, 2009, 10:52 PM
Infinity isn't a number, but there is an infinite number of numbers.

And if you think about it too much your head will go numb.

jpkotta
January 31st, 2009, 10:52 PM
There are two ways of thinking of infinity. One way of thinking about it is like some kind of disease-- introduce it into a formula, and it permeates everything. (Try finding the trajectory of a 9-pound projectile with an infinite starting velocity at 45° above the horizontal and normal earth gravity. Pretty soon, every single number you're working with will be infinite.)

Another way of thinking of it is as the inverse of zero, which is, in fact, the largest number possible (just as zero has the smallest possible absolute value.) For the same reason that ∞/12 = ∞, 0/12 = 0.

That involves thinking of infty as a number, which it isn't. Almost always, you should be talking about limits as something increases without bound. However, once you're familiar with certain calculations, treating infty as a number is useful, as long as you keep in mind that you're really working with limits. The only context I know of where infty really is treated as a number is transfinite arithmetic, but there you have different classes of infty, and those aren't real numbers anyway.

saulgoode
January 31st, 2009, 11:04 PM
Maybe. I work with a guy who admittedly hates math, but he managed to get through an electrial engineering undergrad program. He has supposedly had enough math to properly understand the equation, but I cannot convince him it's true.
Electrical engineers recognize that there is a distinction between an infinite series and the limit of a convergent infinite series. If you wish to state that the notation 0.999... is defined as the limit of the series, then fine; but it is erroneous to equate a decimal point followed by an infinite number of "9"s to unity (just as it is incorrect to equate 1/infinity to "0").

jpkotta
January 31st, 2009, 11:12 PM
Electrical engineers recognize that there is a distinction between an infinite series and the limit of a convergent infinite series. If you wish to state that the notation 0.999... is defined as the limit of the series, then fine; but it is erroneous to equate a decimal point followed by an infinite number of "9"s to unity (just as it is incorrect to equate 1/infinity to "0").

How should we interpret 0.999...? The usual interpretation is that decimal notation is a compact way of writing a series, and this doesn't change for repeating decimals.

Maybe this is a big reason why people hate the equation. They don't think of decimal expansions as series; they think of them as "the number" rather than a particular way to write the number.

saulgoode
January 31st, 2009, 11:27 PM
Maybe this is a big reason why people hate the equation. They don't think of decimal expansions as series; they think of them as "the number" rather than a particular way to write the number.
I would submit that they DO think of them as a series (an infinite one at that), but not a limit to which the series converges.

Would you argue that 1/infinity equates to zero?

jpkotta
January 31st, 2009, 11:45 PM
I would submit that they DO think of them as a series (an infinite one at that), but not a limit to which the series converges.

Would you argue that 1/infinity equates to zero?

AFAIK, the only way to interpret an infinite series is as a limit. I always see them defined more or less like this:

sum(a[n], n, 0, infty) := limit of s[n] as n -> infty

where

s[n] = sum(a[i], i, 0, n)
So s[n] is always a finite sum, and the series is recast as a sequence, which is easy to reason about with limits. So the only way to think about it is as a limit.

If 1/infty is shorthand for lim 1/n as n -> infty, then yes, it is equal to 0. But there isn't a common interpretation of 1/infty like there is for decimal expansions, at least not one that is correct. I think the difference is that with 1/infty, there is an explicit infty being used like a number. In an infinite series, it is used properly: it says we can take as many terms as we like, and we are not actually evaluating an expression with an infty in it.
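
To make that concrete, here is a minimal Python sketch of the epsilon game for 0.999... (the epsilon value is arbitrary, just for illustration):

from fractions import Fraction

# s(n) is the finite partial sum 0.9...9 with n nines; the gap to 1 is 10^-n.
def s(n):
    return sum(Fraction(9, 10**i) for i in range(1, n + 1))

epsilon = Fraction(1, 10**6)
n = 1
while 1 - s(n) >= epsilon:
    n += 1
print(n, "terms suffice; gap =", 1 - s(n))   # 7 terms, gap = 1/10^7

Whatever epsilon you pick, some finite n gets the partial sum within epsilon of 1; that is the whole content of "the limit is 1".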

tom66
January 31st, 2009, 11:53 PM
Well, I have told it to various people. They didn't believe it, so I told them to research it, and they changed their minds. Here's a little proof that you can explain quite easily without mixing decimals and fractions:

Between any two distinct numbers, there must be an infinite number of other numbers. Take for example 1 and 2. Consider how many different numbers there are between those two; you'll quickly find it's infinite.

However, this fails for 0.999... and 1. You can't write down a number between them; such a number doesn't (and can't possibly) exist. Therefore, the two must be equal.
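
You can watch the squeeze happen on finite truncations; a Python sketch (exact fractions, illustrative only):

from fractions import Fraction

# For a finite truncation x = 0.9...9 (n nines), the midpoint (1 + x)/2
# really is between x and 1: it starts with n nines and then a 5.
for n in (3, 5):
    x = 1 - Fraction(1, 10**n)
    print(n, float(x), float((1 + x) / 2))

For n = 3 this prints 0.999 and 0.9995. With infinitely many nines, every digit position is already a 9, so a between-number has nowhere left to differ.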

saulgoode
February 1st, 2009, 12:46 AM
If 1/infty is shorthand for lim 1/n as n -> infty, then yes, it is equal to 0. But there isn't a common interpretation of 1/infty like there is for decimal expansions, at least not one that is correct.
I believe that the reason why many find the equality expressed as "0.999... = 1.0" disturbing is that the chosen notation of using ellipses does not inherently include the concept of a "limit". There is an association of repetition with the ellipses, and even an implication that it is never-ending. But the step from that "common interpretation" to the mathematical definition of a "limit" being evaluated is not intuitive from the notation -- and it is precisely that arbitrary step of having the notation signify a limit which is required to make the equality valid.

Let's say I offer a new mathematical notation "+++" and then state that:


1+++ = 4

basing this on "x+++" being a function "x (+1) (+1) (+1)". This would be fairly intuitive and, given the definition I provide for the "+++" notation, the equation is correct.

However, if I instead stated that:


1+++ = 7

based on the "x+++" meaning "x (+1) (+2) (+3)" it would not be so intuitive, even though the equation, by virtue of this alternate meaning of the "+++" notation, is correct.

I wouldn't have proved any wonderful paradox; all I've done is choose a notational meaning which by definition suits the equation.

kavon89
February 1st, 2009, 12:55 AM
.999... == 1 proof:

lim(m -> ∞) sum(9/10^n, n, 1, m) = 1
0.9999... = 1

Let x = 0.9999...
10x = 9.9999...
10x - x = 9.9999... - 0.9999...
9x = 9
x = 1.


source: http://www.blizzard.com/us/press/040401.html

PC-XT
February 1st, 2009, 01:28 AM
I see it different ways. I see how they are equal enough to call equal. I also see how sometimes they could be different enough to be almost unequal, which is a strange idea in math, though present.
For instance, 0^0 == 1 (that is, 0 to the power of 0), as I have heard many, including Donald Knuth, say. Yet, in limits at least, it has different values.
It is like quantum particles. They can do seemingly impossible things if you can look small enough: They can be in two places at once in a quantum state, and somehow be in total sync or make the universe split or some other thing. Empty space must generate temporary energy and particles to follow known laws, so you really can't have a total vacuum.
Light can go faster than its speed. (If you speed it up, you also slow it down, like stretching it out, with its middle going at light speed.) Its speed is relative, so it appears to go at different speeds relative to other places, but each place sees a speed difference of light speed.
Someone above said 1/∞ == 0.000...1 and it is so close to 0 that nothing really could tell the difference. If there is no number in between, they may be considered equal, though they are different by 1/∞ (which may also be considered equal to 0.)
I don't think the 0.999... == 1 question matters enough to bother denying, except perhaps in 2/∞ of the cases. ;)
In long division, 0.999... is usually the result of something that is supposed to be 1, but this could be due to rounding error of 1/∞. 1/3 == 0.333... when rounded ends with a remainder of 1. If you multiply this by 3, it results in 0.999... ending with a remainder of 3, which then can be divided by 3 giving 1, which then starts a carry sequence bringing the result up to 1. Rounding a number to an infinite number of places would give the same number (as if it wasn't rounded at all). Then, if you say that 1/∞ == 0 then there is no rounding error, and 0.999... == 1, but if you do make a difference, it is in both. (1 - 0.999... = 1/∞)
Perhaps one could use another symbol to determine the difference between 0 and 1/∞? Perhaps === could be borrowed from variant comparison? :) This sounds like numbers have a basic unit of 1/∞, which I am not sure about, since what is (1/∞)/2? Comparison in floating point is often just a test to see if the difference is within a limit.
Actually, this particular question is in how to represent the number in decimal, which is for humans to read. As long as humans can read it to gain the understanding of the one who wrote it, then it works. Since so many people learn the numbers are equal, it can be considered safe to assume this for many audiences, but also since it is still unbelievable for so many people, it is also safe to consider that some might not believe it.

saulgoode
February 1st, 2009, 01:37 AM
lim(m -> ∞) sum(9/10^n, n, 1, m) = 1
0.9999... = 1

Let x = 0.9999...
10x = 9.9999...
10x - x = 9.9999... - 0.9999...
9x = 9
x = 1.

The notational definition of the ellipses (i.e., the "conversion" of a non-real infinite series to the real-number convergence limit of that series) is what makes that proof work. Otherwise the part I highlighted in bold is relying upon the substitution property of equality which, while proven for real numbers, has not been proven for quantities involving infinity.

jpkotta
February 1st, 2009, 03:49 AM
I believe that the reason why many find the equality expressed as "0.999... = 1.0" disturbing is that the chosen notation of using ellipses does not inherently include the concept of a "limit". There is an association of repetition with the ellipses, and even an implication that it is never-ending. But the step from that "common interpretation" to the mathematical definition of a "limit" being evaluated is not intuitive from the notation -- and it is precisely that arbitrary step of having the notation signify a limit which is required to make the equality valid.


Agreed. This goes back to my theory about the output of the division algorithm and why 0.333... is considered kosher.


The notational definition of the ellipses (i.e., the "conversion" of a non-real infinite series to the real-number convergence limit of that series) is what makes that proof work. Otherwise the part I highlighted in bold is relying upon the substitution property of equality which, while proven for real numbers, has not been proven for quantities involving infinity.

I don't see that problem. The infinite series is a limit; there is no sum with an infinite number of terms; it is a limit and nothing more. And since the series converges, it is a real number, so no problems there. It is sort of an abuse of notation, but every step has a well defined counterpart if we were working with the series instead of the decimal expansion.

yaaarrrgg
February 1st, 2009, 04:06 AM
Really there is no such thing as truth in mathematics. The most we can say is that something follows from a set of axioms or does not.

You could define a system where ".999... == 1" was true, and another system of axioms where the same statement was false.

Though you would have to be careful with the definition of the symbol "...". For example you'd have to reject "1/3 == .333..." or ".333... == .3333..." instead.

But the general idea that any mathematical truth is carved in stone (like "the angles in a triangle sum to 180 degrees", "two parallel lines never intersect", etc.) is only due to the fact that it was taught as dogma rather than as an axiomatic system of rules, most of which are pulled out of thin air.

IMO, at bottom, if we ask "what is mathematics?", it resembles computer science more than anything else. That is: getting a system of axioms to be consistent is a lot like getting a program to compile. It requires some tweaks here and there, until one can develop a consistency proof to show that at least one thing cannot be derived from the system of axioms.

My $0.1999999.... :)

igknighted
February 1st, 2009, 05:02 AM
Because mathematicians have declared that you can't divide something into 0 equal parts. For example, you can't divide 1 whole pie into 0 parts, can you? At least, it makes sense to me.




I know that 1/infinity could be zero, but technically infinity is just a symbolic representation of the largest number possible, so that would mean that 1/infinity is another representation of the smallest number possible greater than zero (also known as the limit of x as x approaches zero from the right).

Close, but not quite. Infinity doesn't represent anything, and you can't do numeric math with it. You can, however, use it in limits (as x -> infinity, for instance). There is no 1/infinity; what you really mean mathematically is 1/x as x -> infinity. Since infinity isn't a number, 1/x isn't defined at infinity, so you can only describe what it does there with a limit. And in this case, that limit is 0.
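
A quick numeric illustration of that limit (Python sketch):

# 1/x shrinks below any positive bound as x grows; that is what
# "the limit of 1/x as x -> infinity is 0" means.
for x in (10, 10**3, 10**6, 10**9):
    print(x, 1 / x)

No finite x ever gives exactly 0, but the values go below any bound you name.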

As a bonus brainteaser, let's depart from this calculus and get to something a little more computer science related. Take the set {1,2,3,...}. We can all agree that it is a subset of {0,1,2,3,...}, as 0 is a member of the second set but not the first. However, mathematically speaking, these sets are the same size... how?

saulgoode
February 1st, 2009, 05:15 AM
I don't see that problem. The infinite series is a limit; there is no sum with an infinite number of terms; it is a limit and nothing more.
I disagree. I can conceive of and work with sums with an infinite number of terms whether or not they converge; just as I can work with the concept of infinity though it is not a number. These sums are legitimate concepts even when there is no real number boundary value which they are guaranteed not to exceed.


And since the series converges, it is a real number, so no problems there.
The limit is a real number. If you equate a limit to its series then I see a problem. The fact that you can determine a real number limit of a sum of an infinite number of terms does not mean the sum is that real number.


It is sort of an abuse of notation, but every step has a well defined counterpart if we were working with the series instead of the decimal expansion.

I agree it is somewhat an abuse of notation. If it were stated as 'lim[0.999...]==1', I would not find the equation "disturbing" at all. It is only when the inference is made from the notation that the limit itself equates with its infinite sum that I protest. By the same logic that I would object to the lim[1/infinity] being equated to 1/infinity, I protest the idea that the limit of a sum with an infinite number of terms is the same as that sum.

AlbinoButt
February 1st, 2009, 05:22 AM
The problem with dealing with infinities is that it's all theoretical; there's no concrete, real-world infinity that we can use, so it's all just a game of logic. My small, unschooled brain uses logic as best it can: every 9 we add to the sequence makes the difference from one smaller and smaller:


0.9
+.1
-----
1.0


0.99
+.01
--------
1.00

0.999
+.001
---------
1.000

0.9999
+.0001
-----------
1.0000


The more nines you add to the top line, the smaller the number you need to add to get one. If you keep adding more and more 9s (forever), you'll have to keep adding zeros on the bottom (forever). But no matter how many 9s you append, it will never truly be equal to 1, since you still have to add an infinitesimally small amount to it.

MaxIBoy
February 1st, 2009, 06:59 AM
Interestingly, 0.1 is a repeating fraction in binary. Try entering 0.1 in gcalctool, then converting it to binary. You will get something like 0.0001100110011..., with the 0011 block repeating forever.


There is probably at least one number system where 1 can only be expressed as a repeating number.
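
If you don't have gcalctool handy, a few lines of Python reproduce it (binary_digits is a made-up helper name, sketch only):

from fractions import Fraction

def binary_digits(frac, n):
    # First n binary digits of a fraction in [0, 1), by long division in base 2.
    rem, digits = frac.numerator, []
    for _ in range(n):
        rem *= 2
        digits.append(str(rem // frac.denominator))
        rem %= frac.denominator
    return "0." + "".join(digits)

print(binary_digits(Fraction(1, 10), 20))   # 0.00011001100110011001, the 0011 repeats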

jomiolto
February 1st, 2009, 07:29 AM
As a bonus brainteaser, lets depart from this calculus and get to something a little more computer science related. Take the set {1,2,3,...}. We can all agree that it is a subset of {0,1,2,3,...}, as 0 is a member of the second set, but not the first. However, mathematically speaking, these sets are the same size... how?

Well, that's simple: {1,2,3,...} is the same set as {1,2,3,4,...}, which has 4 + infinite members -- that's the same amount of members {0,1,2,3,...} has! Magic! :p

Seriously, though, I don't see why this should be a problem; no sane person ever uses the decimal system for anything important anyway ;) All the cool stuff happens with actual numbers, such as fractions, π, sqrt(2), etc. :biggrin:

EV500B
February 1st, 2009, 07:32 AM
I vote: 0.333... == 1/3 and 0.999... != 1
1/3 and 0.333... appear to be equal, but 0.999... seems to be different from 1.
Why?
(Trying to make it as simple as possible)
0.999... != 1
but
0.999... + 0.000...1 == 1
That's what I think!

I was too late; the question was already answered.

igknighted
February 1st, 2009, 07:47 AM
I vote: 0.333... == 1/3 and 0.999... != 1
1/3 and 0.333... appear to be equal, but 0.999... seems to be different from 1.
Why?
(Trying to make it as simple as possible)
0.999... != 1
but
0.999... + 0.000...1 == 1
That's what I think!

I was too late; the question was already answered.

The only answers that make any sense are that both are equal, or both are not. 1/3 * 3 is unquestionably 1, yes? If so, then .333... * 3 must also equal one. But do the arithmetic, and .333... * 3 = .999... (start as far right of the radix as you want and do it 2nd-grade style... it equals .9999999...).

You cannot truly represent 1/3 with a finite decimal number. .3 (with a bar over the 3) represents a hypothetical infinite sequence of digits. But no matter how many 3's you actually write, you never quite get 1/3. It is the .333... notation that is defined as representing 1/3.
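
Here's the truncation arithmetic spelled out in a small Python sketch (exact fractions for illustration):

from fractions import Fraction

# Multiply finite truncations 0.3...3 (n threes) by 3: you get 0.9...9
# (n nines), and the shortfall from 1 is exactly 1/10^n.
for n in (4, 8):
    t = Fraction(10**n // 3, 10**n)   # 0.3333 resp. 0.33333333
    print(n, float(3 * t), 1 - 3 * t)

Only the infinite notation .333..., read as a limit, closes that shortfall.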

smartboyathome
February 1st, 2009, 07:49 AM
I vote: 0.333... == 1/3 and 0.999... != 1
1/3 and 0.333... appear to be equal, but 0.999... seems to be different from 1.
Why?
(Trying to make it as simple as possible)
0.999... != 1
but
0.999... + 0.000...1 == 1
That's what I think!

I was too late; the question was already answered.

Technically that's true, but for the sake of mathematicians' sanity, it is accepted that it is close enough to one to be called one.

phrostbyte
February 1st, 2009, 10:30 AM
Now, can anyone tell me why n/0 doesn't equal ∞?

Because 0 multiplied by anything is still 0. No matter how many multiplications you do, you'll never reach a value other than 0, so division by zero, at least in the real number system, is impossible.

pp.
February 1st, 2009, 10:39 AM
The expression "0.999... == 1" (and similar ones) is neither disturbing nor undisturbing. It's just wrong.

The expression "1" is a number, hence a well-defined magnitude. The expression "0.999..." outlines a never ending procedure. A magnitude is not the same as a procedure.

Of course, if you keep on applying the procedure implied by "0.999..." a sufficient number of times, the absolute difference between the result and 1 will become smaller than any arbitrarily chosen positive value.

It's just a "typographical artefact", a shortcoming of the number system.

oedipuss
February 1st, 2009, 11:59 AM
Why is it a procedure?
It can represent a procedure, but it is also a number with a defined magnitude: a different representation of 1.

Any number (infinite digits or not) can be seen as a procedure, but they all have an exact magnitude. I'm thinking of pi, for example. Infinite series (irrational too, you can't write it down), but as a number it's precise and exact, as can be seen in any circle.

techmarks
February 1st, 2009, 12:25 PM
When I just see this:

0.999... = 1

I would say that's wrong.

BUT...

This thread is somewhat confusing: it seems you are talking about an infinite sequence and a geometric series at the same time.

The series in question is this:

0.9 + 0.09 + 0.009 + 0.0009 + 0.00009 + ...

Partial sums give us the value 0.9999... to however many 9's you like.

This is a geometric series: the first term is 0.9 and the common ratio between the terms is 0.1. Since the ratio is less than 1, calculus tells us that the series converges to a real number.

So, using a well-known formula from calculus, we can calculate the infinite sum:

0.9 / (1 - 0.1) = 1

So the infinite sum of the series is 1. In other words, the infinite sum

0.9 + 0.09 + 0.009 + 0.0009 + ...

is equal to one.

But if you just write a thing like

0.9999... = 1

that looks wrong to me.


pp.
February 1st, 2009, 01:50 PM
I'm thinking of pi, for example. Infinite series (irrational too, you can't write it down), but as a number it's precise

Quite: you cannot represent the value of pi exactly in terms of digits, although the value of pi is, of course, anything but undefined.

Niksko
February 1st, 2009, 02:00 PM
Randall Munroe of xkcd did a good thing banning this discussion from the xkcd forums.

Even so, I firmly believe that 0.999... taken to mean 0. with an infinite number of nines after it is equal to 1. If you ask mathematicians this, they will say the same thing. I figure, why argue with the people who do this stuff for a living? I'm not following it blindly; I understand the reasoning behind it, but this is what I tell people who don't believe me.

As to why some people find it disturbing (I don't), my uncle, who has a PhD in maths, said it was because people need to get away from the notion that there is only one way to represent a number.

techmarks
February 1st, 2009, 02:19 PM
Randall Munroe of xkcd did a good thing banning this discussion from the xkcd forums.

Even so, I firmly believe that 0.999... taken to mean 0. with an infinite number of nines after it is equal to 1.


I suppose if 0.999... with the three dots at the end is taken to mean the infinite series I mentioned above, then it is correct.

Still, the sigma notation for the infinite sum of the series seems much clearer to me. We can use either notation in the end; once we know what we are talking about, it is the same.

glotz
February 1st, 2009, 03:03 PM
Nice discussion.

Another funny topic would be 1=2 for sufficiently large values of 1.

jimi_hendrix
February 1st, 2009, 03:09 PM
Well, first do the division 1/3... you will find it equates to .333...

However, 1/1 != .999...; it equals 1.

Problem solved.
</thread> :)

sisco311
February 1st, 2009, 03:35 PM
I vote: 0.333... == 1/3 and 0.999... != 1
1/3 and 0.333... appear to be equal, but 0.999... seems to be different from 1.
Why?
(Trying to make it as simple as possible)
0.999... != 1
but
0.999... + 0.000...1 == 1
That's what I think!

I was too late; the question was already answered.


Technically that's true, but for the sake of mathematicians' sanity, it is accepted that it is close enough to one to be called one.

False.
0.000...1 = mathematical nonsense

sisco311
February 1st, 2009, 04:05 PM
As a bonus brainteaser, let's depart from this calculus and get to something a little more computer science related. Take the set {1,2,3,...}. We can all agree that it is a subset of {0,1,2,3,...}, as 0 is a member of the second set but not the first. However, mathematically speaking, these sets are the same size... how?

Easy one.

f:{0, 1, 2, ...} -> {1, 2, ...}
f(n)=n+1 is bijective. Q.E.D.

In other words, in Hilbert's Grand Hotel, it is possible to make room for a new client, even if every room is occupied. ;)
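
The pairing is easy to visualize with a couple of lines of Python (a sketch of the first few values):

# f(n) = n + 1 pairs {0, 1, 2, ...} with {1, 2, 3, ...}, one to one and onto:
# guest n moves to room n + 1, and every room gets exactly one guest.
print([(n, n + 1) for n in range(8)])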

aaaantoine
February 1st, 2009, 07:58 PM
This is my understanding. Take a simple calculator, and do these calculations.

1/3 = .33333333
2/3 = .66666667

.33333333 + .66666667 = 1

But, of course...

.33333333 + .33333333 = .66666666, and adding another .33333333 gives .99999999

0.99999999 is not 1

But 0.999... is 1, at least according to some mathematicians' answers above.

I think it's better to just express these numbers as 1/3, 2/3, and 3/3 (1).
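
You can mimic that calculator with Python's decimal module (a sketch, assuming 8 significant digits):

from decimal import Decimal, getcontext

getcontext().prec = 8                  # behave like an 8-digit calculator
third = Decimal(1) / Decimal(3)        # 0.33333333
two_thirds = Decimal(2) / Decimal(3)   # 0.66666667 (rounded up)
print(third + two_thirds)              # 1.0000000
print(third + third + third)           # 0.99999999

The rounded-up 7 in 0.66666667 is what rescues the first sum; the second sum never gets it.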

gjoellee
February 1st, 2009, 08:03 PM
If we take a look into it, there is no such thing as 1/3 because we can't get the exact number. 0.3333333333333333..... it has no end!

qazwsx
February 1st, 2009, 08:20 PM
It depends on the measurement's accuracy, when the source is experimental or requires testing.

In pure mathematics I never mix them up.

:popcorn:

sisco311
February 1st, 2009, 08:23 PM
If we take a look into it, there is no such thing as 1/3 because we can't get the exact number. 0.3333333333333333..... it has no end!

So, you can't cut a cake into 3 equal pieces, because there is no such thing as 1/3 = .333... = 0.333... = 0.(3).

And if we look into it deeper, there is no such thing as 1.
1=1.000... it has no end!!!

Rinzwind
February 1st, 2009, 08:23 PM
If we take a look into it, there is no such thing as 1/3 because we can't get the exact number. 0.3333333333333333..... it has no end!

I disagree.
If you follow your logic, then you would also say that there is no such thing as pi or e.

1/3 exists. Its notation is 0. followed by repeating 3's and then three dots: 0.333..., or 0.3... if you are lazy, or 0.33333333... if your 3 key sticks on your keyboard :)

Very simple. Mathematicians came to an understanding that 1/3 exists because of the way they think about division and multiplication when using integers.

igknighted
February 1st, 2009, 08:33 PM
I disagree.
If you follow your logic, then you would also say that there is no such thing as pi or e.

1/3 exists. Its notation is 0. followed by repeating 3's and then three dots: 0.333..., or 0.3... if you are lazy, or 0.33333333... if your 3 key sticks on your keyboard :)

Very simple. Mathematicians came to an understanding that 1/3 exists because of the way they think about division and multiplication when using integers.

1/3 is precise. The problem is a limitation in the decimal (base 10) number system. Switch to a base 3 number system and you get 0.1, an exact representation. But good luck representing 1/2 exactly in base 3.
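
A quick check of both claims (Python sketch; digits_in_base is a hypothetical helper):

from fractions import Fraction

def digits_in_base(frac, base, n):
    # First n digits of a fraction in [0, 1) in the given base, by long division.
    rem, out = frac.numerator, []
    for _ in range(n):
        rem *= base
        out.append(str(rem // frac.denominator))
        rem %= frac.denominator
    return "0." + "".join(out)

print(digits_in_base(Fraction(1, 3), 3, 8))   # 0.10000000 : exact in base 3
print(digits_in_base(Fraction(1, 2), 3, 8))   # 0.11111111 : repeats forever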

mkendall
February 1st, 2009, 08:39 PM
Now, can anyone tell me why n/0 doesn't equal ∞?

If you approach 0 from the positives, that is, you look at increasingly smaller numbers (1, 0.1, 0.01, 0.001, ...) then it would appear that n/0 should be ∞. However, if you approach 0 from the negatives, (-1,-0.1,-0.01,-0.001, ...) then it would appear that n/0 should be -∞. n/0 cannot be both ∞ and -∞, therefore it is undefined.
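
A numeric illustration of the two-sided problem (Python sketch):

# Approaching 0 from the two sides gives reciprocals of opposite sign,
# so no single value works for 1/0.
for x in (0.1, 0.001, 0.00001):
    print(x, 1 / x, -x, 1 / -x)

The right side races toward +∞ and the left toward -∞.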

(And don't try giving me any Schroedinger crap about it. Few people today realize that his cat thought experiment was an argument against dualism, not for it.)

gn2
February 1st, 2009, 08:42 PM
Perhaps explaining the significance of the three little dots might help.

0.333 does not equal one third, but the dots in 0.333... indicate that it is a repeating decimal, and it does equal one third.

Another way of writing it is with a vinculum over the three http://upload.wikimedia.org/math/d/7/9/d7926c11624c211715d42a57e39b5832.png

Further reading (http://en.wikipedia.org/wiki/Recurring_decimal).

nothingspecial
February 1st, 2009, 09:09 PM
It's a little bit smaller because you can't express 1/3 as a decimal. What's the problem?

jpkotta
February 1st, 2009, 09:28 PM
I disagree. I can conceive of and work with sums with an infinite number of terms whether or not they converge; just as I can work with the concept of infinity though it is not a number. These sums are legitimate concepts even when there is no real number boundary value which they are guaranteed not to exceed.

True, but as soon as you start comparing them to numbers, you have to think of them as limits. Of course an infinite series isn't exactly the same concept as a constant number, but the equals sign doesn't care about concepts. It asks only if the values are the same. sum(2^-n, n, 1, infty) isn't the same as the series represented by 0.999..., but they are equal, and that's all that matters here.


The limit is a real number. If you equate a limit to its series then I see a problem. The fact that you can determine a real number limit of a sum of an infinite number of terms does not mean the sum is that real number.

I agree it is somewhat an abuse of notation. If it were stated as 'lim[0.999...]==1', I would not find the equation "disturbing" at all. It is only when the inference is made from the notation that the limit itself equates with its infinite sum that I protest. By the same logic that I would object to the lim[1/infinity] being equated to 1/infinity, I protest the idea that the limit of a sum with an infinite number of terms is the same as that sum.

Again, I say there is no way to interpret 0.999... other than as an infinite series, and as I say above, the only sensible way to compare it to 1 is as a limit.