Oh right, %g. Well, I tried it and it seems to be what I want. Thanks. However, it's messing up when it comes to simplifying 0s. Eg:
The last bit should be 0i... Code:

This program will calculate the roots of a quadratic equation with form ax² + bx + c = 0
Please enter a value for a: 1
Please enter a value for b: 2
Please enter a value for c: 1
The roots of the quadratic equation 1x² + 2x + 1 = 0 are:
1) -1 + 1.96747e-153i
2) -1 - 1.96747e-153i
(This is with %lg btw, I'm using doubles rather than floats.)
Welcome to the world of limited-precision arithmetic, where things that clearly should be zero aren't exactly. For dealing with this, it's not a bad idea to have a normalization routine that rounds to the nearest 10^-6 or so (depending on your desired level of precision) -- although with complex numbers, you need to apply this to the individual components.
Forgetting about this can really cause problems implementing numerical algorithms, as stuff that will provably converge at a certain rate in pure mathematics may end up not converging because of the approximations used by the machine.
Hmm, thanks for the advice. I guess I'm just really surprised by how minimalist C is compared to Python, but that is to be expected really.
This takes me back to my student days - more years ago than I sometimes care to remember. One of the topics covered on my courses was error estimation and how errors can propagate when results are passed through several stages of calculation. It might not make much difference in homework examples, but it's still useful to be aware of these things.
Does this affect C++ as well? I'm thinking that will be a more useful language for me to use when I need optimised code, rather than C.
There are almost certainly some nice arbitrary-precision arithmetic libraries for C that may help alleviate these issues with some penalty (having to use routines written for those types rather than the standard C library routines, for instance; likely to be slower and more space-hungry as well).
But C by itself is normally using native types as much as possible, and doubles and floats use IEEE floating-point representations -- which having a finite and fixed number of bits cannot possibly enumerate the (infinite) number of values within any given range. Some numbers simply have to be approximated, and arithmetic tends to compound the approximations. If you add an extremely tiny number to an extremely large number, for instance, the result may not be what you want.
Something else to keep in mind is that these computations aren't necessarily reproducible exactly across platforms. People writing networked games with heavy use of floating-point arithmetic would probably want to ensure that all machines use the same approximations, or at least that the deviations don't accumulate into something wildly out of sync.
Edit: yes to C++ as well. Same reasons.
Last edited by Some Penguin; February 12th, 2010 at 02:05 AM. Reason: Added mention of C++.
Mmmm, seems like quite a big problem. My main use would probably be in astrophysics computer modelling, but that won't be for a few years, so hopefully I'll get enough practice with C/C++ to recognise these problems.