
View Full Version : Alarming Open-Source Security Holes



Sporkman
May 20th, 2008, 05:49 PM
This is a good write-up on what actually happened...

http://www.technologyreview.com/Infotech/20801/


Alarming Open-Source Security Holes

How a programming error introduced profound security vulnerabilities in millions of computer systems.

By Simson Garfinkel

Back in May 2006, a few programmers working on an open-source security project made a whopper of a mistake. Last week, the full impact of that mistake was just beginning to dawn on security professionals around the world.

In technical terms, a programming error reduced the amount of entropy used to create the cryptographic keys in a piece of code called the OpenSSL library, which is used by programs like the Apache Web server, the SSH remote access program, the IPsec Virtual Private Network (VPN), secure e-mail programs, some software used for anonymously accessing the Internet, and so on.

In plainer language: after a week of analysis, we now know that two changed lines of code have created profound security vulnerabilities in at least four different open-source operating systems, 25 different application programs, and millions of individual computer systems on the Internet. And even though the vulnerability was discovered on May 13 and a patch has been distributed, installing the patch doesn't repair the damage to the compromised systems. What's even more alarming is that some computers may be compromised even though they aren't running the suspect code.

The reason that the patch doesn't fix the problem has to do with the specifics of the programmers' error. Modern computer systems employ large numbers to generate the keys that are used to encrypt and decrypt information sent over a network. Authorized users know the right key, so they don't have to guess it. Malevolent hackers don't know the right key. Normally, it would simply take too long to guess it by trying all possible keys--like, hundreds of billions of years too long.

But the security of the system turns upside down if the computer can only choose from a limited number of keys--say, a million. For the authorized user, the key looks good--the data gets encrypted. But the bad guy's software can quickly make and then try all possible keys for a specific computer. The error introduced two years ago makes cryptographic keys easy to guess.

The error doesn't give every computer the same cryptographic key--that would have been caught before now. Instead, it reduces the number of different keys that these Linux computers can generate to 32,767, depending on the computer's processor architecture, the size of the key, and the key type.
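To make the scale concrete, here is a rough sketch of why a keyspace of 32,767 values is trivially brute-forced. The `derive_key` function below is a made-up stand-in, not OpenSSL's real key generation; the point is only the size of the search space.

```python
import hashlib

# Hypothetical stand-in for key generation whose only varying input
# is a process ID in the range 0..32767.
def derive_key(pid: int) -> bytes:
    return hashlib.sha256(pid.to_bytes(2, "big")).digest()

# The "victim" generates a key from some PID unknown to the attacker.
victim_key = derive_key(12345)

# The attacker simply tries all 32,768 candidates -- this finishes in a
# fraction of a second on any modern machine.
recovered_pid = next(pid for pid in range(32768)
                     if derive_key(pid) == victim_key)
print(recovered_pid)  # 12345
```

Compare that with a proper 128-bit keyspace, where the same loop would need on the order of 10^38 iterations.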

Less than a day after the vulnerability was announced, computer hacker HD Moore of the Metasploit project released a set of "toys" for cracking the keys of these poor Linux and Ubuntu computer systems. As of Sunday, Moore's website had downloadable files of precomputed keys, just to make it easier to identify vulnerable computer systems.

Unlike the common buffer overflow bug, which can be fixed by loading new software, keys created with the buggy software don't get better when the computer is patched: instead, new keys have to be generated and installed. Complicating the process is the fact that keys also need to be certified and distributed: the process is time consuming, complex, and error prone.

Nobody knows just how many systems are impacted by this problem, because cryptographic keys are portable: vulnerable keys could have been generated on a Debian system in one office and then installed on a server running Windows in another. Debian is a favored Linux distribution of many security professionals, and Ubuntu is one of the most popular Linux distributions for general use, so the reach of the problem could be quite widespread.

So how did the programmers make the mistake in the first place? Ironically, they were using an automated tool designed to catch the kinds of programming bugs that lead to security vulnerabilities. The tool, called Valgrind, discovered that the OpenSSL library was using a block of memory without initializing the memory to a known state--for example, setting the block's contents to be all zeros. Normally, it's a mistake to use memory without setting it to a known value. But in this case, that unknown state was being intentionally used by the OpenSSL library to help generate randomness.

The uninitialized memory wasn't the only source of randomness: OpenSSL also gets randomness from sources like mouse movements, keystroke timings, the arrival of packets at the network interface, and even microvariations in the speed of the computer's hard disk. But when the programmers saw the errors generated by Valgrind, they commented out the offending lines--and removed all the sources of randomness used to generate keys except for one, an integer called the process ID that can range from 0 to 32,767.
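As a loose analogy (this is not the actual md_rand.c code; the toy `prng_output` function below stands in for OpenSSL's entropy-pool mixing), the effect of commenting out the entropy-mixing lines looks like this:

```python
import hashlib

# Toy model of an entropy pool: hash environmental entropy together
# with the process ID. NOT OpenSSL's real algorithm.
def prng_output(pid, entropy=None):
    h = hashlib.sha256()
    if entropy is not None:   # the mixing step the patch removed, in effect
        h.update(entropy)
    h.update(pid.to_bytes(2, "big"))
    return h.digest()

# With real entropy mixed in, the same PID still yields different output:
assert prng_output(4242, b"keystroke timings, packets...") != \
       prng_output(4242, b"different entropy on the next boot")

# With the mixing step gone, the output is a pure function of the PID --
# at most 32,768 possible values across all machines:
assert prng_output(4242) == prng_output(4242)
```

The two asserts capture the whole disaster: the second one is exactly the property an attacker needs.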

"Never fix a bug you don't understand!" raved OpenSSL developer Ben Laurie on his blog after the full extent of the error became known. Laurie blames the Debian developers for trying to fix the "bug" in the version of OpenSSL distributed with the Debian and Ubuntu operating systems, rather than sending the fix to the OpenSSL developers. "Had Debian done this in this case," he wrote, "we (the OpenSSL Team) would have fallen about laughing, and once we had got our breath back, told them what a terrible idea this was. But no, it seems that every vendor wants to 'add value' by getting in between the user of the software and its author."

Perhaps more disconcerting, though, is what this story tells us about the security of open-source software--and perhaps about the security of software in general. One developer (who I've been asked not to single out) noticed a problem, proposed a fix, and got the fix approved by a small number of people who didn't really understand the implications of what was being suggested. The result: communications that should have been cryptographically protected between millions of computer systems all over the world weren't really protected at all. Two years ago, Steve Gibson, a highly respected security consultant, alleged that a significant bug found in some Microsoft software had more in common with a programmer trying to create an intentional "back door" than with yet another Microsoft coding error.

The Debian OpenSSL randomness error was almost certainly an innocent mistake. But what if a country like China or Russia wanted to intentionally introduce secret vulnerabilities into our open-source software? Well concealed, such vulnerabilities might lie hidden for years.

One thing is for sure: we should expect to discover more of these vulnerabilities as time goes on.

PmDematagoda
May 20th, 2008, 05:58 PM
Really, that news article was a bit exaggerated: for one thing, the security hole wasn't actually exploited to any great extent before it was fixed, and for another, not all people use SSL (I don't).

And really:-

Back in May 2006, a few programmers working on an open-source security project made a whopper of a mistake. Last week, the full impact of that mistake was just beginning to dawn on security professionals around the world.

How many times have we seen companies like MS and Apple do things like this (leaving a security hole unpatched for quite a long while) and get nothing much but a slap on the wrist, and yet when the FOSS community makes a mistake there are news articles like this making a huge deal of it.

Edit:- But I must admit, Debian should have maintained a bit more communication with the OpenSSL team; that might have prevented this from happening.

Half-Left
May 20th, 2008, 06:05 PM
Yep, they fixed it as SOON as they found out.


But what if a country like China or Russia wanted to intentionally introduce secret vulnerabilities into our open-source software? Well concealed, such vulnerabilities might lie hidden for years.

It's funny how they point the finger at China or Russia like it's some Cold War thing. A silly statement if ever I saw one.

plun
May 20th, 2008, 06:50 PM
Really, that news article was a bit exaggerated: for one thing, the security hole wasn't actually exploited to any great extent before it was fixed, and for another, not all people use SSL (I don't).



Well, let's read what Mr. Bruce Schneier wrote...

http://www.schneier.com/blog/archives/2008/05/random_number_b.html

You can also study HD Moore's little article at Metasploit.

I hope Debian can arrange better QA to avoid something like this in the future. Just terrible... :(

LeoSolaris
May 20th, 2008, 08:16 PM
Yes, it was a fairly nasty breach, but it was patched in what... a day? I personally see little danger here, especially compared to the security holes that have been in the two widely used proprietary OSes. This was a big deal because it was one of so very few major security openings in the *nix world. It's more of a blow to our image than to our security.

Leo

P.S. This was a security hole, not holes. If you kept up with your updates, this was patched last week. I noticed it first on the Fridge in the forum, and made sure to install the patch right then.

Sporkman
May 20th, 2008, 08:22 PM
P.S. This was a security hole, not holes. If you kept up with your updates, this was patched last week. I noticed it first on the Fridge in the forum, and made sure to install the patch right then.

Patching isn't enough - weak keys also need to be regenerated, if I'm understanding it correctly.
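That's right, and it's also how the checker tools released after the advisory could work at all: with only 32,768 possible keys per type and size, every weak key can be enumerated and fingerprinted once. A minimal sketch of that idea (the "keys" and fingerprints below are made-up stand-ins, not real OpenSSL key material or the actual blocklist format):

```python
import hashlib

# Precompute fingerprints of every possible weak key once. Real tools
# shipped these as a static blocklist; here we fake 2-byte "keys".
weak_fingerprints = {
    hashlib.md5(pid.to_bytes(2, "big")).hexdigest()
    for pid in range(32768)
}

def is_weak(key_material: bytes) -> bool:
    # An installed key is compromised if its fingerprint is on the list.
    return hashlib.md5(key_material).hexdigest() in weak_fingerprints

# A key "generated" on a vulnerable system matches the blocklist.
# Patching the library doesn't change this -- the key must be replaced.
assert is_weak((12345).to_bytes(2, "big"))
assert not is_weak(b"fresh key from a patched system")
```

This is why installing the update alone isn't enough: the blocklist check still flags any old key until a new one is generated and deployed.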

kef_kf
May 20th, 2008, 08:26 PM
two things I learned from the Debian/OpenSSL debacle:

never touch code you don't understand
always do things *exactly* by the book

dca
May 20th, 2008, 08:34 PM
Yes, it was a fairly nasty breach, but it was patched in what... a day? I personally see little danger here, especially compared to the security holes that have been in the two widely used proprietary OSes. This was a big deal because it was one of so very few major security openings in the *nix world. It's more of a blow to our image than to our security.

Leo

P.S. This was a security hole, not holes. If you kept up with your updates, this was patched last week. I noticed it first on the Fridge in the forum, and made sure to install the patch right then.

Patched in a day, yet the vulnerability existed for two years, if I read that correctly...

LeoSolaris
May 21st, 2008, 04:32 AM
Yes, it existed for two years. I'm not defending that one, but we are human. The only way to make your data perfectly safe on a computer is to unplug it, cover it with cement, then drop it into the middle of the ocean. Otherwise it is always vulnerable to something.

The reason it isn't as major a deal as it is being made out to be is not only because it was fixed in just a few hours after discovery, but also because this level of mistake lasting for this long is rare. Usually this sort of thing is caught faster. (Usually being the key word there.)

Yes, the keys need to be regenerated, and it takes time to redistribute them and get everything back to the way it was supposed to be in the first place.

The messed up part of this is the fact that OpenSSL completed a 5 year certification process backed by the US and Canadian Governments back in February. Link (http://www.linux.com/articles/60114) But the Debian programmers thought they knew better, and corrected something they thought was wrong. It happens. As I said, we are only human.

Leo

akiratheoni
May 21st, 2008, 04:45 AM
Last time I checked, developers were humans, not gods. It's like the first big mistake that I've heard of, and Microsoft and the like have made how many?

cardinals_fan
May 21st, 2008, 04:55 AM
I'm very fond of Slackware precisely because packages are left unadulterated and very vanilla.

perfecttao
May 23rd, 2008, 01:15 PM
Unfortunately this has pretty much killed Ubuntu stone dead for me - not due to Ubuntu itself, but due to the reasons this issue took place (i.e. Debian developers fixing issues without reporting them upstream to the original developers who wrote the package).

It needs to be emphasised that it's not just a case of applying the updates and hoping for the best. EVERYONE who has generated a key pair on the affected versions of Debian, Ubuntu, Kubuntu, Xubuntu, Mint, and so on needs to regenerate those key pairs... and that means for OpenSSH, OpenSwan, IPsec, Courier, Dovecot, BIND9, Exim, Asterisk, and so on and so on and so on...

It's a sad day for me, not least because I'm currently planning a migration of loads of servers and workstations, but also because I love Ubuntu - the support, the community and the distro itself.

I appreciate that this is an isolated incident and no worse than MS have done in the past, but the issue occurred because procedures weren't followed correctly... and if these procedures have been neglected once, then I'd be willing to bet they've been disregarded on many other occasions... creating countless other potential security issues (although granted, possibly not on a similar scale)...

A sad day :(

hyper_ch
May 23rd, 2008, 01:30 PM
(i.e. Debian developers fixing issues without reporting them upstream to the original developers who wrote the package).

But they did report that fix upstream, and nobody complained.

scorp123
May 23rd, 2008, 01:42 PM
How many times have we seen companies like MS and Apple do things like this (leaving a security hole unpatched for quite a long while) and get nothing much but a slap on the wrist, and yet when the FOSS community makes a mistake there are news articles like this making a huge deal of it.

But the problem here is: It was the FOSS community which claimed that "many eyes see many things" and that "peer review of code" beats "development behind closed doors". That argument just got a big fat kick into its rear end, because obviously there must be something seriously wrong with it. How else can it be explained that nobody found this security hole for almost two years? And that this security hole made its way into Ubuntu is troubling too, because it means that the people at Canonical are happy to import everything from Debian without bothering to review the code they just imported...

That's the real problem here, and that's why everybody is making a big deal and a fuss over it.

bikeboy
May 23rd, 2008, 02:09 PM
The funniest thing about this report/thread... the issue has nothing to do with whether software is open source or not. Nothing to do with the development strategy, other than that someone made a mistake with existing software. Any company could have licensed an equivalent closed-source program, misconfigured it, and ended up with the same situation.

scorp123
May 23rd, 2008, 02:58 PM
The funniest thing about this report/thread... the issue has nothing to do with whether software is open source or not.

Not true! For over two years the Debian packagers did not bother to check the patched code! This is not like getting a binary DLL (e.g. from Microsoft) that you have to write code against without really knowing how the functions are implemented at the C source level. This was completely in the open... and it took two years for a pair of eyeballs to catch this bug. This is bad.


Nothing to do with the development strategy

Not true either. According to the gospel that's being spread, the open-source community development model should have caught this bug earlier, because in theory every one of those programmers is reviewing every single line of code that gets added to a project. That's why it is repeated again and again all over the web that open-source software is "secure": if there is a security hole, then the peer-review process should catch it in no time and a fix should soon be available.

None of this was true here: the bug went undetected for two years, and the patch doesn't fix anything by itself. You have to regenerate your keys and hunt down vulnerable keys everywhere they might be in use.

This is a serious pain in the a***.


Any company could have licensed an equivalent closed-source program, misconfigured it, and ended up with the same situation.

The difference being that with closed-source software, where only a limited number of eyeballs can take a glance at the code in question, it is more likely that bugs and security holes remain undetected for a long time... Or so goes the theory of why open-source software is supposedly "superior" to closed source security-wise.

Mr. Picklesworth
May 23rd, 2008, 03:04 PM
What we need is a system similar to a bug tracker, specialized for peer review. When a change is made to a program, it is posted on the "review tracker". Each entry there could have a different priority level, with the rule of thumb being that important security tools automatically get the highest priority no matter what.
Such a tracker could even be automated, watching for source-code commits and launching new items For Review.

Items on the review tracker could have a Topic->Issue->Solution->Issue->Solution type of discussion system so that issues could easily be found. (If this were on Launchpad, adding bug reports as issues could be rather straightforward.) A solid release would have everything on the review tracker free of posted issues, but with confirmation that things work. Having that tracker know the upstream contacts would help a lot, too!

hyper_ch
May 23rd, 2008, 05:29 PM
Or so goes the theory of why open-source software is supposedly "superior" to closed source security-wise.

http://scan.coverity.com/report/

maniacmusician
May 23rd, 2008, 06:29 PM
According to the gospel that's being spread, the open-source community development model should have caught this bug earlier, because in theory every one of those programmers is reviewing every single line of code that gets added to a project.

That's complete and utter nonsense. Anyone who actually thought that every bit of code goes through intense peer review is deluded. It depends on the project (KDE code gets peer reviewed because there are a lot of people working in the same repository; Exaile code probably doesn't, because it's an isolated project) and the situation.

Of course this sucks, but I'll quote something I recently read:


I’ve been playing with open source software for over a decade now, and lots of my friends do the same. I don’t know of anyone who makes a habit of randomly reading source code just to see if it’s up to snuff. People will read it when they want to modify it, but if it’s complex, hard to read, or works well enough, chances are that it won’t get reviewed.

As to this code being exploited: crackers are just as lazy as us white hats. Most known attacks don’t occur until after a patch has been released, so chances are, it’s been safe (until now).

http://cody.zapto.org/?p=21

Sums it up nicely, I think.

scorp123
May 23rd, 2008, 09:25 PM
That's complete and utter nonsense. Anyone who actually thought that every bit of code goes through intense peer review is deluded.

I know. But please go through the forums and you might here and there find postings which claim exactly that. That's precisely how some folks misinterpret the open-source development model. I can see it on my job too: we had convinced a few customers to forget about Windows and switch over to Linux. We had convinced them that Linux could do everything they need. Now, thanks to this OpenSSL "hiccup", some managers are having second thoughts again and we can start all those lengthy discussions over. This stuff, and how it's sometimes misinterpreted or misrepresented in some stories, is having a direct impact on my job, and it's a PITA.


Of course this sucks

You sure got that right :D


I’ve been playing with open source software for over a decade now, and lots of my friends do the same. I don’t know of anyone who makes a habit of randomly reading source code just to see if it’s up to snuff. People will read it when they want to modify it, but if it’s complex, hard to read, or works well enough, chances are that it won’t get reviewed.

Well, the how and why of this bug making it into Debian's OpenSSL is well documented:

- the source code portions in question were poorly commented, i.e. they were hard to read for the guy who "patched" the relevant sequence
- the programmer in question should not have touched stuff in a security-relevant package if he did not understand what it does
- when asked if the "patch" was OK, the OpenSSL devs simply ignored the request at first, then didn't really bother to read precisely what the question was and gave their "OK, go ahead..."


In my opinion they should have checked what patch was being committed to a package as sensitive as OpenSSL... And Debian's folks should not have forked the code in the first place. Why can't they just use the original devs' source (which did not have the bug) like everyone else does?

phaed
May 23rd, 2008, 09:50 PM
But the problem here is: It was the FOSS community which claimed that "many eyes see many things" and that "peer review of code" beats "development behind closed doors". That argument just got a big fat kick into its rear end

The fallacy of hasty generalization. You can't extrapolate this conclusion from a single data point. There are numerous counter examples. Compare, for example, the security vulnerabilities of Firefox vs. Internet Explorer, and the timeliness with which they are patched.

Paqman
May 23rd, 2008, 10:00 PM
By Simson Garfinkel

Pseudonym, surely?

K.Mandla
May 23rd, 2008, 11:03 PM
I don't find the issue as alarming as I find the article overdramatic.

And it certainly doesn't invalidate the open source model for me.

gsmanners
May 23rd, 2008, 11:09 PM
No, but there's no end of PHBs screaming, throwing a fit, and demanding that Debian be removed and anyone within a mile of this problem burned at the stake. The FUD shills are milking this one for all it's worth.

scorp123
May 23rd, 2008, 11:11 PM
The fallacy of hasty generalization.

I know. Nonetheless, that's more or less how some FOSS advocates have argued the case of "OSS vs. CS" and "why FOSS is superior" in the past (or rather: how it was misunderstood by some). I personally still think the argument is true: OSS is better, and yes, it does have a better chance of having bugs detected and fixed sooner than closed-source stuff. It's just that this OpenSSL hiccup will now be paraded around ad infinitum as an "example" of where it did not work, and I am already tired of the silly discussions I will have because of it, because I know in advance that there will be such discussions...


You can't extrapolate this conclusion from a single data point.

I love that sentence. Mind if I quote you? :)

Manager: "Linux is unsafe! The SSL-implementation in some distros can be hacked!! ... "

Me: "You can't extrapolate this conclusion from a single data point ... "

Yeap, that might even work. Wonderful :D

swoll1980
May 23rd, 2008, 11:14 PM
I never liked the "but they did it too" defence when something goes wrong. My 5-year-old son does this, and I say to him, "Two wrongs don't make a right."

scorp123
May 23rd, 2008, 11:28 PM
And it certainly doesn't invalidate the open source model for me.

Not for me, not for you. But you see, people like me have to work with people who don't share this POV and who were a PITA to begin with to even consider FOSS for anything. As I wrote above: this kind of hiccup shouldn't have happened, because now it will be paraded around by some folks as proof that FOSS is "unsafe", "insecure" and that "their development model doesn't work". I already had such a discussion this morning and I know that more will follow, and I am already tired as hell of it. It's a different story if I can talk to system admins directly... those guys are cool. They just install the new packages, re-create all the keys, and voila, done. But higher up the hierarchy ladder things are complicated beyond complicated. Managers who have no clue about IT making decisions about IT... these people are scary. And trust me, if they ever needed an example of why FOSS can't be trusted, it is this hiccup. Talking to such people was already quite tiring, and hearing them repeat the same FUD arguments over and over again was a PITA... but now? Oh maaaaan. Really. This hiccup isn't helpful at all.

heartburnkid
May 24th, 2008, 12:08 AM
People are people, and things sometimes slip through the cracks. OSS is great, and more eyes does mean less buggy code, but guess what? It's still not completely bug-free, because no software ever is.

This doesn't invalidate any of the points about open-source software any more than a plane crash invalidates the fact that flight is still the safest form of travel. It's one incident, nothing more, nothing less.

jrusso2
May 24th, 2008, 12:20 AM
But the problem here is: It was the FOSS community which claimed that "many eyes see many things" and that "peer review of code" beats "development behind closed doors". That argument just got a big fat kick into its rear end, because obviously there must be something seriously wrong with it. How else can it be explained that nobody found this security hole for almost two years? And that this security hole made its way into Ubuntu is troubling too, because it means that the people at Canonical are happy to import everything from Debian without bothering to review the code they just imported...

That's the real problem here, and that's why everybody is making a big deal and a fuss over it.

I see that as the real problem here. A mistake was made, but open source is supposed to be under constant review so that serious issues are not left unfixed for almost two years.

Maybe Open Source is not all it's cracked up to be?

heartburnkid
May 24th, 2008, 12:36 AM
I see that as the real problem here. A mistake was made, but open source is supposed to be under constant review so that serious issues are not left unfixed for almost two years.

For the most part they aren't. But people are people, and, to quote a wise man, "**** happens."

mkendall
May 24th, 2008, 02:07 AM
Two questions:

1. How long did Windows have a security issue involving the pointer?

2. For how many different Windows releases did this security issue propagate?

Sporkman
May 24th, 2008, 02:30 AM
This of course was a subtle bug. Could such subtle bugs be purposefully introduced by malicious or compromised contributors?

DoktorSeven
May 24th, 2008, 02:41 AM
Alarmist much?

Accidents happen. Bad code happens. Lack of proper review and introducing bad code into the wild without checking happens. Programmers are human, mistakes will be made.

It is what happens when bugs ARE found that matters. Microsoft can sit on known issues for months -- it's been known to happen. Meanwhile, malicious code has time to spread and bad things happen. And since Microsoft is the only one that can fix these problems, its users are powerless.

In open source, when a problem is found, patches and fixes typically happen fast. And in this case, they did; the problem extends beyond a mere patch, but that part is the responsibility of everyone using the software. Still, problem resolution happened quickly once the problem was found.

I don't see any problems here. The open-source model worked exactly as it should. I can't imagine anyone except Microsoft apologists being alarmed over what happened.

akiratheoni
May 24th, 2008, 02:49 AM
Alarmist much?

Accidents happen. Bad code happens. Lack of proper review and introducing bad code into the wild without checking happens. Programmers are human, mistakes will be made.

It is what happens when bugs ARE found that matters. Microsoft can sit on known issues for months -- it's been known to happen. Meanwhile, malicious code has time to spread and bad things happen. And since Microsoft is the only one that can fix these problems, its users are powerless.

In open source, when a problem is found, patches and fixes typically happen fast. And in this case, they did; the problem extends beyond a mere patch, but that part is the responsibility of everyone using the software. Still, problem resolution happened quickly once the problem was found.

I don't see any problems here. The open-source model worked exactly as it should. I can't imagine anyone except Microsoft apologists being alarmed over what happened.

I agree. I do realize that this bug was important, but this is making a bigger deal out of a bug than needs to be made.

While I do understand why doubts are being raised about the open source model, I think the model is still generally good about fixing bugs. Yes, this puts a huge dent in open source's image. But how many bugs WERE fixed on time? It really only takes one instance of bad bug-fixing to throw everything off, but developers are humans and they make mistakes.

jrusso2
May 24th, 2008, 02:51 AM
Alarmist much?

Accidents happen. Bad code happens. Lack of proper review and introducing bad code into the wild without checking happens. Programmers are human, mistakes will be made.

It is what happens when bugs ARE found that matters. Microsoft can sit on known issues for months -- it's been known to happen. Meanwhile, malicious code has time to spread and bad things happen. And since Microsoft is the only one that can fix these problems, its users are powerless.

In open source, when a problem is found, patches and fixes typically happen fast. And in this case, they did; the problem extends beyond a mere patch, but that part is the responsibility of everyone using the software. Still, problem resolution happened quickly once the problem was found.

I don't see any problems here. The open-source model worked exactly as it should. I can't imagine anyone except Microsoft apologists being alarmed over what happened.

I don't consider myself to be a Microsoft apologist. But if things are supposed to be under constant review, how could it go unnoticed for close to two years?

Something was not working as it's supposed to, and hiding your head in the sand while pointing at Microsoft is not the answer.

days_of_ruin
May 24th, 2008, 03:00 AM
This of course was a subtle bug. Could such subtle bugs be purposefully introduced by malicious or compromised contributors?

Don't think so. Everything is supposed to be checked before being added. It's not like Wikipedia. This one just slipped through.

cardinals_fan
May 24th, 2008, 04:48 AM
Two questions:

1. How long did Windows have a security issue involving the pointer?

2. For how many different Windows releases did this security issue propagate?
Why does this matter?

Steve Zenone
May 24th, 2008, 05:10 AM
I don't find the issue as alarming as I find the article overdramatic.

And it certainly doesn't invalidate the open source model for me.

Agreed!

I had written a blog posting Monday with my opinion about how the broader community was responding/reacting to the vulnerability [link (http://blog.zenone.org/2008/05/opinion-why-attack-debian-and-ubuntu.html)]

phaed
May 24th, 2008, 05:23 AM
Manager: "Linux is unsafe! The SSL-implementation in some distros can be hacked!! ... "

Me: "You can't extrapolate this conclusion from a single data point ... "

Yeap, that might even work. Wonderful :D

Then tell him you can reinstall Windows. Break out the company credit card so you can purchase all the antivirus software licenses. I feel safer already.

DoktorSeven
May 24th, 2008, 05:46 AM
I don't consider myself to be a Microsoft apologist. But if things are supposed to be under constant review, how could it go unnoticed for close to two years?

Because things aren't under "constant review". Bugs and security holes are found whenever people look at them. Again, mistakes can be made and issues can be overlooked, sometimes for very long periods of time which happened here. What matters most is what is done when an issue is found.

This does not invalidate the open-source process. It's always important to have code open so that people can look at it, and this model allows bugs and holes to be found. But just like everything else, it's not perfect.

Bugs have lurked in Microsoft's code base for much longer, and when they're found, it takes longer to fix them. They're not perfect either, and I can forgive Microsoft for having issues inside their code just like I can for open-source code. What I can't forgive is them not allowing us even the opportunity to find said issues, and especially not allowing us to close the issues by fixing them ourselves. Users of Windows are tied to Microsoft's patch schedule, which can be terribly slow at times; this leaves known vulnerable systems vulnerable longer, and is therefore more dangerous and insecure. Systems that are not known to be vulnerable can certainly be attacked by people who know the vulnerability and attack in secret, but that is less likely to occur, since the flaw is by definition unknown; and even if it is known to one person or a few, a proof-of-concept attack can expose the vulnerability so that action can be taken against it quickly.

So when you look at the whole picture, you have to understand that mistakes and oversights will happen. It's what you do when those mistakes and oversights are found that make or break effective security.

LightB
May 24th, 2008, 07:38 AM
This one is more against Debian & Ubuntu specifically, not open source in general.

And I'd like to know, is there any evidence available which points to exploits being used related to this in the real world? Maybe some debian servers out there didn't get patched the day the update came out? I also doubt that debian servers are anywhere near a majority among linux servers.

mkendall
May 24th, 2008, 08:11 AM
Why does this matter?

Double standard. MS/Windows gets a pass for many years of porousness, requiring multiple third-party programs to maintain a semblance of security. But with one instance of insecurity, which has been found and fixed, suddenly people find that porousness preferable.

akiratheoni
May 24th, 2008, 08:39 AM
Double standard. MS/Windows gets a pass for many years of porousness, requiring multiple third-party programs to maintain a semblance of security. But with one instance of insecurity, which has been found and fixed, suddenly people find that porousness preferable.

I agree... but there's also the issue that even though the bug is fixed, everyone who still has weak keys can be attacked despite the patch.

Martje_001
May 24th, 2008, 10:35 AM
I really don't like this.. we have to update to the newest version of OpenSSL...

For the dutch people here: http://forum.pc-active.nl/viewtopic.php?t=20165

scorp123
May 24th, 2008, 11:58 AM
Then tell him you can reinstall Windows.
I flat out refuse to touch anything that says "Microsoft", both privately and professionally. They don't want Linux? Fine, they get a Solaris desktop then. Or Sun Ray thin clients. Windows? Not a chance.

perfecttao
May 25th, 2008, 09:36 AM
Personally, I've yet to find anyone who's pointing fingers at this being a *nix issue....

Everyone I've spoken to is referring to it as a Debian **** up and not a *nix **** up....

I don't think this has shaken confidence in Linux, but I think it's shaken confidence in Debian based distros. There will always be MS fans happy to slap down open source at first opportunity, but for a rare change I haven't heard of this being one of those occasions.

scorp123
May 25th, 2008, 10:26 AM
Everyone i've spoken to is referring to it as a Debian **** up and not a *nix **** up....
Lucky you; I suppose you've only been talking to technical people so far, who know the difference? I am not always so lucky, and I keep hearing weird arguments about why e.g. this hiccup in Debian might have propagated into other distributions too, and therefore all Linux distros are "unsafe" by design ... Nonsense, nonsense, more nonsense. I know.


I don't think this has shaken confidence in Linux, but I think it's shaken confidence in Debian based distros.
Yes, definitely. One company I am in contact with wanted to get rid of their SLES installations in favor of Debian ... Those discussions are now definitely over. Looks like they will switch over to RHEL5 ... I personally don't like Red Hat at all, but their management tools ("Satellite Server", or whatever it is called) are a nice addition. "Eye candy" for corporate users, I guess.


There will always be MS fans happy to slap down open source at first opportunity, but for a rare change I haven't heard of this being one of those occasions.
I guess that depends on the people you (have to) talk to. Tech people are more or less 'cool' about this, but when you talk to non-tech staff such as managers, you get to hear FUD and more FUD, fresh from the propaganda machinery in Redmond.

misfitpierce
May 25th, 2008, 10:49 AM
eh think we alright... Sounds like they fixed it so eh.... eh

perfecttao
May 25th, 2008, 02:52 PM
One company I am in contact with wanted to get rid of their SLES installations in favor of Debian ...
Looks like they will switch over to RHEL5 ...

I know the feeling..... not knocking Ubuntu at all, but the processes that have not been followed for whatever reason (I can see both the Debian and OpenSSL sides of the story) have shaken the confidence that I have in Debian based distros (at least for web-facing boxes, such as web/mail servers).

I'm mid-migration to Fedora for external-facing boxes for this reason (I don't really need the support/cost, hence RHEL is unnecessary).

I love/loved working with Ubuntu, but if this slip has occurred, others are likely to have occurred for the same reason. I'll still leave the distro on some home desktops, but Ubuntu/Debian servers are out of the game for me...

scorp123
May 25th, 2008, 08:27 PM
I'll still leave the distro on some home Desktops, but Ubuntu/Debian servers are out of the game for me...
Yeap, same here. It's a pity.

charlemagne86
May 29th, 2008, 08:03 PM
Okay everyone,
I don't pretend to be an expert in open source, but from what I had read (in those overly dramatic articles) before coming here, it sounded pretty bad...

Now, agreed that it may not be that bad, or that, as someone mentioned, **** happens... I also agree with the various things I've read above...


As I see it, a really important thing to do right now is to communicate the actual seriousness (or non-seriousness) of the hiccup to users who don't really use SSL, or who are new to Ubuntu/Linux, etc. ... people who, after reading a couple of dramatic articles, decide to jump ship and holler "Linux/Debian/Ubuntu is unsafe!"

I guess what I need to say is that we need to reach the people who we know won't bother to scratch past surface-level news articles to get their facts straight.




And of course there's always this thing called "learning from our past mistakes".


perfecttao
May 30th, 2008, 07:42 AM
As per me, a really important thing to do right now is to propagate the actual seriousness (or non-seriousness) of the hiccup to users who don't really use SSL, or are new to Ubuntu/Linux, etc. ... people who after reading a couple of dramatic articles decide to jump ship and holler "Linux/Debian/Ubuntu is unsafe!"


I don't think the seriousness of this can be downplayed.

The problem now is that the vulnerability is public knowledge - thus creating a bigger issue (script kiddies everywhere will be searching for boxes to try to crack) - and despite the fact that patches have been released, there are a few key factors that people need to be aware of:

1. Anyone using a Debian-based distro who has OpenSSL installed as part of any group of packages still NEEDS to regenerate their keys, or the security vulnerability still exists in the previously generated keys.

2. Packages that have OpenSSL as a dependency are all vulnerable through the keys they generate using OpenSSL. These include:
1. Asterisk
2. BIND9
3. boxbackup
4. Cfengine
5. courier imap/pop3
6. uw-imapd
7. cryptsetup
8. csync2
9. cyrus imapd
10. dovecot
11. dropbear
12. exim4
13. ftpd-ssl
14. Generic PEM Generation
15. gitosis
16. OpenSSH (Server)
17. OpenSSH (Client)
18. OpenSWAN
19. StrongSWAN
20. OpenVPN
21. postfix
22. puppet
23. ssl-cert
24. telnetd-ssl
25. tinc
26. Tor Onion Router / Hidden Service Keys
27. encfs
28. xrdp
29. Kerberos (MIT and Heimdal)
30. pwsafe
31. slurm-llnl

3. Any machine that has exchanged a key that was generated on a Debian/Ubuntu/Knoppix/Mint, etc. system will also have the same vulnerability - at least to man-in-the-middle attacks. These keys also have to be regenerated.

4. Confidence - Given the breach of policy that resulted in this vulnerability, what other packages have had the same treatment? When installing a package on a Linux distro, there should be a degree of confidence that the package is "as the original developer intended". If any changes have been made, they NEED to be submitted to the original developer for review. If this has happened once, I'd be willing to bet there have been other instances...

Of course, for the majority of users these issues are largely unimportant. For example, my girlfriend has a machine behind a pfSense firewall with a strict security policy. She uses that machine for email and web and hasn't been anywhere near the command line - the degree of security risk she faces is limited due to the limited number of applications used. On the other hand, I run a webserver, mailserver, custom built firewall running SSH and VPN's and so on. The impact on me is quite vast. My girlfriend will continue to use Ubuntu as a Desktop distro - it's secure and easy to use. I will be migrating my servers though...

I would still recommend Ubuntu as a Desktop distro for the majority of users - but unfortunately I can't take any chances with the servers that I run and as such I am forced to migrate at earliest opportunity :(

Sporkman
May 30th, 2008, 03:30 PM
I would still recommend Ubuntu as a Desktop distro for the majority of users - but unfortunately I can't take any chances with the servers that I run and as such I am forced to migrate at earliest opportunity :(

To where will you be migrating? Will your new OS be free of such security lapses?

perfecttao
May 31st, 2008, 09:35 AM
To where will you be migrating? Will your new OS be free of such security lapses?

There are never any guarantees..... it's just a case of reducing the risk.... There is obviously a problem with the way the Debian package maintainers pass changes upstream to the developers.

I like the security focus in Fedora, so will be migrating over to F9.

BigSilly
May 31st, 2008, 12:36 PM
According to the Debian news page (http://www.debian.org/News/weekly/2008/03/), they consider this issue fixed. True, or wishful thinking?

None of this would put me off using Ubuntu that's for sure, but I'm just a simple home PC user. I don't know how this has actually affected business users, and I won't pretend to understand the full ramifications of it. I can only imagine though, that there have been far worse things that got swept under the carpet in the past with Microsoft.

Hopefully, the Debian site is accurate and the issue is safely resolved now.

sweeneytodd
May 31st, 2008, 01:49 PM
What a bloody freak-out. I for one don't give a damn, and reckon some MS developers are trying to taint Ubuntu. Windows is one big hole with little bits of code swirling around banging into each other. Everyone knows that as soon as you plug your computer into the net you are vulnerable, so if you are that worried, unplug your computer from your modem; that's the best security.

scorp123
May 31st, 2008, 02:17 PM
so if u r that worried unplug ur computer from ur modem, thats the best security.
You can't do that with servers, and that's where this security hole really hurts the most and where this "hiccup" has turned into a royal PITA ... So many keys to change, so many installed products and services which have stopped working because you had to tamper with their SSL keys, so many new connectivity issues because SSL and SSH keys could not be changed everywhere at the same time, so many new config issues .... Really, it sucks.

sweeneytodd
May 31st, 2008, 03:06 PM
So if I understand correctly, a patch has rectified the problem, but the existing software has only a limited set of keys?

I can see how this is a major problem for companies and servers with sensitive data, and I take back what I said before.

perfecttao
June 1st, 2008, 08:50 AM
so if i understand correctly, a patch has rectified the problem but the existing software has only a limited set of keys?

That's about the long and short of it, mate.... the problem is resolved as long as people have manually updated any affected keypairs (after patching their server)....

The concern is how many keypairs are out there that nobody knows about.... for example, if someone generated a keypair for an SSL certificate for a bank using a vulnerable copy of Debian but didn't update the keys, it would leave them vulnerable.

Let's put it this way.... nobody would even bother trying to crack a keypair that had appropriate encryption levels... however, a key with only 32,767 possibilities could be cracked in hours (or even minutes on a fast enough machine).... that's an entirely more appealing target...
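To make the scale concrete: the bug left little usable entropy beyond the process ID of the generating program, i.e. at most a few tens of thousands of possible seeds. A toy Python sketch (the key derivation below is a stand-in for illustration, not the real OpenSSL code) of why such a space is trivially searchable:

```python
import hashlib

# Toy stand-in for the broken PRNG: the only entropy is the process ID,
# so every possible key is derivable from one of ~32,768 seeds.
def derive_key(pid: int) -> bytes:
    return hashlib.sha256(pid.to_bytes(2, "big")).digest()

def brute_force(target_key: bytes) -> int:
    # Exhaustively try every possible seed; this loop finishes in
    # well under a second on any modern machine.
    for pid in range(32768):
        if derive_key(pid) == target_key:
            return pid
    raise ValueError("key not in the weak space")

victim = derive_key(12345)      # a "weak" key generated under the bug
print(brute_force(victim))      # recovers the seed: 12345
```

A properly seeded key, by contrast, lives in a space of 2^1024 or more, which is why nobody bothers brute-forcing healthy keys.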

3rdalbum
June 1st, 2008, 12:02 PM
The developer asked OpenSSL if there was a problem with commenting out those lines in order to get rid of the debugger warnings. The people at OpenSSL said "There's nothing wrong with doing that [to get rid of the warnings]". Simple human error of communication.

That's entirely possible to happen in proprietary software. Who knows how many similar security flaws exist in the proprietary packages we use today?

As for the "many eyes to find security problems", most people looking at the security of a package will get the source from the project developers, not from a downstream provider, as the project's work goes to the greatest number of people.

The article is overdramatic. The flaw is in "four open-source operating systems" - without mentioning that two of those "operating systems" are GNU/kFreeBSD and GNU/Hurd, and only the Debian distributions of those. I'm sure all five of the people running Debian GNU/kFreeBSD and Debian GNU/Hurd have patched their systems and generated new keys.

The article also doesn't mention that the new version of OpenSSL won't accept any of the bad keys, so unless Debian-generated keys have been transferred to Windows or Macintosh computers and are being used to access other Windows or Mac computers, there are no further security implications.

sweeneytodd
June 1st, 2008, 12:08 PM
Sooo... does this mean it's now the newest trend for hackers to hack into servers with Linux on them?

What can someone do with this? Can they get into your computer, open up your personal finances and transfer money into a Swiss account?

Where can I get a program to generate keys?

3rdalbum
June 1st, 2008, 12:09 PM
I really don't like this.. we have to update to the newest version of OpenSSL...

To be doubly sure, we should also install openssl-blacklist. It's about a 5-megabyte download, but it stops any existing bad keys from working.
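For the curious, the idea behind a blacklist like that is simple: the weak keyspace is small enough that the fingerprint of every possible bad key can be precomputed and shipped as a list, and candidate keys are rejected by lookup. A toy sketch (the key material and helper names here are illustrative, not the actual openssl-blacklist format):

```python
import hashlib

def fingerprint(key_bytes: bytes) -> str:
    # Hash the key material to a short identifier, as blacklists do.
    return hashlib.sha1(key_bytes).hexdigest()

# Pretend these three keys came from the weak generator; a real
# blacklist package ships the precomputed fingerprints of all of them.
WEAK_KEYS = [b"weak-key-0001", b"weak-key-0002", b"weak-key-0003"]
BLACKLIST = {fingerprint(k) for k in WEAK_KEYS}

def is_compromised(key_bytes: bytes) -> bool:
    # O(1) set lookup: a key is rejected if its fingerprint is listed.
    return fingerprint(key_bytes) in BLACKLIST

print(is_compromised(b"weak-key-0002"))            # True
print(is_compromised(b"freshly-generated-key"))    # False
```

This is why the check is cheap enough to run on every connection attempt: enumerating the whole weak space once is the expensive part, and that was done by the blacklist packagers.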

SunnyRabbiera
June 1st, 2008, 12:14 PM
Well, if you doubt security in Linux then you must doubt security in practically all operating systems.
No OS is 100% safe - not even Fedora, BSD or anything else.
It's how things are handled that matters, and you will find that issues in an open-source system are fixed much faster than in a closed-source one.

Closed_Port
June 1st, 2008, 12:30 PM
Well, if you doubt security in Linux then you must doubt security in practically all operating systems.
Isn't that exactly what one should do anyhow?



It's how things are handled that matters, and you will find that issues in an open-source system are fixed much faster than in a closed-source one.
The problem is, though, that in this case that's not what happened. And I think that's why people should take a very hard look at how this happened, why it happened, and why it went unnoticed for such a long time.

scorp123
June 1st, 2008, 03:23 PM
sooo...does this mean at this moment its the newest trend for hackers to hack into servers with linux on them.
Hacking into Linux servers is nothing new at all. That has been done for ages. What's new is the method available to the intruders now.

Take a mail server for example: many mail servers --so-called "Mail Transfer Agents", aka "MTAs", which move the many hundred thousand e-mails between all the various domains-- used to run "sendmail". And "sendmail" was known to have weaknesses. A hacker could initiate a mail connection to the "sendmail" program, but instead of sending an e-mail would send specific sequences that would cause "sendmail" to crash and yield a "root" shell (older versions of "sendmail" were extremely vulnerable to this!). And voila, our hacker is in heaven.

For this reason "sendmail" was rewritten again and again, many people have stopped using "sendmail" and use other programs instead (qmail, exim, postfix, and many others), and it has become common practice to *NOT* run a service directly under the "root" account but instead use a non-privileged user account such as "nobody" ... If a hacker somehow finds a weakness in the program and manages to initiate an overflow or crash of any sort, the crashing program will not yield a "root" shell; instead the hacker will be dropped into the shell of a non-privileged user, from where he can't get anywhere. Further measures such as "AppArmor", which blocks applications from doing anything they are not supposed to do (which might be a sign of manipulation by a hacker!), made hacking a well-configured and well-looked-after (all current patches always installed, sysadmins keeping an eye on the log files, etc.) Linux server really tough and challenging. There are other well-known weak daemons, e.g. "bind" and "wu-ftpd", and probably many others, and they too have been rewritten again and again over the years (e.g. "bind9" vs. the previous "bind4") or dropped out of use (e.g. people use "vsftpd" or "pure-ftpd" instead of the ever-vulnerable "wu-ftpd").

But now:

Instead of having to scan for weaknesses in those programs in the small hope of somehow finding a backdoor, a hacker can now walk up to your server's "front door" (e.g. where your SSH is listening) and crack those weak keys right away, logging in where key-based authentication was supposed to keep such people out. The SSL weakness means a hacker has a really easy time guessing the right key (it's just a matter of a few hours on a fast machine!), so they could fabricate their own SSH keys and thus pretend to be someone who has the right to log in to a certain machine. Bingo. And once they're in, they can do pretty much everything they want .... Intercept mail traffic? Check. Fake a login page and intercept valid usernames and passwords? Check. Gain access to a wide range of very, very interesting information? You bet!

As an administrator you should regenerate all the SSH and SSL keys everywhere ASAP, especially if those keys originated from a Debian or Ubuntu system within the past two years.

But this is easier said than done ... First of all: two years ... that's an awfully long time in IT. Many things can happen in two years. Products and systems came and went, people came and went .... New products have been introduced, old products have gone, keys have been moved around .... Damn! You have to chase down every single change in your IT environment to make sure you catch every single vulnerable key!

Also: what about some of the not-so-obvious places where vulnerable keys might have been used too? Firewall appliances, routers, people who used Debian-based Live CDs such as Knoppix to configure a few things .... It's really not easy at all to chase down all the keys and pinpoint with 100% accuracy where they might still be in use and which service is or isn't at risk!

Some admins I know tell me that their management has ordered them to have *ALL* keys (SSL + SSH) regenerated and redistributed and signed again "no matter what" + "just to be sure".

As I said .... this is a major PITA.


what can someone do with this
IMHO they could most likely use this for phishing, or to pose as another system. But getting access to systems via SSH and key-based authentication is a possibility too.
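One crude starting point for that chase-down is a timestamp audit. The sketch below (the paths, file extensions, and exact date window are assumptions for illustration; a real audit should rely on the blacklist tools such as ssh-vulnkey rather than timestamps alone) flags key files last modified during the vulnerable period:

```python
import time
from pathlib import Path

# The bad change entered Debian around May 2006; the fix shipped May 13, 2008.
WINDOW_START = time.mktime((2006, 5, 1, 0, 0, 0, 0, 0, -1))
WINDOW_END = time.mktime((2008, 5, 13, 0, 0, 0, 0, 0, -1))

def suspicious_keys(root: str):
    """Yield key-looking files whose mtime falls inside the vulnerable window.

    A timestamp check is only a first pass: keys copied or touched later
    will be missed, so follow up with a proper blacklist check.
    """
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in {".pub", ".pem", ".key"}:
            mtime = path.stat().st_mtime
            if WINDOW_START <= mtime <= WINDOW_END:
                yield path

# Example: audit the SSH host-key directory, if present.
if Path("/etc/ssh").is_dir():
    for key in suspicious_keys("/etc/ssh"):
        print("check:", key)
```

Even a rough sweep like this helps scope the problem before the tedious part: regenerating, redistributing, and re-signing everything that turns up.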

the8thstar
June 1st, 2008, 06:31 PM
I'd like to ask you guys a few questions. Please forgive me if I completely miss the point of the whole thread, as I'm not an IT specialist, just an end-user. I'm sorry to hear that the fix to this problem will generate so much correction work on server environments.

Anyway, my questions:

1. Is there a risk for the end users? If so, with what programs?

2. Assuming that Black Hats (crackers, pirates and whatnots) are as 'lazy' as developers who left the mistake unattended for 2 years, what were the chances of having something happen at all? I mean, if NO ONE checks this mistake, why would a pirate look at this particular segment of code?

3. In a recent interview, Mark Shuttleworth emphasized his desire to see more cooperation between distros, commercial companies, upstream, port-forwarding, etc. Is this wishful thinking? Could this help at all with the problem we are talking about?

Thank you for your time.

Closed_Port
June 1st, 2008, 07:07 PM
Well, I'm also a far cry from being a security specialist, but from what I understand:


1. Is there a risk for the end users? If so, with what programs?

If by end user you mean "normal" desktop user, probably not much. If you make use of ssh, you are certainly affected though and you can also be affected if other people you rely on don't change their vulnerable keys.
Here you'll find a list of applications affected:
http://wiki.debian.org/SSLkeys



2. Assuming that Black Hats (crackers, pirates and whatnots) are as 'lazy' as developers who left the mistake unattended for 2 years, what were the chances of having something happen at all? I mean, if NO ONE checks this mistake, why would a pirate look at this particular segment of code?

First off, it wasn't the developers leaving this mistake unattended; it was the Debian maintainer causing it and then leaving it unattended.
Second, OpenSSH is a very basic and widely used piece of security software, so for people interested in finding exploits, it's certainly one of the most obvious targets.


3. In a recent interview, Mark Shuttleworth emphasized his desire to see more cooperation between distros, commercial companies, upstream, port-forwarding, etc. Is this wishful thinking? Could this help at all with the problem we are talking about?

In this case, better cooperation with upstream would certainly have helped a lot. I think what one can learn from this disaster is that it isn't a good idea to patch such a sensitive package without submitting the patch upstream. This, in my opinion, was one of the key mistakes of the maintainer.