Thursday, December 29, 2005

"A Timely Start" is an excellent article on speeding up Perl programs: what we can and can't help with:

A well-written Perl program should, in theory, beat a shell script, right? In theory. In practice, sometimes the details of your Perl installation have more to do with why your program is slow than you might believe. Jean-Louis Leroy recently tracked down a bottleneck and wrote up his experiences with making Perl programs start faster.
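The article is about Perl start-up cost specifically, but the measurement technique is the same in any interpreted language: time a fresh interpreter doing nothing versus one that must locate and load modules. A hedged Python sketch (the imported modules are arbitrary stand-ins for heavy dependencies):

```python
import subprocess
import sys
import time

def startup_time(code, runs=5):
    """Median wall-clock time to launch a fresh interpreter and run `code`."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run([sys.executable, "-c", code], check=True)
        samples.append(time.perf_counter() - t0)
    return sorted(samples)[len(samples) // 2]

# An "empty" script is pure start-up overhead; adding imports shows how
# much of a short program's runtime is just finding and loading code.
bare = startup_time("pass")
loaded = startup_time("import json, email, xml.dom.minidom")
print(f"bare: {bare:.3f}s  with imports: {loaded:.3f}s")
```

The same experiment against a long module search path is essentially what Leroy's write-up does for Perl's `@INC`.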

(Via A Timely Start.)

Wednesday, December 28, 2005

Are Computer-Security Export Controls Back?

Schneier on Security: Are Computer-Security Export Controls Back?:

I thought U.S. export regulations were finally over and done with, at least for software. Maybe not:

Unfortunately, due to strict US Government export regulations Symantec is only able to fulfill new LC5 orders or offer technical support directly with end-users located in the United States and commercial entities in Canada, provided all screening is successful.

Commodities, technology or software is subject to U.S. Dept. of Commerce, Bureau of Industry and Security control if exported or electronically transferred outside of the USA. Commodities, technology or software are controlled under ECCN 5A002.c.1, cryptanalytic.

You can also access further information on our web site at the following address:

The software in question is the password breaking and auditing tool called LC5, better known as L0phtCrack.

Anyone have any ideas what's going on, because I sure don't.

(Via Schneier on Security.)

Tuesday, December 27, 2005

Identity Information Theft versus Identity Theft

Kim Cameron's Identity Weblog: Identity Information Theft versus Identity Theft:

Dave Kearns still has a bee in his bonnet about my use of the phrase "Identity Theft". He takes Sun's Sara Gates and me to task in a surrealistic portrait of us as doppelgangers mesmerized by opinion polls.

If I understand him right, he is arguing that "identity theft" sensationalizes something banal and inevitable. We should drop the phrase and talk in terms of property theft. Property theft being as old as the hills, why should theft of information stored on computers surprise anyone? Dave seems to think that attempting to eliminate theft of any kind is about as likely to succeed as attempts to eliminate sex, drugs or rock and roll. So why waste effort?

Similarly, he wants us to return to the notion of good old-fashioned fraud, perhaps not as venerable as pure property theft, but still an activity with a long past and clearly unrelated to what we, as technologists, might do or not do:

"Only once we're past the discussion of property theft mis-named as identity theft can we get to the real problem - identity fraud and how to combat it. But identity fraud happens one instance at a time, so it isn't as sexy for the budding Pulitzer Prize winner to write about."

As usual with Dave Kearns, there is an undeniable truth to what he says. We have to admit that it is not actually "an identity" which is stolen in a data breach, but rather identity information which might potentially be used for phraud. But so what? The words don't matter as much as the underlying phenomena.

Apparently to underline his point Dave links to a press release from ID Analytics, Inc. When I went to their site I found this:

"The findings detailed in the cornerstone 'National Data Breach Analysis' indicate that different data breaches pose different degrees of risk. In fact, certain types of data breaches may not present a high degree of risk to your customers.

Wow! That's a relief. But wait. Bad news:

"If your organization has suffered a data breach, the implications are serious:

  • Erosion of customer trust
  • Undesirable publicity
  • Legal/regulatory liability
  • Added financial obligations or responsibility

Ah. But maybe good news:

"Realities of a Data Breach

"After conducting the first-ever post-breach data analysis into a series of separate data breaches, ID Analytics is in an unprecedented position to help organizations truly assess the degree of risk associated with a breach they have experienced. While data breaches can be the first and most serious issue facing an organization, the findings detailed in the cornerstone "National Data Breach Analysis" indicate that different data breaches pose different degrees of risk. In fact, certain types of data breaches may not present a high degree of risk to your customers.

Scientists can help me!

"ID Analytics Services

"ID Analytics Breach Analysis Services involve a series of rigorous analytical assessments made possible only through the use of ID Analytics' patented Graph Theoretic Anomaly Detection (GTAD®) technology and the membership-based ID Network™.

  • Isolate Data Breach. Following an initial confidential briefing, ID Analytics fraud experts will help determine which customer identities must be analyzed for risk of identity theft.
  • Identity Risk Assessment. ID Analytics' scientists, leveraging the power of the ID Network, will employ GTAD technology to determine if the isolated customer data set has been misused in an organized fashion. Organized misuse is a reliable indication of the potential for ongoing identity theft. If no organized misuse is detected, ID Analytics will deliver documented certification that the customer data set, as of that date, shows no indications of being misused in a suspicious or fraudulent manner.
  • Victim Action List. If organized misuse is detected, ID Analytics will produce a list of impacted identities, allowing the breached organization to deliver victim assistance directly to those that need it.
  • Ongoing Monitoring. ID Analytics will continually monitor the entire breached customer data set to detect any further misuse of sensitive identity information, both for previous and new victims.


  • Receive reliable indication of whether or not breached data is being used to perpetrate identity fraud or identity theft.
  • Determine the risk of harm associated with a data breach and devise risk-adjusted actions.
  • Deliver effective and specific communications to impacted customers regarding anticipated harm and remedies pursued.
  • Ensure a conclusion to the breach episode through ongoing protection and certification.

"Data breaches are an unfortunate reality in the information age. Even organizations that have invested enormous sums in security are not immune to the threat.

"ID Analytics can discretely assist organizations in understanding the true impact of a data breach to its customers, which can lead to informed and appropriate decisions about how to manage the aftermath."

Sorry - I forget why the existence of a company paying "scientists" to discreetly "ensure a conclusion to breach episodes" really proves Dave's point that all we are dealing with here is a glitch on the PR machine.

I think our systems are being attacked more methodically, from more directions, more often and by a more professional enemy than has ever been the case, and I think these attacks will, if nothing else changes, get progressively worse over the next couple of decades. This leads me to think it's time to ring the alarm bells and act. Who cares if we say "identity theft" or "identity information theft", as long as the alarm bells sound?

Whatever we call it, our systems are being breached, and we need to work to make them qualitatively more resilient. The proposals for an identity metasystem for the Internet are intended to bring about a holistic alternative to the current ad hoc environment.

In the meantime, there will be more breaches, and those writing about them will not be Chicken Littles yelling that the sky is falling.


(Via Kim Cameron's Identity Weblog.)

Internet Explorer Sucks

Schneier on Security: Internet Explorer Sucks:

This study is from August, but I missed it. The researchers tracked three browsers (MSIE, Firefox, Opera) in 2004 and counted which days they were "known unsafe." Their definition of "known unsafe": a remotely exploitable security vulnerability had been publicly announced and no patch was yet available.

MSIE was 98% unsafe. There were only 7 days in 2004 without an unpatched publicly disclosed security hole.

Firefox was 15% unsafe. There were 56 days with an unpatched publicly disclosed security hole. 30 of those days were a Mac hole that only affected Mac users. Windows Firefox was 7% unsafe.

Opera was 17% unsafe: 65 days. That number is accidentally a little better than it should be, as two of the unpatched periods happened to overlap.

This underestimates the risk, because it doesn't count vulnerabilities known to the bad guys but not publicly disclosed (and it's foolish to think that such things don't exist). So the "98% unsafe" figure for MSIE is generous, and the situation might be even worse.
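The overlap caveat for Opera is just an interval-union problem: merge overlapping exposure windows before counting days, so a day covered by two unpatched holes counts once. A sketch with invented disclosure-to-patch windows:

```python
from datetime import date, timedelta

def unsafe_days(windows):
    """Count the union of days covered by (start, end) exposure windows.

    Each window is the inclusive span between public disclosure and the
    patch release. Overlapping or adjacent windows are merged so shared
    days are counted once.
    """
    merged = []
    for start, end in sorted(windows):
        if merged and start <= merged[-1][1] + timedelta(days=1):
            # Extend the previous window instead of double-counting days.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return sum((end - start).days + 1 for start, end in merged)

# Invented windows: the first two overlap, so Jan 20-31 counts once.
windows = [
    (date(2004, 1, 1), date(2004, 1, 31)),
    (date(2004, 1, 20), date(2004, 2, 10)),
    (date(2004, 6, 1), date(2004, 6, 5)),
]
print(unsafe_days(windows))  # → 46
```

Naively summing the three windows would give 58 days; merging first gives the honest 46.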


(Via Schneier on Security.)

Idiotic Article on TPM

Schneier on Security: Idiotic Article on TPM:

This is just an awful news story.

"TPM" stands for "Trusted Platform Module." It's a chip that may soon be in your computer that will try to enforce security: both your security, and the security of software and media companies against you. It's complicated, and it will prevent some attacks. But there are dangers. And lots of ways to hack it. (I've written about TPM here, and here when Microsoft called it Palladium. Ross Anderson has some good stuff here.)

In fact, with TPM, your bank wouldn’t even need to ask for your username and password -- it would know you simply by the identification on your machine.

Since when is "your computer" the same as "you"? And since when is identifying a computer the same as authenticating the user? And until we can eliminate bot networks and "owned" machines, there's no way to know who is controlling your computer.

Of course you could always “fool” the system by starting your computer with your unique PIN or fingerprint and then letting another person use it, but that’s a choice similar to giving someone else your credit card.

Right, letting someone use your computer is the same as letting someone use your credit card. Does he have any idea that there are shared computers that you can rent and use? Does he know any families that share computers? Does he ever have friends who visit him at home? There are lots of ways a PIN can be guessed or stolen.

Oh, I can't go on.

My guess is the reporter was fed the story by some PR hack, and never bothered to check out if it were true.

(Via Schneier on Security.)

Monday, December 19, 2005

The Military is Spying on Americans

Schneier on Security: The Military is Spying on Americans:

The Defense Department is collecting data on perfectly legal, peaceful, anti-war protesters.

The DOD database obtained by NBC News includes nearly four dozen anti-war meetings or protests, including some that have taken place far from any military installation, post or recruitment center. One "incident" included in the database is a large anti-war protest at Hollywood and Vine in Los Angeles last March that included effigies of President Bush and anti-war protest banners. Another incident mentions a planned protest against military recruiters last December in Boston and a planned protest last April at McDonald's National Salute to America's Heroes -- a military air and sea show in Fort Lauderdale, Fla.

The Fort Lauderdale protest was deemed not to be a credible threat and a column in the database concludes: "US group exercising constitutional rights." Two-hundred and forty-three other incidents in the database were discounted because they had no connection to the Department of Defense -- yet they all remained in the database.

The DOD has strict guidelines (.PDF link), adopted in December 1982, that limit the extent to which they can collect and retain information on U.S. citizens.

Still, the DOD database includes at least 20 references to U.S. citizens or U.S. persons. Other documents obtained by NBC News show that the Defense Department is clearly increasing its domestic monitoring activities. One DOD briefing document stamped “secret” concludes: "[W]e have noted increased communication and encouragement between protest groups using the [I]nternet," but no "significant connection" between incidents, such as “reoccurring instigators at protests” or "vehicle descriptions."

Personally, I am very worried about this increase in military activity inside our country. If anyone should be making sure protesters stay on the right side of the law, it's the police...not the military.

And it could get worse.

EDITED TO ADD (12/16): There's also this news :

Months after the Sept. 11 attacks, President Bush secretly authorized the National Security Agency to eavesdrop on Americans and others inside the United States to search for evidence of terrorist activity without the court-approved warrants ordinarily required for domestic spying, according to government officials.....

Mr. Bush's executive order allowing some warrantless eavesdropping on those inside the United States including American citizens, permanent legal residents, tourists and other foreigners is based on classified legal opinions that assert that the president has broad powers to order such searches, derived in part from the September 2001 Congressional resolution authorizing him to wage war on Al Qaeda and other terrorist groups, according to the officials familiar with the N.S.A. operation.


....officials familiar with it said the N.S.A. eavesdropped without warrants on up to 500 people in the United States at any given time. The list changes as some names are added and others dropped, so the number monitored in this country may have reached into the thousands over the past three years, several officials said. Overseas, about 5,000 to 7,000 people suspected of terrorist ties are monitored at one time, according to those officials.

This is a very long article, but worth reading. It is not overstatement to suggest that this may be the most significant violation of federal surveillance law in the post-Watergate era.

EDITED TO ADD (12/16): Good analysis from Political Animal. The reason Bush's executive order is a big deal is because it's against the law.

Here is the Foreign Intelligence Surveillance Act. Its Section 1809(a) makes it a criminal offense to "engage in electronic surveillance under color of law except as authorized by statute."

FISA does authorize surveillance without a warrant, but not on US citizens (with the possible exception of citizens speaking from property openly owned by a foreign power; e.g., an embassy.)

FISA also says that the Attorney General can authorize emergency surveillance without a warrant when there is no time to obtain one. But it requires that the Attorney General notify the judge of that authorization immediately, and that he (and yes, the law does say 'he') apply for a warrant "as soon as practicable, but not more than 72 hours after the Attorney General authorizes such surveillance."

It also says this:

"In the absence of a judicial order approving such electronic surveillance, the surveillance shall terminate when the information sought is obtained, when the application for the order is denied, or after the expiration of 72 hours from the time of authorization by the Attorney General, whichever is earliest. In the event that such application for approval is denied, or in any other case where the electronic surveillance is terminated and no order is issued approving the surveillance, no information obtained or evidence derived from such surveillance shall be received in evidence or otherwise disclosed in any trial, hearing, or other proceeding in or before any court, grand jury, department, office, agency, regulatory body, legislative committee, or other authority of the United States, a State, or political subdivision thereof".

Nothing in the New York Times report suggests that the wiretaps Bush authorized extended only for 72 hours, or that normal warrants were sought in each case within 72 hours after the wiretap began. On the contrary, no one would have needed a special program or presidential order if they had.

According to the Times, "the Bush administration views the operation as necessary so that the agency can move quickly to monitor communications that may disclose threats to the United States." But this is just wrong. As I noted above, the law specifically allows for warrantless surveillance in emergencies, when the government needs to start surveillance before it can get a warrant. It explains exactly what the government needs to do under those circumstances. It therefore provides the flexibility the administration claims it needed.

They had no need to go around the law. They could easily have obeyed it. They just didn't want to.

(Via Schneier on Security.)

Wednesday, December 14, 2005

Bill Will Keep New Drivers Off Phones

The Wisconsin Legislature is considering Assembly Bill 120, which would ban new drivers from using their cell phones while driving. In related news, AB 121 will ban parents from driving with children; AB 122 bans driving while listening to music; and AB 123 bans driving while not staring bug-eyed at the road.

Weakest Link Security

Schneier on Security: Weakest Link Security:

Funny story:

At the airport where this pilot fish works, security has gotten a lot more attention since 9/11. "All the security doors that connect the concourses to office spaces and alleyways for service personnel needed an immediate upgrade," says fish. "It seems that the use of a security badge was no longer adequate protection.

"So over the course of about a month, more than 50 doors were upgraded to require three-way protection. To open the door, a user needed to present a security badge (something you possess), a numeric code (something you know) and a biometric thumb scan (something you are).

"Present all three, and the door beeps and lets you in."

One by one, the doors are brought online. The technology works, and everything looks fine -- until fish decides to test the obvious.

After all, the average member of the public isn't likely to forge a security badge, guess a multidigit number and fake a thumb scan. "But what happens if you just turn the handle without any of the above?" asks fish. "Would it set off alarms or call security?

"It turns out that if you turn the handle, the door opens.

"Despite the addition of all that technology and security on every single door, nobody bothered to check that the doors were set to lock by default."

Remember, security is only as strong as the weakest link.
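In code terms, the airport doors were fail-open: the path with no credentials at all defaulted to "unlocked". A minimal default-deny sketch, where the factor checks are stubbed out as booleans and anything short of an explicit three-factor success, including a malfunction in the check itself, leaves the door locked:

```python
def authorized(badge_ok: bool, pin_ok: bool, thumb_ok: bool) -> bool:
    # Three factors: something you possess, something you know, something you are.
    return badge_ok and pin_ok and thumb_ok

def handle_turned(badge_ok=False, pin_ok=False, thumb_ok=False) -> bool:
    """Fail-secure version of the door: returns True only on full success.

    The airport doors implemented the opposite default -- turning the
    handle worked regardless of the three factors.
    """
    try:
        return authorized(badge_ok, pin_ok, thumb_ok)
    except Exception:
        return False  # any malfunction in the check leaves the door locked

assert handle_turned() is False                 # no credentials: stays locked
assert handle_turned(True, True, False) is False  # two of three: stays locked
assert handle_turned(True, True, True) is True  # all three factors: opens
```

The point of the try/except is the default itself: the secure state must be what happens when nothing else does.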

(Via Schneier on Security.)

Friday, December 09, 2005

Planet Perl: Leon Brocard: Open source zealots: ...

Planet Perl: Leon Brocard: Open source zealots:

This is something I've seen on other projects, but never experienced myself until now: open source zealots. These are people who will complain for months that if project Y were open source, then they would hack on it and improve it. To get them to stop whining, you open source the project and of course all the people who said they would contribute code do not. A month of whining and no code! As pointed out in the meeting yesterday, it's not a total loss: at least they've stopped whining ;-)

The fog effect in RealLife looks particularly good today - almost as good as in the latest Harry Potter film. Here's hoping that no dragons come swooping out of this fog...

ObPerl: Image::Imlib2 doesn't support blending two images together, so I had to use Image::Magick yesterday, erk!

(Via Planet Perl.)

Schneier on Security: E-Hijacking:

The article is a bit inane, but it talks about an interesting security problem. "E-hijacking" is the term used to describe the theft of goods in transit by altering the electronic paperwork:

He pointed to the supposed loss of 3.9-million banking records stored on computer backup tapes that were being shipped by UPS from New York-based Citigroup to an Experian credit bureau in Texas. “These tapes were not lost – they were stolen,” Spoonamore said. “Not only were they stolen, the theft occurred by altering the electronic manifest in transit so it would be delivered right to the thieves.” He added that UPS, Citigroup, and Experian spent four days blaming each other for losing the shipment before realizing it had actually been stolen.

Spoonamore, a veteran of the intelligence community, said in his analysis of this e-hijacking, upwards of 15 to 20 people needed to be involved to hack five different computer systems simultaneously to breach the electronic safeguards on the electronic manifest. The manifest was reset from “secure” to “standard” while in transit, so it could be delivered without the required three signatures, he said. Afterward the manifest was put back to “secure” and three signatures were uploaded into the system to appear as if proper procedures had been followed.

“What’s important to remember here is that there is no such thing as ‘security’ in the data world: all data systems can and will be breached,” Spoonamore said. “What you can have, however, is data custody so you know at all times who has it, if they are supposed to have it, and what they are doing with it. Custody is what begets data security.”

This is interesting. More and more, the physical movement of goods is secondary to the electronic movement of information. Oil being shipped across the Atlantic, for example, can change hands several times while it is in transit. I see a whole lot of new risks along these lines in the future.
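Spoonamore's point about custody, knowing at all times who has the data, can be made tamper-evident with a hash chain: each log entry commits to the hash of the previous one, so silently rewriting an old record (say, flipping a manifest from "secure" to "standard" and back) breaks every later link. A toy sketch; the holders and actions are illustrative, not from the article:

```python
import hashlib
import json

GENESIS = "0" * 64

def custody_entry(prev_hash, holder, action):
    """Build one custody record and its digest, chained to the previous hash."""
    record = {"prev": prev_hash, "holder": holder, "action": action}
    digest = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record, digest

def verify(chain):
    """True iff every record links to its predecessor and matches its digest."""
    prev = GENESIS
    for record, digest in chain:
        expected = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        if record["prev"] != prev or digest != expected:
            return False
        prev = digest
    return True

chain, h = [], GENESIS
for holder, action in [("shipper", "handoff"), ("carrier", "in transit"),
                       ("recipient", "received")]:
    record, h = custody_entry(h, holder, action)
    chain.append((record, h))

assert verify(chain)
chain[1][0]["action"] = "manifest: standard"  # tamper with the middle record
assert not verify(chain)
```

This doesn't prevent theft, which is exactly Spoonamore's claim; it only guarantees that an altered manifest no longer verifies, so the four days of finger-pointing become unnecessary.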

(Via Schneier on Security.)

Friday, December 02, 2005

EFF: Breaking News: Diebold Attempts to Evade Election Transparency Laws:

EFF Goes to Court to Force E-voting Company to Comply With Strict New North Carolina Law

Raleigh, North Carolina - The Electronic Frontier Foundation (EFF) is going to court in North Carolina to prevent Diebold Election Systems, Inc. from evading North Carolina law.

In a last-minute filing, e-voting equipment maker Diebold asked a North Carolina court to exempt it from tough new election requirements designed to ensure transparency in the state's elections. Diebold obtained an extraordinarily broad order, allowing it to avoid placing its source code in escrow with the state and identifying programmers who contributed to the code.

On behalf of North Carolina voter and election integrity advocate Joyce McCloy, EFF asked the court to force Diebold and every other North Carolina equipment vendor to comply with the law's requirements. A hearing on EFF's motion is set for Monday, November 28.

"The new law was passed for a reason: to ensure that the voters of North Carolina have confidence in the integrity and accuracy of their elections," said EFF Staff Attorney Matt Zimmerman. "In stark contrast to every other equipment vendor that placed a bid with the state, Diebold went to court complaining that it simply couldn't comply with the law. Diebold should spend its efforts developing a system that voters can trust, not asking a court to let it bypass legal requirements aimed at ensuring voting integrity."

On November 4, the day that voting equipment bids to the state were due, Diebold obtained a temporary restraining order from a North Carolina superior court, exempting it from criminal and civil liability that could have resulted from its bid. EFF, with the assistance from the North Carolina law firm of Twiggs, Beskind, Strickland & Rabenau, P.A., intervened in the case on behalf of McCloy, the founder of the North Carolina Coalition for Verified Voting. In a brief filed Wednesday, EFF argued that Diebold had failed to show why it was unable to meet various new election law provisions requiring source code escrow and identification of programmers. North Carolina experienced one of the most serious malfunctions of e-voting systems in the 2004 presidential election when over 4,500 ballots were lost in a voting system provided by Diebold competitor UniLect Corp. The new transparency and integrity provisions of the North Carolina election code were passed in response to this and other documented malfunctions that have occurred across the country.

The North Carolina Board of Elections is scheduled to announce winning voting equipment vendors on December 1, 2005.

For the brief filed in the case:


Matt Zimmerman
Staff Attorney
Electronic Frontier Foundation

(Via EFF: Breaking News.)

Schneier on Security: FBI to Approve All Software?:

Sounds implausible, I know. But how else do you explain this FCC ruling (from September -- I missed it until now):

The Federal Communications Commission thinks you have the right to use software on your computer only if the FBI approves.

No, really. In an obscure "policy" document released around 9 p.m. ET last Friday, the FCC announced this remarkable decision.

According to the three-page document, to preserve the openness that characterizes today's Internet, "consumers are entitled to run applications and use services of their choice, subject to the needs of law enforcement." Read the last seven words again.

The FCC didn't offer much in the way of clarification. But the clearest reading of the pronouncement is that some unelected bureaucrats at the commission have decreed that Americans don't have the right to use software such as Skype or PGPfone if it doesn't support mandatory backdoors for wiretapping. (That interpretation was confirmed by an FCC spokesman on Monday, who asked not to be identified by name. Also, the announcement came at the same time as the FCC posted its wiretapping rules for Internet telephony.)

(Via Schneier on Security.)

Schneier on Security: The Human Side of Security:

A funny -- and all too true -- addition to the SANS Top 20:

H1. Humans

H1.1 Description:

The species Homo sapiens supports a wide range of intellectual capabilities such as speech, emotion, rational thinking etc. Many of these components are enabled by default - though to differing degrees of success. These components are implemented by the cerebral cortex, and are under the control of the identity engine which runs as me.exe. Vulnerabilities in these components are the most common avenues for exploitation.

(Via Schneier on Security.)

Wednesday, November 30, 2005

Schneier on Security: A Science-Fiction Movie-Plot Threat:

This has got to be the most bizarre movie-plot threat to date: alien viruses downloaded via the SETI project:

In his [Richard Carrigan, a particle physicist at the US Fermi National Accelerator Laboratory in Illinois] report, entitled "Do potential Seti signals need to be decontaminated?", he suggests the Seti scientists may be too blase about finding a signal. "In science fiction, all the aliens are bad, but in the world of science, they are all good and simply want to get in touch." His main concern is that, intentionally or otherwise, an extra-terrestrial signal picked up by the Seti team could cause widespread damage to computers if released on to the internet without being checked.

Here's his website.

Although you have to admit, it could make a cool movie.

(Via Schneier on Security.)

Schneier on Security: European Terrorism Law and Music Downloaders:

The European music industry is lobbying the European Parliament, demanding things that the RIAA can only dream about:

The music and film industries are demanding that the European parliament extends the scope of proposed anti-terror laws to help them prosecute illegal downloaders. In an open letter to MEPs, companies including Sony BMG, Disney and EMI have asked to be given access to communications data - records of phone calls, emails and internet surfing - in order to take legal action against pirates and filesharers. Current proposals restrict use of such information to cases of terrorism and organised crime.

Our society definitely needs a serious conversation about the fundamental freedoms we are sacrificing in a misguided attempt to keep us safe from terrorism. It feels both surreal and sickening to have to defend our fundamental freedoms against those who want to stop people from sharing music. How is it possible that we can contemplate so much damage to our society simply to protect the business model of a handful of companies?

(Via Schneier on Security.)

Schneier on Security: Vote Someone Else's Shares:

Do you own shares of a Janus mutual fund? Can you vote your shares through a website? If so, you can vote the shares of others.

If you have a valid proxy number, you can add 1300 to the number to get another valid proxy number. Once entered, you get another person's name, address, and account number at Janus! You could then vote their shares too.

It's easy.

Probably illegal.

Definitely a great resource for identity thieves.

Certainly pathetic.
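The underlying flaw is a sequential identifier doing double duty as a credential: any valid proxy number reveals its neighbors. A sketch of the contrast, using Python's `secrets` module for the fix (the function names are mine, not Janus's):

```python
import secrets

def sequential_proxy(last_issued):
    # Broken scheme: valid numbers form an arithmetic sequence,
    # so anyone holding one can enumerate other live accounts.
    return last_issued + 1300

def random_proxy():
    # Unguessable scheme: roughly 128 bits of entropy per token,
    # so enumeration is computationally hopeless.
    return secrets.token_urlsafe(16)

assert sequential_proxy(4242) == 5542
assert random_proxy() != random_proxy()
```

Even an unguessable token is only half a fix: the site should still authenticate the holder before displaying anyone's name, address, or account number.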

(Via Schneier on Security.)

Tuesday, November 22, 2005

Schneier on Security: Surveillance and Oversight:

Christmas 2003, Las Vegas. Intelligence hinted at a terrorist attack on New Year's Eve. In the absence of any real evidence, the FBI tried to compile a real-time database of everyone who was visiting the city. It collected customer data from airlines, hotels, casinos, rental car companies, even storage locker rental companies. All this information went into a massive database -- probably close to a million people overall -- that the FBI's computers analyzed, looking for links to known terrorists. Of course, no terrorist attack occurred and no plot was discovered: The intelligence was wrong.

A typical American citizen spending the holidays in Vegas might be surprised to learn that the FBI collected his personal data, but this kind of thing is increasingly common. Since 9/11, the FBI has been collecting all sorts of personal information on ordinary Americans, and it shows no signs of letting up.

The FBI has two basic tools for gathering information on large groups of Americans. Both were created in the 1970s to gather information solely on foreign terrorists and spies. Both were greatly expanded by the USA Patriot Act and other laws, and are now routinely used against ordinary, law-abiding Americans who have no connection to terrorism. Together, they represent an enormous increase in police power in the United States.

The first are FISA warrants (sometimes called Section 215 warrants, after the section of the Patriot Act that expanded their scope). These are issued in secret, by a secret court. The second are national security letters, less well known but much more powerful, and which FBI field supervisors can issue all by themselves. The exact numbers are secret, but a recent Washington Post article estimated that 30,000 letters each year demand telephone records, banking data, customer data, library records, and so on.

In both cases, the recipients of these orders are prohibited by law from disclosing the fact that they received them. And two years ago, Attorney General John Ashcroft rescinded a 1995 guideline that this information be destroyed if it is not relevant to whatever investigation it was collected for. Now, it can be saved indefinitely, and disseminated freely.

September 2005, Rotterdam. The police had already identified some of the 250 suspects in a soccer riot from the previous April, but most were unidentified but captured on video. In an effort to help, they sent text messages to 17,000 phones known to be in the vicinity of the riots, asking that anyone with information contact the police. The result was more evidence, and more arrests.

The differences between the Rotterdam and Las Vegas incidents are instructive. The Rotterdam police needed specific data for a specific purpose. Its members worked with federal justice officials to ensure that they complied with the country's strict privacy laws. They obtained the phone numbers without any names attached, and deleted them immediately after sending the single text message. And their actions were public, widely reported in the press.

On the other hand, the FBI has no judicial oversight. With only a vague hint that a Las Vegas attack might occur, the bureau vacuumed up an enormous amount of information. First its members tried asking for the data; then they turned to national security letters and, in some cases, subpoenas. There was no requirement to delete the data, and there is every reason to believe that the FBI still has it all. And the bureau worked in secret; the only reason we know this happened is that the operation leaked.

These differences illustrate four principles that should guide police use of personal information. The first is oversight: in order to obtain personal information, the police should be required to show probable cause and convince a judge to issue a warrant for the specific information needed. Second, minimization: the police should get only the specific information they need, and no more. Nor should they be allowed to collect large blocks of information in order to go on "fishing expeditions," looking for suspicious behavior. The third is transparency: the public should know, if not immediately then eventually, what information the police are getting and how it is being used. And fourth, destruction: any data the police obtain should be destroyed immediately after its court-authorized purpose is achieved. The police should not be able to hold on to it, just in case it might become useful at some future date.

This isn't about our ability to combat terrorism; it's about police power. Traditional law already gives police enormous power to peer into the personal lives of people, to use new crime-fighting technologies, and to correlate that information. But unfettered police power quickly resembles a police state, and checks on that power make us all safer.

As more of our lives become digital, we leave an ever-widening audit trail in our wake. This information has enormous social value -- not just for national security and law enforcement, but for purposes as mundane as using cell-phone data to track road congestion, and as important as using medical data to track the spread of diseases. Our challenge is to make this information available when and where it needs to be, but also to protect the principles of privacy and liberty our country is built on.

This essay originally appeared in the Minneapolis Star-Tribune.

(Via Schneier on Security.)

EFF: Breaking News: EFF Files Class Action Lawsuit Against Sony BMG:

Company Should Repair Damage to Customers Caused by CD Software

The Electronic Frontier Foundation (EFF), along with two leading national class action law firms, today filed a lawsuit against Sony BMG, demanding that the company repair the damage done by the First4Internet XCP and SunnComm MediaMax software it included on over 24 million music CDs.

EFF is pleased that Sony BMG has taken steps in acknowledging the security risks caused by the XCP CDs, including a recall of the infected discs. However, these measures still fall short of what the company needs to do to fix the problems caused to customers by XCP, and Sony BMG has failed entirely to respond to concerns about MediaMax, which affects over 20 million CDs -- ten times as many as the XCP software.

"Sony BMG is to be commended for its acknowledgment of the serious security problems caused by its XCP software, but it needs to go further to regain the public's trust," said Corynne McSherry, EFF Staff Attorney. "It is unconscionable for Sony BMG to refuse to respond to the privacy and other problems created by the over 20 million CDs containing the SunnComm software."

The suit, to be filed in Los Angeles County Superior court, alleges that the XCP and SunnComm technologies have been installed on the computers of millions of unsuspecting music customers when they used their CDs on machines running the Windows operating system. Researchers have shown that the XCP technology was designed to have many of the qualities of a "rootkit." It was written with the intent of concealing its presence and operation from the owner of the computer, and once installed, it degrades the performance of the machine, opens new security vulnerabilities, and installs updates through an Internet connection to Sony BMG's servers. The nature of a rootkit makes it extremely difficult to remove, often leaving reformatting the computer's hard drive as the only solution. When Sony BMG offered a program to uninstall the dangerous XCP software, researchers found that the installer itself opened even more security vulnerabilities in users' machines. Sony BMG has still refused to use its marketing prowess to widely publicize its recall program to reach the over 2 million XCP-infected customers, has failed to compensate users whose computers were affected and has not eliminated the outrageous terms found in its End User Licensing Agreement (EULA).

The MediaMax software installed on over 20 million CDs has different, but similarly troubling problems. It installs files on the users' computers even if they click "no" on the EULA, and it does not include a way to fully uninstall the program. The software transmits data about users to SunnComm through an Internet connection whenever purchasers listen to CDs, allowing the company to track listening habits -- even though the EULA states that the software will not be used to collect personal information and SunnComm's website says "no information is ever collected about you or your computer." If users repeatedly requested an uninstaller for the MediaMax software, they were eventually provided one, but they first had to provide more personally identifying information. Worse, security researchers recently determined that SunnComm's uninstaller creates significant security risks for users, as the XCP uninstaller did.

"Music fans shouldn't have to install potentially dangerous, privacy intrusive software on their computers just to listen to the music they've legitimately purchased," said EFF Legal Director Cindy Cohn. "Regular CDs have a proven track record -- no one has been exposed to viruses or spyware by playing a regular audio CD on a computer. Why should legitimate customers be guinea pigs for Sony BMG's experiments?"

"Consumers have a right to listen to the music they have purchased in private, without record companies spying on their listening habits with surreptitiously-installed programs," added EFF Staff Attorney Kurt Opsahl, "Between the privacy invasions and computer security issues inherent in these technologies, companies should consider whether the damage done to consumer trust and their own public image is worth its scant protection."

Both the XCP and MediaMax CDs include outrageous, anti-consumer terms in their "clickwrap" EULAs. For example, if purchasers declare personal bankruptcy, the EULA requires them to delete any digital copies on their computers or portable music players. The same is true if a customer's house gets burglarized and his CDs stolen, since the EULA allows purchasers to keep copies only so long as they retain physical possession of the original CD. EFF is demanding that Sony BMG remove these unconscionable terms from its EULAs.

The law firms of Green Welling, LLP, and Lerach, Coughlin, Stoia, Geller, Rudman and Robbins, LLP, joined EFF in the case. Sony BMG is also facing at least six other class action lawsuits nationwide and an action by the Texas Attorney General. EFF looks forward to representing the voice of digital music fans in the resolution of these disputes between Sony BMG and consumers.

For more on the Sony BMG litigation, see:

EFF's open letter to Sony:

(Via EFF: Breaking News.)

Tuesday, November 15, 2005

Schneier on Security: Still More on Sony's DRM Rootkit:

This story is just getting weirder and weirder (previous posts here and here).

Sony already said that they're stopping production of CDs with the embedded rootkit. Now they're saying that they will pull the infected disks from stores and offer free exchanges to people who have inadvertently bought them.

Sony BMG Music Entertainment said Monday it will pull some of its most popular CDs from stores in response to backlash over copy-protection software on the discs.

Sony also said it will offer exchanges for consumers who purchased the discs, which contain hidden files that leave them vulnerable to computer viruses when played on a PC.

That's good news, but there's more bad news. The patch Sony is distributing to remove the rootkit opens a huge security hole:

The root of the problem is a serious design flaw in Sony’s web-based uninstaller. When you first fill out Sony’s form to request a copy of the uninstaller, the request form downloads and installs a program – an ActiveX control created by the DRM vendor, First4Internet – called CodeSupport. CodeSupport remains on your system after you leave Sony’s site, and it is marked as safe for scripting, so any web page can ask CodeSupport to do things. One thing CodeSupport can be told to do is download and install code from an Internet site. Unfortunately, CodeSupport doesn’t verify that the downloaded code actually came from Sony or First4Internet. This means any web page can make CodeSupport download and install code from any URL without asking the user’s permission.
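The flaw described above comes down to a missing authentication step: the control will install code from any URL without checking who produced it. A minimal sketch of the difference, with an HMAC standing in for the Authenticode-style signature verification a real control should perform (the key, function names, and payloads are all invented for illustration):

```python
import hashlib
import hmac

# Hypothetical shared secret standing in for a vendor's signing key.
VENDOR_KEY = b"first4internet-signing-key"

def sign(payload: bytes) -> bytes:
    """Vendor-side: authenticate a payload before publishing it."""
    return hmac.new(VENDOR_KEY, payload, hashlib.sha256).digest()

def unsafe_update(payload: bytes, signature: bytes) -> bool:
    """What CodeSupport reportedly did: install anything, from anywhere."""
    return True

def safe_update(payload: bytes, signature: bytes) -> bool:
    """Install only if the payload verifies against the vendor's key."""
    return hmac.compare_digest(sign(payload), signature)

vendor_payload = b"legitimate uninstaller v2"
attacker_payload = b"malware served by an arbitrary web page"
```

Note that real code-signing uses public-key signatures rather than a shared secret; the HMAC here only illustrates that the check must bind the payload to its origin.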

Even more interesting is that there may be at least half a million infected computers:

Using statistical sampling methods and a secret feature of XCP that notifies Sony when its CDs are placed in a computer, [security researcher Dan] Kaminsky was able to trace evidence of infections in a sample that points to the probable existence of at least one compromised machine in roughly 568,200 networks worldwide. This does not reflect a tally of actual infections, however, and the real number could be much higher.

I say "may be at least" because the data doesn't smell right to me. Look at the list of infected titles, and estimate what percentage of CD buyers will play them on their computers; does that seem like half a million sales to you? It doesn't to me, although I readily admit that I don't know the music business. Their methodology seems sound, though:

Kaminsky discovered that each of these requests leaves a trace that he could follow and track through the internet's domain name system, or DNS. While this couldn't directly give him the number of computers compromised by Sony, it provided him the number and location (both on the net and in the physical world) of networks that contained compromised computers. That is a number guaranteed to be smaller than the total of machines running XCP.

His research technique is called DNS cache snooping, a method of nondestructively examining patterns of DNS use. Luis Grangeia invented the technique, and Kaminsky became famous in the security community for refining it.

Kaminsky asked more than 3 million DNS servers across the net whether they knew the addresses associated with the Sony rootkit. He uses a "non-recursive DNS query" that allows him to peek into a server's cache and find out if anyone else has asked that particular machine for those addresses recently.

If the DNS server said yes, it had a cached copy of the address, which means that at least one of its client computers had used it to look up Sony's digital-rights-management site. If the DNS server said no, then Kaminsky knew for sure that no Sony-compromised machines existed behind it.

The results have surprised Kaminsky himself: 568,200 DNS servers knew about the Sony addresses. With no other reason for people to visit them, that points to one or more computers behind those DNS servers that are Sony-compromised. That's one in six DNS servers, across a statistical sampling of a third of the 9 million DNS servers Kaminsky estimates are on the net.
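Kaminsky's probe hinges on clearing the recursion-desired (RD) bit, so a server answers only from its cache instead of resolving the name on the querier's behalf. A minimal sketch of such a packet using only the standard library (the transaction ID and example.com are placeholders; Kaminsky probed the Sony-related hostnames):

```python
import struct

def build_query(qname: str, recursion_desired: bool) -> bytes:
    """Build a minimal DNS A-record query packet (RFC 1035 layout)."""
    header = struct.pack(
        ">HHHHHH",
        0x1234,                                    # transaction ID (placeholder)
        0x0100 if recursion_desired else 0x0000,   # flags: RD bit set or cleared
        1, 0, 0, 0,                                # QDCOUNT=1, AN/NS/AR counts=0
    )
    question = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in qname.split(".")
    ) + b"\x00"
    question += struct.pack(">HH", 1, 1)           # QTYPE=A, QCLASS=IN
    return header + question

# Cache-snooping probe: RD cleared, so the server never recurses for us.
probe = build_query("example.com", recursion_desired=False)
# Ordinary stub-resolver query for comparison: RD set.
normal = build_query("example.com", recursion_desired=True)
```

A cached answer to the probe means some client behind that server looked the name up recently; a cache miss means none did, which is exactly the yes/no signal Kaminsky counted.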

In any case, Sony's rapid fall from grace is a great example of the power of blogs; it's been fifteen days since Mark Russinovich first posted about the rootkit. In that time the news spread like a firestorm, first through the blogs, then to the tech media, and then into the mainstream media.

(Via Schneier on Security.)

Schneier on Security: The Security of Tin Foil Hats:


Abstract: Among a fringe community of paranoids, aluminum helmets serve as the protective measure of choice against invasive radio signals. We investigate the efficacy of three aluminum helmet designs on a sample group of four individuals. Using a $250,000 network analyser, we find that although on average all helmets attenuate invasive radio frequencies in either direction (either emanating from an outside source, or emanating from the cranium of the subject), certain frequencies are in fact greatly amplified. These amplified frequencies coincide with radio bands reserved for government use according to the Federal Communication Commission (FCC). Statistical evidence suggests the use of helmets may in fact enhance the government's invasive abilities. We theorize that the government may in fact have started the helmet craze for this reason.

And a rebuttal:

A recent MIT study [1] calls into question the effectiveness of Aluminum Foil Deflector Beanies. However, there are serious flaws in this study, not the least of which is a complete mischaracterization of the process of psychotronic mind control. I theorize that the study is, in fact, NWO propaganda designed to spread FUD against deflector beanie technology, and aluminum shielding in general, in order to disembeanie paranoids, leaving them open to mind control.

(Via Schneier on Security.)

Schneier on Security: More on Sony's DRM Rootkit:

Here's the story, edited to add lots of news.

There will be lawsuits. (Here's the first.) Police are getting involved. There's a Trojan that uses Sony's rootkit to hide. And today Sony temporarily halted production of CDs protected with this technology.

Sony really overreached this time. I hope they get slapped down hard for it.

EDITED TO ADD (13 Nov): More information on uninstalling the rootkit. And Microsoft will update its security tools to detect and remove the rootkit. That makes a lot of sense. If Windows crashes because of this -- and others of this ilk -- Microsoft will be blamed.

(Via Schneier on Security.)

Wednesday, November 09, 2005

Mark's Sysinternals Blog: Sony: You don’t reeeeaaaally want to uninstall, do you?:

A few days after I posted my first blog entry on Sony’s rootkit, Sony and Rootkits: Digital Rights Management Gone Too Far, Sony announced to the press that it was making available a decloaking patch and uninstall capability through its support site. Note that I said press and not customer. The uninstall process Sony has put in place is on par with mainstream spyware and adware and is the topic of this blog post.

As I’ve stated several times already, Sony’s rootkit hides the Digital Rights Management (DRM) files from users that have it installed, so users not monitoring the developments in this story are unaware of the scope and intrusiveness of the DRM. The End User License Agreement (EULA) does not provide any details on the software or its cloaking. Further, the software installation does not include support information and lacks a registration option, making it impossible for users to contact Sony and Sony to contact its users.

What if a user somehow discovers the hidden files, makes the connection between files and the Sony CD that installed them, and visits Sony BMG’s site in search of uninstall or support information? Or what about the unsuspecting Sony DRM user that happens to visit the Sony BMG site to look at their other offerings? Will these customers learn about the patch and uninstaller?

See for yourself. Visit Sony BMG’s site and search for the support site Sony has made available to the press. There’s no information on this story anywhere on the front page, no support link, and the FAQ only contains information about Sony’s merger with BMG. The fact that Sony’s announcement was directed at the press and that they’ve made no effort to make contact with their customers makes the patch and uninstall look solely like a public relations gesture for the media.

Sony even gives those users like me that are aware of the “uninstaller” several hurdles to jump over. First you have to go to Sony’s support site, guess that the uninstall information is in the FAQ, click on the uninstall link and then fill out a form with your email address and purchasing information, possibly adding yourself to Sony’s marketing lists in the process.

Then, after you submit the information the site takes you to a page that notifies you that you’ll be receiving an email with a “Case ID”. A few minutes later you receive that email, which directs you to install the patch and then visit another page if you still really want to uninstall. That page requires you to install an ActiveX control, CodeSupport.Ocx, that’s signed by First 4 Internet, enter your case ID and fill in the reason for your request. Then you receive an email within a few minutes that informs you that a customer service representative will email you uninstall instructions within one business day.

When you eventually receive the uninstall email from Sony BMG support, it comes with a cryptic link (I’ve modified the link so it doesn’t work) to your personalized uninstall page. Interestingly, the email has a confidentiality notice, which implies to me that Sony has something to hide, and it informs you that the uninstaller will expire in one week.

If you visit the uninstall page from the computer where you filled out the first uninstall form then the DRM software is deleted from your system. However, if you visit it from another computer the page requires you to install the same CodeSupport ActiveX control as the uninstall-request page, but then even if the computer has the DRM software installed you get this error:

Besides the obvious question of why there’s not a universal uninstall link, the error also begs the question of how the Sony site knows that the uninstall link is for a different computer. For that matter, why do you have to install an ActiveX control just to fill out a web form, and why does that form have to be filled out “using the computer where the software is currently installed”? The email, web page and ActiveX control offer no hints.

I of course decided to investigate. A network trace of the ActiveX control’s communication with the Sony site using Ethereal reveals that the control sends Sony an encrypted block of data:

A Regmon trace of the ActiveX control’s activity when you press the submit button on the Web page reveals that the encrypted data is actually a signature that the control derives from the hardware configuration of your computer:

The uninstall link Sony sends you has your case ID encrypted in the address and when you visit the uninstall page the ActiveX control sends the hardware signature to Sony’s site. If the signature doesn’t match the one it stored earlier with your Case ID when you made the second uninstall request the site informs you that there’s a case ID mismatch.
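Speculatively, the server side of that check could look like the sketch below: hash a canonical list of hardware attributes into a signature, store it with the case ID when the request is filed, and compare at uninstall time. Every name, attribute, and the hash choice here is hypothetical; the real control's inputs and encryption are unknown:

```python
import hashlib

def hardware_signature(attrs: dict) -> str:
    """Derive a stable signature from hardware attributes.
    (Illustrative only; the actual ActiveX control's method is not public.)"""
    canonical = "|".join(f"{key}={attrs[key]}" for key in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Server side: remember the signature submitted with each case ID.
case_db = {}

def file_uninstall_request(case_id: str, attrs: dict) -> None:
    case_db[case_id] = hardware_signature(attrs)

def check_uninstall(case_id: str, attrs: dict) -> bool:
    """Allow uninstall only from the machine that filed the request."""
    return case_db.get(case_id) == hardware_signature(attrs)

machine_a = {"cpu": "x86 Family 6", "disk": "WDC-123", "mac": "00:0c:29:aa:bb:cc"}
machine_b = {"cpu": "x86 Family 6", "disk": "ST-987", "mac": "00:0c:29:dd:ee:ff"}
file_uninstall_request("CASE-42", machine_a)
```

Visiting the uninstall link from machine_b would then fail the comparison and produce exactly the "case ID mismatch" behavior described above.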

While I’ve answered the question of how the uninstaller knows if the uninstall link is for your computer, I can’t definitively answer questions like:

  1. Why isn’t Sony publicizing the uninstall link on their site in any way?
  2. Why do you have to tell Sony twice that you want to uninstall?
  3. Why is the email with the uninstall link labeled confidential?
  4. Why does Sony generate a unique uninstall link for each computer?
Sony has left us to speculate, but under the circumstances the answer to all these questions seems obvious: Sony doesn’t want customers to know that there’s DRM software installed on their computers and doesn’t want them to uninstall it if they somehow discover it. Without exaggeration I can say that I’ve analyzed virulent forms of spyware/adware that provide more straightforward means of uninstall.

For those readers that are coming up to speed with the story, here’s a summary of important developments so far:

The DRM software Sony has been shipping on many CDs since April is cloaked with rootkit technology:
  • Sony denies that the rootkit poses a security or reliability threat despite the obvious risks of both
  • Sony claims that users don’t care about rootkits because they don’t know what a rootkit is
  • The installation provides no way to safely uninstall the software
  • Without obtaining consent from the user Sony’s player informs Sony every time it plays a “protected” CD
Sony has told the press that they’ve made a decloaking patch and uninstaller available to customers, however this still leaves the following problems:

  • There is no way for customers to find the patch from Sony BMG’s main web page
  • The patch decloaks in an unsafe manner that can crash Windows, despite my warning to the First 4 Internet developers
  • Access to the uninstaller is gated by two forms and an ActiveX control
  • The uninstaller is locked to a single computer, preventing deployment in a corporation
Consumers and antivirus companies are responding:

  • F-Secure independently identified the rootkit and provides information on its site
  • Computer Associates has labeled the Sony software “spyware”
  • A law firm has filed a class action lawsuit on behalf of California consumers against Sony
  • ALCEI-EFI, an Italian digital-rights advocacy group, has formally asked the Italian government to investigate Sony for possible Italian law violations

(Via Mark's Sysinternals Blog.)

Tuesday, November 08, 2005

Mark's Sysinternals Blog: Sony’s Rootkit: First 4 Internet Responds:

First 4 Internet, the company that implements Sony’s Digital Rights Management (DRM) software that includes a rootkit, has responded to my last post, More on Sony: Dangerous Decloaking Patch, EULAs and Phoning Home. They rebut four of the points I raise in the post. Their first statement relates to my assertion that Sony’s player contacts Sony’s web site each time it runs and sends the site an ID associated with the CD the user is playing:

The player has a standard rotating banner that connects the user to additional content (e.g. provides a link to the artist web site). The player simply looks online to see if another banner is available for rotation. The communication is one-way in that a banner is simply retrieved from the server if available. No information is ever fed back or collected about the consumer or their activities.

I speculated that the player sends Sony’s web site a CD identifier as part of a check to see if new song lyrics or artwork was available, which they essentially confirm. Their claim that the communication is “one way” from Sony’s web site is false, however, since Sony can make a record of each time their player is used to play a CD, which CD is played, and what computer is playing the CD. If they’ve configured standard Web server logging then they are doing that. As I stated earlier, I doubt Sony is using this information to track user behavior, but the information allows them to do so. In any case, First 4 Internet cannot claim what Sony is or is not doing with the information since they do not control those servers, and the First 4 Internet response fails to address the fact that the End User License Agreement (EULA) and Sony executives either make no mention of the “phone home” behavior or explicitly deny it.
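The point about "one-way" communication is worth making concrete: even with no special tracking code, an ordinary access log records a client address and a CD identifier on every banner check. This sketch parses a hypothetical log line; the URL path, parameter name, and album ID are all invented:

```python
import re

# Hypothetical combined-format access-log line: the banner request alone
# ties together the client's IP, a CD identifier, and a timestamp.
log_line = ('10.0.0.7 - - [09/Nov/2005:21:13:02 -0500] '
            '"GET /banner?album_id=XCP-0451 HTTP/1.1" 200 512')

pattern = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<when>[^\]]+)\] '
    r'"GET /banner\?album_id=(?P<cd>\S+) HTTP'
)

match = pattern.match(log_line)
# (ip, cd, timestamp): who played which disc, and when.
record = (match.group("ip"), match.group("cd"), match.group("when"))
```

Nothing "fed back" by the player is needed; standard server-side logging of the retrieval request is enough to build a listening history per IP address.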

Another point that I made in the post is that the decloaking patch that Sony has made available weighs in at a relatively large 3.5 MB because it not only removes the rootkit, it also replaces most of the DRM files with updated versions. First 4 Internet responded with this:

In addition to removing the cloaking, Service Pack 2 includes all fixes from the earlier Service Pack 1 update. In order to ensure a secure installation, Service Pack 2 includes the newest version of all DRM components, hence the large file size for the patch. We have updated the language on our web site to be clearer on this point.

It’s not clear to me what they mean by “a secure installation”, but like most of the disclosure in this story, they’ve acknowledged the updating nature of the patch only after someone else has disclosed it first. What’s also lost in their response is that Sony DRM users not following this story as it develops have no way of knowing that there’s a patch available or that they even have software installed that requires a patch.

Further, Sony’s patch is dangerous because the way that it removes the cloak could crash Windows. I discussed the flaw in the patch’s decloaking method in the first post and again in the last one (I also provide a simple way for users to remove the cloak safely), yet First 4 Internet refuses to recognize it. They contest my claim in their comment:

This is pure conjecture. F4I is using standard Windows commands (net stop) to stop their driver. Nothing more.

While the probability of a crash is relatively small, it’s not “pure conjecture”, but fundamental to multithreaded programming concepts. Anyone who writes Windows device driver code must have a firm grasp of these concepts or they can easily introduce bugs and security holes into Windows. Here’s one of many scenarios that will lead to a crash when the patch decloaks Sony’s rootkit:

  1. Thread A invokes one of the functions that Aries.sys, the Sony rootkit driver developed by First 4 Internet, has redirected
  2. Thread A reads the address of the redirected function from the system service table, which points at the rootkit function in Aries.sys
  3. Thread A executes the first few instructions of the Aries.sys function, which is enough to enter the driver, but not enough to execute the Aries.sys code that attempts to track threads running within it
  4. Thread A is context swapped off the CPU by the Windows scheduler
  5. The scheduler gives thread B the CPU, which executes the patch’s “unload driver” command, unloading the Aries.sys driver from memory
  6. The scheduler runs thread A again, which executes memory that previously held the contents of Aries.sys, but is now invalid or holds other code or data
  7. Windows detects thread A’s illegal execution and crashes the system with a blue screen
First 4 Internet’s failure to imagine this control flow is consistent with their general failure to understand Windows device driver programming.
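The seven-step interleaving above can be simulated deterministically. In this sketch, "kernel memory" is a dictionary mapping addresses to driver code, "thread A" is a generator paused at the context switch, and the unload happens between its steps. It is an analogy for the race, not a model of Windows internals:

```python
# "Kernel memory": addresses mapped to loaded driver code. 0x8000 stands
# in for the Aries.sys function that the rootkit patched into the table.
kernel_memory = {0x8000: lambda arg: "filtered:" + arg}
service_table = {"NtQueryDirectoryFile": 0x8000}

def call_through_table(name, arg):
    """Thread A: a system call dispatched through the hooked table."""
    addr = service_table[name]        # steps 1-3: read the redirected pointer
    yield                             # step 4: context-switched off the CPU
    code = kernel_memory.get(addr)    # step 6: resume at the stale address
    if code is None:
        raise RuntimeError("BSOD: executing unloaded driver memory")
    yield code(arg)

def unload_driver():
    """Thread B: what the patch's unload command does (step 5)."""
    del kernel_memory[0x8000]         # driver's pages are freed

thread_a = call_through_table("NtQueryDirectoryFile", "C:\\")
next(thread_a)       # A runs up to the context switch
unload_driver()      # B unloads Aries.sys while A is paused
try:
    next(thread_a)   # A resumes into freed memory (step 7)
    crashed = False
except RuntimeError:
    crashed = True
```

In the simulation the window is forced open on every run; on real hardware the window is narrow, which is why the crash is improbable per decloak but guaranteed to hit some users at scale.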

As further evidence of this, I’ve performed further testing of the Aries.sys driver using a program I wrote, NTCrash2, and found that Aries.sys fails to perform basic checks on the data passed to it by applications. NTCrash2 passes randomly-generated invalid data to Windows APIs and on a stock Windows system simply receives error codes from the APIs. However, when NTCrash2 runs on a system that has the Sony rootkit installed Windows crashes. Here’s an example Windows blue screen that identifies Aries.sys as the cause of a crash that occurred while NTCrash2 ran:

Besides demonstrating the ineptitude of the First 4 Internet programmers, this flaw highlights my message that rootkits create reliability risks in addition to security risks. Because the software package that installed the rootkit is hidden when Windows is running (in this case Sony’s DRM software), and even if exposed not clearly identified, if an application triggers one of Aries.sys’s bugs a user would have no way of associating the driver responsible for the resulting crash with any software package they have installed on their system. The user would therefore be unable to conclusively diagnose the cause of the crash, check whether they have the most recent version of the driver, or uninstall the driver.
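The NTCrash2 approach, feeding randomly generated garbage to an interface and checking whether it fails cleanly, is easy to illustrate in miniature. Both parsers below are hypothetical; the "sloppy" one trusts a caller-supplied length field the way a driver that skips parameter validation does:

```python
import random

def robust_parse(data: bytes):
    """Validates before use: invalid input yields None, never an exception."""
    if len(data) < 4:
        return None
    length = int.from_bytes(data[:4], "little")
    if length > len(data) - 4:        # the bounds check sloppy code omits
        return None
    return data[4:4 + length]

def sloppy_parse(data: bytes):
    """Trusts the caller-supplied length; blows up on malformed input
    (the user-mode analogue of an Aries.sys-style blue screen)."""
    length = int.from_bytes(data[:4], "little")
    payload = data[4:4 + length]
    assert len(payload) == length
    return payload

random.seed(1)
fuzz_inputs = [
    bytes(random.randrange(256) for _ in range(random.randrange(16)))
    for _ in range(200)
]

robust_crashes = sloppy_crashes = 0
for blob in fuzz_inputs:
    try:
        robust_parse(blob)
    except Exception:
        robust_crashes += 1
    try:
        sloppy_parse(blob)
    except Exception:
        sloppy_crashes += 1
```

The robust parser survives every input; the sloppy one falls over almost immediately, which is exactly the property NTCrash2 tests for in kernel drivers.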

First 4 Internet and Sony also continue to argue that the rootkit poses no security vulnerability, repeating it in the description of the patch download. Any software that hides files, processes, and registry keys based on a prefix of letters can clearly be used by malicious software.

First 4 Internet’s final rebuttal relates to my complaint that as part of a request to uninstall their DRM software Sony requires you to submit your email address to their marketing lists. First 4 Internet says:

An email address is required in order to send the consumer the uninstall utility. The wording on the web site is the standard Sony BMG corporate privacy policy that is put on all Sony web sites. Sony BMG does nothing with the customer service data (email addresses) other than use them to respond to the consumer.

The Sony privacy policy the comment refers to clearly states that Sony may add a user’s email address to their marketing lists:

Except on sites devoted to particular recording artists, we may share the information we collect from you with our affiliates or send you e-mail promotions and special offers from reputable third parties in whose products and services we think you may have an interest. We may also share your information with reputable third-parties who may contact you directly.

Again, the fact is that most users of Sony’s DRM won’t realize that they even have software that can be uninstalled. Also, the comment does not explain why Sony won’t simply make the uninstaller available as a freely accessible download like they do the patch, nor why users have to submit two requests for the uninstaller and then wait for further instructions to be emailed (I still have not received the uninstaller). The only motivation I can see for this is that Sony hopes you’ll give up somewhere in the process and leave their DRM software on your system. I’ve seen similar strategies used by adware programs that make it difficult, but not impossible, for you to remove them.

Instead of admitting fault for installing a rootkit and installing it without proper disclosure, both Sony and First 4 Internet claim innocence. By not coming clean they are making clear to any potential customers that they are not only technically incompetent, but also dishonest.

(Via Mark's Sysinternals Blog.)

Schneier on Security: The FBI is Spying on Us:

From TalkLeft:

The Washington Post reports that the FBI has been obtaining and reviewing records of ordinary Americans in the name of the war on terror through the use of national security letters that gag the recipients.

Merritt's entire post is worth reading.

The closing:

The ACLU has been actively litigating the legality of the National Security Letters. Their latest press release is here.

Also, the ACLU is less critical than I am of activity taking place in Congress now where conferees of the Senate and House are working out a compromise version of Patriot Act extension legislation that will resolve differences in versions passed by each in the last Congress. The ACLU reports that the Senate version contains some modest improvements respecting your privacy rights while the House version contains further intrusions. There is still time to contact the conferees. The ACLU provides more information and a sample letter here.

History shows that once new power is granted to the government, it rarely gives it back. Even if you wouldn't recognize a terrorist if he were standing in front of you, let alone consort with one, now is the time to raise your voice.

EDITED TO ADD: Here's a good personal story of someone's FBI file.

(Via Schneier on Security.)

Friday, November 04, 2005

Making Sense of Subroutines:

Subroutines are the building blocks of programs. Yet, too many programmers use them ineffectively, whether not making enough of them, naming them poorly, combining too many concepts into one, or any of a dozen other problems. Used properly, they can make your programs shorter, faster, and more maintainable. Rob Kinyon shows the benefits and advanced uses that come from revisiting the basics of subroutines in Perl.


Robert Spier: Producing Open Source Software:

I've only read the table of contents and skimmed a few chapters, but Karl Fogel (of CVS and Subversion fame) has written a "must read" book.

Producing Open Source Software - How to Run a Successful Free Software Project (read online, buy from Amazon, buy from O'Reilly) is an overview of most aspects of the open source world. He covers everything from Version Control Systems to Hired Guns to Releases and Version Numbering. Karl's been doing open source for years, and has some great anecdotes to share, and he does it in a friendly and explanatory manner.

Even if you've been doing open source for years, you'll get something out of the book, even if it's just reassurance that you're not the only one who thinks that way.

Karl has been one of the driving forces behind the Subversion project. I'm consistently impressed with how Subversion is run. Decisions are well thought out, things are planned, flamewars are rare, discussions are civil. You too can have a happy project.

(Via Planet Perl.)

Schneier on Security: Oracle's Password Hashing:

Here's a paper on Oracle's password hashing algorithm. It isn't very good.

In this paper the authors examine the mechanism used in Oracle databases for protecting users' passwords. We review the algorithm used for generating password hashes, and show that the current mechanism presents a number of weaknesses, making it straightforward for an attacker with limited resources to recover a user's plaintext password from the hashed value. We also describe how to implement a password recovery tool using off-the-shelf software. We conclude by discussing some possible attack vectors and recommendations to mitigate this risk.
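The core weakness described in published analyses of the legacy (pre-11g) Oracle scheme is that the username and password are concatenated and uppercased before hashing — so the username is the only "salt," and passwords are effectively case-insensitive. That part can be illustrated without reimplementing the DES step; a Python sketch of just the normalization and its cost to the keyspace:

```python
def oracle_normalize(username, password):
    """Pre-hash normalization in the legacy Oracle scheme:
    concatenate username and password, then uppercase."""
    return (username + password).upper()

# Case differences vanish, so all of these would hash identically:
assert oracle_normalize("scott", "Tiger") == oracle_normalize("SCOTT", "tIgEr")

# For an 8-character alphabetic password, case-insensitivity alone
# shrinks the brute-force search space by a factor of 2**8 = 256:
print(52**8 // 26**8)  # 256
```

That shrinkage, combined with the predictable salt, is what makes the "attacker with limited resources" claim in the abstract plausible.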

(Via Schneier on Security.)

Thursday, November 03, 2005

EFF: Breaking News: File-Sharing Lawsuits Fail to Deter P2P Downloaders:

RIAA v. The People: Two Years Later

Chicago - It's been two years since the Recording Industry Association of America (RIAA) started suing music fans who share songs online. Thousands of Americans have been hit by lawsuits, but both peer-to-peer (P2P) file sharing and the litigation continue unabated.

In a report released Thursday, "RIAA v. The People: Two Years Later," the Electronic Frontier Foundation (EFF) argues that the lawsuits are singling out only a select few fans for retribution, and many of them can't afford either to settle the case or defend themselves. EFF's report cites the case of a single mother in Minnesota who faces $500,000 in penalties for her daughter's alleged downloading, as well as the case of a disabled veteran who was targeted for downloading songs she already owned.

"Out of the millions of people who download music from P2P systems every day, the RIAA arbitrarily picks a few hundred to sue every month," said EFF Senior Staff Attorney Fred von Lohmann. "Many of those families suffer severe financial hardship. But despite all the publicity, studies show that P2P usage is increasing instead of decreasing."

"RIAA v. The People" was released in conjunction with the first annual P2P Litigation Summit in Chicago on Thursday, which brings together defense attorneys, clients, advocates, and academics to discuss the latest developments in the lawsuits.

Three other reports released Thursday were aimed at helping lawyers representing music fans sued by the RIAA. "Typical Claims and Counter Claims in Peer to Peer Litigation" is a general discussion of the lawsuits, while "Parental Liability for Copyright Infringement by Minor Children" and "Copyright Judgments in Personal Bankruptcy" both tackle important issues arising in defending families from devastating judgments.

"After two years of lawsuits, there's only one conclusion to draw," said von Lohmann. "Suing music fans is no answer to the P2P dilemma."

For "RIAA v. The People: Two Years Later":

For "Typical Claims and Counter Claims in Peer to Peer Litigation":

For "Parental Liability for Copyright Infringement":

For "Copyright Judgments in Personal Bankruptcy":

For more on the P2P Litigation Summit:


Cindy Cohn
Legal Director
Electronic Frontier Foundation

Fred von Lohmann
Senior Intellectual Property Attorney
Electronic Frontier Foundation

(Via EFF: Breaking News.)

Schneier on Security: The Security of RFID Passports:

My fifth column for Wired:

The State Department has done a great job addressing specific security and privacy concerns, but its lack of technical skills is hurting it. The collision-avoidance ID is just one example of where, apparently, the State Department didn't have enough of the expertise it needed to do this right.

Of course it can fix the problem, but the real issue is how many other problems like this are lurking in the details of its design? We don't know, and I doubt the State Department knows either. The only way to vet its design, and to convince us that RFID is necessary, would be to open it up to public scrutiny.

The State Department's plan to issue RFID passports by October 2006 is both precipitous and risky. It made a mistake designing this behind closed doors. There needs to be some pretty serious quality assurance and testing before deploying this system, and this includes careful security evaluations by independent security experts. Right now the State Department has no intention of doing that; it's already committed to a scheme before knowing if it even works or if it protects privacy.

My previous entries on RFID passports are here, here, and here.

(Via Schneier on Security.)

Tuesday, November 01, 2005

Connect @ EDUCAUSE: Copyright being used in the Intelligent Design vs Evolution battle:

The National Research Council and the National Science Teachers Association have moved to prevent the Kansas State Department of Education from using their documents "National Science Education Standards" and "Pathways to Science Standards." This is a reminder, once again, that "educational use" is not and has never been an excuse for copyright infringement.

Different points of view on this may be found at /. and

(Via - Technology In Academia -- Connect @ EDUCAUSE.)

Schneier on Security: NIST Hash Workshop Liveblogging (4):

This morning we heard a variety of talks about hash function design. All are esoteric and interesting, and too subtle to summarize here. Hopefully the papers will be online soon; keep checking the conference website.

Lots of interesting ideas, but no real discussion about trade-offs. But it's the trade-offs that are important. It's easy to design a good hash function, given no performance constraints. But we need to trade off performance with security. When confronted with a clever idea, like Ron Rivest's dithering trick, we need to decide if this is a good use of time. The question is not whether we should use dithering. The question is whether dithering is the best thing we can do with (I'm making these numbers up) a 20% performance degradation. Is dithering better than adding 20% more rounds? This is the kind of analysis we did when designing Twofish, and it's the correct analysis here as well.

Bart Preneel pointed out the obvious: if SHA-1 had double the number of rounds, this workshop wouldn't be happening. If MD5 had double the number of rounds, that hash function would still be secure. Maybe we've just been too optimistic about how strong hash functions are.

The other thing we need to be doing is providing answers to developers. It's not enough to express concern about SHA-256, or wonder how much better the attacks on SHA-1 will become. Developers need to know what hash function to use in their designs. They need an answer today. (SHA-256 is what I tell people.) They'll need an answer in a year. They'll need an answer in four years. Maybe the answers will be the same, and maybe they'll be different. But if we don't give them answers, they'll make something up. They won't wait for us.
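Schneier's interim answer is easy to act on today: SHA-256 ships in the standard libraries of most languages. In Python, for instance, it is one call to hashlib (shown here purely as an illustration of "an answer developers can use now"):

```python
import hashlib

# Hash a message with SHA-256, the interim recommendation above.
digest = hashlib.sha256(b"abc").hexdigest()
print(digest)

# This matches the well-known NIST test vector for "abc":
assert digest == ("ba7816bf8f01cfea414140de5dae2223"
                  "b00361a396177a9cb410ff61f20015ad")
```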

And while it's true that we don't have any real theory of hash functions, and it's true that anything we choose will be based partly on faith, we have no choice but to choose.

And finally, I think we need to stimulate research more. Whether it's a competition or a series of conferences, we need new ideas for design and analysis. Designs beget analyses beget designs beget analyses.... We need a whole bunch of new hash functions to beat up; that's how we'll learn to design better ones.

(Via Schneier on Security.)

Schneier on Security: Sony Secretly Installs Rootkit on Computers:

Mark Russinovich discovered a rootkit on his system. After much analysis, he discovered that the rootkit was installed as a part of the DRM software linked with a CD he bought. The package cannot be uninstalled. Even worse, the package actively cloaks itself from process listings and the file system.

At that point I knew conclusively that the rootkit and its associated files were related to the First 4 Internet DRM software Sony ships on its CDs. Not happy having underhanded and sloppily written software on my system I looked for a way to uninstall it. However, I didn’t find any reference to it in the Control Panel’s Add or Remove Programs list, nor did I find any uninstall utility or directions on the CD or on First 4 Internet’s site. I checked the EULA and saw no mention of the fact that I was agreeing to have software put on my system that I couldn't uninstall. Now I was mad.

Removing the rootkit kills Windows.

Could Sony have violated the Computer Misuse Act in the UK? If this isn't clearly in the EULA, they have exceeded their privilege on the customer's system by installing a rootkit to hide their software.

Certainly Mark has a reasonable lawsuit against Sony in the U.S.

EDITED TO ADD: The Washington Post is covering this story.

Sony lies about their rootkit:

November 2, 2005 - This Service Pack removes the cloaking technology component that has been recently discussed in a number of articles published regarding the XCP Technology used on SONY BMG content protected CDs. This component is not malicious and does not compromise security. However to alleviate any concerns that users may have about the program posing potential security vulnerabilities, this update has been released to enable users to remove this component from their computers.

Their update does not remove the rootkit, it just gets rid of the $sys$ cloaking.
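Russinovich's analysis found that the XCP cloak hides anything — files, processes, registry keys — whose name begins with the $sys$ prefix. Conceptually the filter is trivial; a Python sketch of the idea (the file names are made up, and the real thing is a kernel-mode filter driver hooking Windows system calls, not user-level code like this):

```python
PREFIX = "$sys$"

def cloaked_listing(names):
    """Return a directory listing with $sys$-prefixed entries removed,
    mimicking what the XCP cloak does to API results."""
    return [n for n in names if not n.startswith(PREFIX)]

files = ["readme.txt", "$sys$example.dll", "song.mp3"]
print(cloaked_listing(files))  # ['readme.txt', 'song.mp3']
```

Because the cloak keys only on the name prefix, anyone's files gain the same invisibility — which is exactly what the World of Warcraft cheaters mentioned below exploit.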

Ed Felten has a great post on the issue:

The update is more than 3.5 megabytes in size, and it appears to contain new versions of almost all the files included in the initial installation of the entire DRM system, as well as creating some new files. In short, they're not just taking away the rootkit-like function -- they're almost certainly adding things to the system as well. And once again, they're not disclosing what they're doing.

No doubt they'll ask us to just trust them. I wouldn't. The companies still assert -- falsely -- that the original rootkit-like software "does not compromise security" and "[t]here should be no concern" about it. So I wouldn't put much faith in any claim that the new update is harmless. And the companies claim to have developed "new ways of cloaking files on a hard drive". So I wouldn't derive much comfort from carefully worded assertions that they have removed "the ... component .. that has been discussed".

And you can use the rootkit to avoid World of Warcraft spyware.

World of Warcraft hackers have confirmed that the hiding capabilities of Sony BMG's content protection software can make tools made for cheating in the online world impossible to detect.

EDITED TO ADD: F-Secure makes a good point:

A member of our IT security team pointed out a quite chilling thought about what might happen if record companies continue adding rootkit-based copy protection to their CDs.

In order to hide from the system, a rootkit must interface with the OS at a very low level, and in those areas there's no room for error.

It is hard enough to program anything at that level without having to worry about other programs trying to do something with the same parts of the OS.

Thus, if there were two DRM rootkits on the same system trying to hook the same APIs, the results would be highly unpredictable. Or actually, a system crash is a quite predictable result in such a situation.

(Via Schneier on Security.)

Monday, October 31, 2005

Pushing String: XML haiku project:

Lauren has informed me (and the rest of the world) that the XML 2005 conference website had a problem accepting poster and artwork submissions. The deadline for both has been extended. If you had already filled out these forms, I’m afraid you’ll have to redo them (or, if you like, you can drop me a line if you’re just submitting artwork).

I figured I might as well take this opportunity to share a snapshot of my stitching project for this year’s conference. I already mentioned it’s sort of a sampler that has a haiku about XML parsing in it. I haven’t stitched the actual lettering yet, but you can see half of an “O” at the beginning of what I will admit is the third line of the poem. Any guesses? :-)

XML haiku project as of 28 October 2005

So far this has been a very enjoyable project. Here are some thoughts on the feel of this piece, along the lines of the analysis I did here for another project in February (which languishes unfinished, by the way).

  • Under pressure. I’ve never cranked on stitching like this! If I get really desperate I may have to keep stitching through the beginning of the conference. Lynne Price did that when she first submitted her XML with Koalas piece; each day she’d return it to its display location with more done. She’s one fast stitcher…
  • Small but pleasant selection of floss colors. I was a little bored doing the monochromatic border, with only a few tricky holes and the corners for relief. I can’t believe I completed the border first on the theory that it would be a quick way into the piece. That sucker took me about 20 hours! But it was the right decision since it gave me the interoperable framework (heh) that I could now fill in.
  • Back to 14-count fabric. Yay! I was really killing my eyes on the siapo project at 18-ct. Of course, I’ve also since found out that I had incipient presbyopia by that point, so that explains a lot. Now I’ve got modern bifocals (they call them “progressive lenses” but I rather think they’re all about regression, don’t you?) and stitching is back to being a breeze.
  • Huge canvas. I mentioned at the beginning of this project that the piece of fabric was so large that it’s hard to work with. I managed to figure out how to stitch on airplanes with it nonetheless, but it looks wrinkled right now because I work free-hand, without hoops or frames. They mar the fabric and they keep me from getting right close up to the location where I’m working. But that means I’m constantly rolling or even lightly crushing the fabric. It’ll all come out with a steam iron later (heck, that’s easier to smooth out than the original fold lines from when I got the fabric in the first place – I much prefer buying it in rolls for that reason).
  • A hybrid design formed by adapting the work of a real professional. The main pattern came from Pat Emlet at, who has some gorgeous designs, including –as noted on her home page – Oriental, Celtic, Art Nouveau, and Mackintosh styles. It was the Oriental angle that got me interested, and the pattern I selected even conveniently came with a “hole” for me to put my own wording in. Pat kindly also supplied me with a very cool Asian-style stitched font for my haiku. The trouble was, the haiku didn’t quite fit in the space, so I had to extend the border vertically by more than an inch; move the boat-against-the-sunset picture; ensure that the fabric that came with the kit I bought was big enough for the result; and transfer enough of the original design to my own online pattern to doublecheck that the whole thing hung together. This process was really really fun. (For some definition of fun.)
  • Flying blind. Even though it was all designed by computer, I had no real idea how big this project was because I didn’t have a stitch count. Pat uses Pattern Maker software to design her pieces (which I first described here as the program that amusingly uses the .xsd file extension) and she sent me the files to work with in modifying the piece. But I only got the free viewer for Pattern Maker, which doesn’t allow you to see stitch counts, and of course the portion that I designed in PC Stitch would give me an incomplete count. I’m starting to be able to eyeball the size of projects now, though.
  • Process optimization. The thing that looks like a dashed line in the middle is a guideline that I’ll pull out soon. I’ve finally gotten smart about how to parcel off the work – the guideline corresponds exactly to the horizontal page boundary of my pattern printout, which runs to four pages, and to the 10x10 grid markings provided by the software (I already pulled out a vertical guide-line that I no longer need). Maybe that’s why I stitch – it gives me new ways to feel efficient!

I’m hoping that exposing my obsession with this pastime will inspire someone out there to say, “Hey, I could make something better-looking and more related to XML than that in far fewer hours, and I’m willing to come to Atlanta to show it off…”

(Via Pushing String.)