NASA ready for Mars rocket test flight Tuesday

NASA is set to launch a test flight of its new Ares I-X rocket, which is designed to replace the aging space shuttle fleet and eventually spirit humans to Mars. NASA announced today that the test vehicle is slated to take off some time between 8 a.m. and noon tomorrow from Kennedy Space Center's Launch Pad 39B. The space agency noted that the Ares I-X rocket is the first non-space-shuttle craft to be launched from Pad 39B since the Apollo program's Saturn rockets were retired more than 25 years ago. "For those of us who've lived with the shuttle and grew up looking at Saturn Vs, it's obviously a little different than what we're used to seeing," said Jon Cowart, one of NASA's two Ares I-X deputy mission managers, in a statement. If the 1.8-million-pound, 327-foot-tall rocket doesn't launch on Tuesday, the take-off will be rescheduled for Wednesday, according to NASA. The space agency noted on its Web site that it's looking to tomorrow's flight to gauge the dependability and characteristics of the rocket's hardware, facilities and ground operations. Bad weather could stand in the way of the big test launch, though, as meteorologists say there's only a 40% chance of good weather in the four-hour window. With more than 700 sensors on board, Ares I-X is wired to relay ascent data back to engineers on the ground.

The Ares I-X combines technology from several different programs. NASA reported that the rocket's four first-stage, solid-fuel booster segments come from the space shuttle program, a booster segment contains Atlas V-based avionics, and the rocket's roll control system comes from the Peacekeeper missile. However, the launch abort system, simulated crew and service modules, upper stage, and various connecting structures are original. NASA has been planning a move to the moon and then on to Mars for several years now, and its Ares rockets are expected to return humans to the moon and later take them to Mars.

The space agency has been working toward setting up a lunar outpost by 2020. However, the schedule, if not the mission itself, has come into some question as President Barack Obama's administration oversees an independent review of NASA's human space flight activities. With budgetary concerns in the forefront, the review is looking at possible alternatives to programs already in the pipeline.

Momentum builds for open content management standard

A proposed standard meant to help content management systems communicate with each other has steady momentum, and an initial version could be finalized early next year. Content Management Interoperability Services (CMIS), first announced in September 2008, outlines a standardized Web services interface for sharing content across multiple CMS (content management system) platforms. Organizations face difficulties when integrating information from various content repositories, because specialized connectors typically have been required for each system.

Both customers and vendors stand to gain from CMIS. It should cut the amount of one-off integration and custom development work end-users currently must do, and software vendors won't have to build and support a wide range of connectors, said 451 Group analyst Kathleen Reidy via e-mail. The specification, which is being developed under the auspices of the standards body OASIS (Organization for the Advancement of Structured Information Standards), is supported by the content management industry's biggest players, including EMC, Adobe, Microsoft, Open Text, IBM and SAP. Open-source CMS vendor Alfresco is also a backer. Alfresco said Monday it has included support in the 3.2 version of its platform for CMIS 1.0, which is now in a public review period scheduled to end Dec. 22. CMIS' inclusion in Alfresco 3.2 will enable users to get a hands-on look during the review period, the company said. CMIS 1.0 is on track to be finalized within the first few months of 2010, according to a recent blog post by Ethan Gur-esh, a Microsoft program manager.

Despite the high-profile vendors involved, it's not clear how many end-users are aware of CMIS. A study released recently by research firm AIIM said it had "gained traction" among 15 percent of the organizations surveyed. But even that percentage is "remarkably high" given that CMIS isn't even a standard yet, CMS Watch analyst Alan Pelz-Sharpe said in a blog post at the time. "CMIS has good momentum and has the right set of vendors backing it," the 451 Group's Reidy said. "It will take a while for the standard, once ratified, to show up in actual, commercially supported, shipping versions of most ECM products though, just due to the release cycles of these products. But it does look like it will happen, as most have stated support and have support for the current spec in developer-only downloads and so forth."
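The connector problem CMIS targets can be sketched in a few lines of Python. The repository classes and document IDs below are hypothetical stand-ins, not the real CMIS API; the point is only that client code written once against a shared interface works with any conforming repository, instead of needing a connector per vendor.

```python
# Hypothetical sketch of the integration problem CMIS addresses.
# Without a standard, clients need one connector per repository vendor;
# with a shared interface, one client works against all of them.

class RepositoryInterface:
    """Stand-in for a standardized repository interface (not real CMIS)."""
    def get_document(self, doc_id: str) -> str:
        raise NotImplementedError

class VendorARepo(RepositoryInterface):
    def __init__(self):
        self._store = {"doc-1": "quarterly report"}
    def get_document(self, doc_id: str) -> str:
        return self._store[doc_id]

class VendorBRepo(RepositoryInterface):
    def __init__(self):
        self._store = {"doc-1": "marketing plan"}
    def get_document(self, doc_id: str) -> str:
        return self._store[doc_id]

def fetch(repo: RepositoryInterface, doc_id: str) -> str:
    # Written once, with no vendor-specific connector code.
    return repo.get_document(doc_id)

print(fetch(VendorARepo(), "doc-1"))  # quarterly report
print(fetch(VendorBRepo(), "doc-1"))  # marketing plan
```

In the real specification the shared surface is a Web services (and AtomPub) interface rather than a Python class, but the economics are the same: N clients and M repositories need N+M implementations of one contract instead of N×M custom connectors.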

E-voting system lets voters verify their ballots are counted

A new electronic voting system being used today for the first time in a government election in the U.S. will allow voters and elections auditors in Takoma Park, Md., to go online and verify whether votes have been correctly recorded. The system uses cryptographic techniques to let both voters and election auditors check whether votes have been cast and counted accurately. The voting system, called Scantegrity, was developed by independent cryptographer David Chaum, along with researchers from the University of Maryland-Baltimore, the George Washington University, MIT, the University of Ottawa and the University of Waterloo.

The Scantegrity technology is being used to augment regular optical-scan voting systems in Takoma Park's city council election. To cast a vote, an individual takes a paper ballot and fills in the optical-scan oval next to the name of the selected candidate using a pen with a special type of ink. When the bubble is filled in, it reveals a three-digit confirmation number already printed on the ballot in invisible ink. That three-digit code is a randomly generated cryptographic marker that's used to associate the voter's choice with the appropriate candidate.

The codes are separately randomized for each oval and for each ballot, ensuring that the codes don't reveal who an individual voted for, Chaum said in an interview with Computerworld. Voters can use that confirmation code to later log into the city's election Web site to confirm that their votes were recorded accurately. If the code is present on the Web site, it means the ballot was counted correctly, he said. Scantegrity also lets election auditors - and even third-party observers - check whether the results were accurately tabulated without revealing how each individual vote was cast, Chaum said. Scantegrity uses cryptographic techniques to first map each code to the associated candidate and then completely conceal the link. Though it is not possible to link an individual ballot to a specific candidate, auditors can verify that the codes do lead to the recorded votes. It then uses a concept known as "zero-knowledge proof" to show auditors that the codes do in fact correspond to the right candidates, said Aleks Essex, a Ph.D. student in computer science at the University of Ottawa who was involved in the Scantegrity effort.

Zero-knowledge proof is a way to demonstrate the authenticity of a statement without revealing any other details about the statement, said Essex. For example, an individual could use a piece of paper with a hole cut in it to prove to a child that he knows the location of Waldo in a "Where's Waldo" puzzle, Essex said. By placing the hole over Waldo, he shows he knows Waldo's location in the puzzle, but doesn't reveal the exact location to the child. Scantegrity enables auditors to get the same sort of proof to show that confirmation codes in an election map to the right candidates, without revealing an individual voter's choice, he said. "It is a really powerful thing to have public transparency of the tabulation process and yet preserve ballot secrecy," Chaum said. The results of today's elections in Takoma Park are being audited by two officials, one of whom is from Harvard University.
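Much simplified, the idea of binding a confirmation code to a candidate without revealing the mapping can be illustrated with a salted hash commitment. This is an illustration of the commit-then-reveal concept only, not Scantegrity's actual protocol, and the code, candidate names and salt below are invented:

```python
# Illustrative commitment sketch (NOT the real Scantegrity protocol):
# before the election, publish only a hash binding a confirmation code
# to a candidate; the mapping stays hidden until an audit reveals the
# salt, at which point anyone can re-check the published value.
import hashlib
import secrets

def commit(code: str, candidate: str, salt: bytes) -> str:
    """Return a hex digest committing to (code, candidate) under salt."""
    data = salt + code.encode() + candidate.encode()
    return hashlib.sha256(data).hexdigest()

salt = secrets.token_bytes(16)            # kept secret until the audit
published = commit("741", "Candidate A", salt)

# At audit time, given (code, candidate, salt), the commitment verifies:
assert commit("741", "Candidate A", salt) == published
# ...and a wrong candidate does not:
assert commit("741", "Candidate B", salt) != published
```

The real system layers far more on top of this (per-ballot randomization, verifiable shuffles, and the zero-knowledge machinery described above), but the core trick is the same: publish something checkable now, reveal the opening later.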

Because Scantegrity is built on open-source software, it can be used elsewhere to run similar audits against election results using custom tools, he said. Pamela Smith, president of the Verified Voting Foundation, said that technologies such as Scantegrity do add an additional layer of integrity to the election process. But to a large extent, optical-scan voting machines already offer a relatively high degree of verification support, she said. Because such machines save a record of the voter's intent, auditors can go back and verify results if necessary. The bigger issue in Maryland is that the state needs to adopt optical-scan systems on a larger scale, she said. Maryland is one of the few states that rely on touch-screen voting systems, which are costlier to operate and maintain than optical-scan systems.

UC Berkeley tightens personal data security with data-masking tool

To better safeguard the personal data of its students, the University of California at Berkeley (UC Berkeley) has adopted a specialized data-masking technique in its application development work that can effectively hide data in plain sight by mixing it up. It's done with a tool called datamasker from dataguise. Data such as students' first and last names can be switched around to camouflage the real names, and sensitive information such as student identification numbers also undergoes a gentle jumbling, so what appears to the eye is not the true number. Steve McCabe, associate director of information in UC Berkeley's residential and student services program, says the advantage of using the dataguise tool is that it significantly reduces security risks around personal, sensitive data. "Student IDs paired with names becomes restricted data here," says McCabe, describing some of the data-privacy rules that the university must follow.

But the challenge has been how to enforce restrictions in a software-development environment where several developers are constantly working to support UC Berkeley's home-grown Web-based applications for SQL Server, such as the housing and assignment system. McCabe says the data-masking approach, in which the dataguise tool mixes up names, sensitive numbers and other data before developers see it (dataguise calls it "de-identification"), has worked out well because the data columns maintain the necessary structure but the content is effectively concealed to the naked eye. "We do a lot of application development and handling large volumes of student information, and we wanted a way to restrict that data," McCabe says. "So we randomize the IDs, and first name, last name, date of birth, and so forth." While one main copy of a production database is preserved, with the genuine student information, developers can freely work on copies that have undergone the dataguise data-masking treatment, in what McCabe calls a "sanitized version," without concern of a potential data breach. "It maintains the relationship and updates with scrambled data," McCabe says. Though the actual production database has to be protected through other means, the risks associated with data exposed to developers and testers in the course of their work have been vastly reduced since UC Berkeley started using the tool about half a year ago. UC Berkeley, like many universities, has suffered consequential data breaches. In May of this year, UC Berkeley acknowledged a data breach in which it said hackers broke into its health-services databases, compromising health-related information on about 160,000 individuals.
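The "maintains the relationship" property McCabe describes can be sketched in a few lines: IDs are permuted among themselves, but each real ID always maps to the same masked ID, so joins between tables still line up. This is an illustration of the concept only, not dataguise's actual algorithm, and the sample data is made up:

```python
# Minimal sketch of de-identification that preserves referential
# integrity: real IDs are permuted into masked IDs, and the same real ID
# always receives the same masked ID, so cross-table joins still work.
# Hypothetical example only -- not dataguise's actual algorithm.
import random

def build_id_map(ids, seed=42):
    """Map each real ID to a masked ID via a seeded permutation."""
    rng = random.Random(seed)
    masked = list(ids)
    rng.shuffle(masked)          # permute the IDs among themselves
    return dict(zip(ids, masked))

students = [("1001", "Ada"), ("1002", "Grace"), ("1003", "Edsger")]
housing = [("1001", "Unit 3B"), ("1002", "Unit 7A"), ("1003", "Unit 2C")]

id_map = build_id_map([sid for sid, _ in students])
masked_students = [(id_map[sid], name) for sid, name in students]
masked_housing = [(id_map[sid], unit) for sid, unit in housing]

# The masked copies still join on ID, even though the real ID behind
# each row is no longer visible to developers and testers.
assert {sid for sid, _ in masked_students} == {sid for sid, _ in masked_housing}
```

A production tool would also scramble the names and dates themselves and handle formats, constraints and update propagation; the sketch only shows why masked data remains usable for development work.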

Cloud security service looks for malware

Webroot Tuesday announced it has extended its cloud-based Web security service, adding a way to filter outbound as well as inbound Web traffic and monitor for threats in order to detect and block malware such as botnets that have infected computers. If the cloud-based Webroot service detects malware such as botnet code calling out to get instructions or otherwise perform an activity, it will block that request, though not all traffic on the user's machine. "We already have inbound filtering and now we're adding outbound," says Brian Czarny, vice president of solutions marketing at Webroot, about the Web Security Service, which can now monitor for signs of malware-infected corporate computers trying to "call home" for more instructions, a common practice among criminally run botnets.

The service works by having the corporation proxy its Web traffic through Webroot's data centers, where a variety of security methods can clean malware and ward off phishing attacks. The Webroot service would then notify the systems administrator of the security event via e-mail and the Web-based administrative console where reports can be obtained. Czarny says there is no additional charge for the outbound monitoring now available through the Webroot Web Security Service, which also includes some basic URL filtering for productivity purposes. Webroot is also announcing on Tuesday an in-the-cloud e-mail archiving service that lets customers store e-mail to be searched and retrieved, whether from on-site corporate mail servers or Google Apps. The pricing for the e-mail archiving is $6 per month per user for unlimited storage and retention; the Web Security Service costs $5 per user per month, with discounts based on volume.
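The outbound "call home" blocking described above boils down to checking each proxied request against known command-and-control destinations while letting everything else through. A minimal sketch of that idea, assuming a simple domain blocklist (the domains below are invented, and a real service would use continuously updated threat intelligence rather than a static set):

```python
# Sketch of outbound filtering: block requests to known botnet
# command-and-control hosts while allowing other traffic through.
# Blocklist entries are hypothetical examples, not real threat data.
BLOCKLIST = {"c2.example-botnet.test", "update.badhost.test"}

def filter_outbound(host: str) -> str:
    """Return 'blocked' for known-bad destinations, 'allowed' otherwise."""
    if host.lower() in BLOCKLIST:
        # A real service would also alert the administrator here,
        # rather than cutting off all traffic from the machine.
        return "blocked"
    return "allowed"

print(filter_outbound("c2.example-botnet.test"))  # blocked
print(filter_outbound("www.example.com"))         # allowed
```

Note the design choice the article describes: only the suspicious request is blocked, not the user's whole connection, which keeps an infected machine usable while the administrator is notified.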

Unpatched SMB bug crashes Windows 7, researcher says

A day after Microsoft plugged more than a dozen holes in its software, a security researcher unveiled a new unpatched bug in Windows 7 and Server 2008 R2 that, when exploited, locks up the system, requiring a total shutdown to regain control. Laurent Gaffie posted details of the vulnerability, along with proof-of-concept exploit code, to the Full Disclosure security mailing list today, as well as to his personal blog. Microsoft acknowledged that it's investigating the flaw.

The attack code, said Gaffie, crashes the kernel in Windows 7 and its server sibling, Windows Server 2008 R2, triggering an infinite loop. "No BSOD [Blue Screen of Death], you gotta pull the plug," Gaffie said in notes inserted into the exploit code. Gaffie claimed that the exploit, powered by a vulnerability in the new operating systems' implementation of SMB (Server Message Block), could be successfully launched from within a network from an already compromised computer, or used to attack Windows 7 machines via Internet Explorer (IE) by transmitting a rogue SMB packet to the PC. Unlike more serious flaws, the Windows 7 SMB bug cannot be used by attackers to hijack a PC, Gaffie confirmed. "No code execution, but a remote kernel crash," he said in an e-mail today. Gaffie also said that Microsoft's security team has acknowledged the vulnerability, which he first reported to them last weekend, but was told by the company that it wasn't planning to fix the flaw with a security update, instead perhaps correcting it in the first service packs for Windows 7 and Server 2008 R2. A Microsoft spokesman confirmed that the company is looking into Gaffie's claims. "Microsoft is investigating new public claims of a possible denial-of-service vulnerability in Windows Server Message Block," said the spokesman in an e-mail reply to questions. "Once we're done investigating, we will take appropriate action, [which] may include providing a security update through the monthly release process, an out-of-cycle update or additional guidance to help customers protect themselves." Gaffie's disclosure came just a day after Microsoft issued November's security updates, which patched 15 vulnerabilities in Windows, Windows Server and Office. None of the 15 affected the final version of Windows 7, which was released to retail Oct. 22, or Windows Server 2008 R2.

Microsoft pushes switchover deal for CRM Online

Microsoft is trying to steal away Salesforce.com and Oracle CRM on Demand customers with a new offer that will provide them with six months' access to its own CRM Online application at no charge if they sign a 12-month contract. Microsoft charges US$44 per month per user for CRM Online Professional edition. That compares to $65 per month per user for Salesforce.com Professional. Oracle CRM on Demand pricing starts at $70 per month per user.

Microsoft's application is comparable from a feature standpoint and "already about 35 percent cheaper" than the competition, said Brad Wilson, general manager of Dynamics CRM. The six-month offer is valid through the end of this year. Microsoft will consider expanding access to customers of other CRM products once it sees how well the program is received, Wilson said. Six months is about how long it takes a customer to know for sure whether an application is right for their business, said Ray Wang, partner with the analyst firm Altimeter Group. But potential hurdles lie in the way of a smooth transition over to CRM Online, he added. For one thing, a customer and Oracle or Salesforce.com may have a year-to-year deal, which might still be in effect when the six-month trial period expires, Wang said. While contract terms may allow the customer to cancel, they may not get a refund on the year's remaining fees, according to Wang. "Hopefully you'd be [signed up] month-to-month. It's good to check and see where you are in that process."

Overall, however, "users win" in price wars like this, Wang said. Other SaaS (software as a service) vendors, such as NetSuite, have made a steady stream of financial enticements in recent months too, as sales slowed during the global recession. Salesforce.com has also quietly lowered monthly per-user fees for its two lowest-end editions, Contact Manager and Group Edition, to $5 and $25 respectively, down from $9 and $35. Microsoft on Monday also announced price cuts for its Business Productivity Online Suite. Meanwhile, Microsoft is announcing the CRM switch-over deal in conjunction with an update to CRM Online, Wilson said. The service is now available in North America. Microsoft is also planning to roll out the software worldwide in the second half of 2010, he said.

In the new release, Microsoft made signing up for CRM Online "super-simple," he said. No credit card information is required to sign up, although users need to provide an e-mail address. Users can then start a free trial with either Microsoft's Outlook client or a browser-based interface, Wilson said. Thirty-day trials include sample data so users can begin experimenting with the system. A series of help tools provide information on setup and maintenance. Microsoft has also developed an improved data import wizard.

In addition, mobile access is available at no additional charge for any phone with an HTML 4.0-compliant Web browser. "We specifically tried to engineer [the application] to make it really easy for people who don't have CRM systems," Wilson said.

MySpace replaces all server hard disks with flash drives

Social networking site MySpace.com announced today that it has switched from using hard disk drives in its servers to using PCI Express (PCIe) cards loaded with solid-state chips as primary storage for its data center operations. The PCIe cards, from Fusion-io Inc., have allowed MySpace to replace multiple server farms made up of 2U (3.5-in. high) servers that had used 10 to 12 15,000rpm Fibre Channel drives each with 1U (1.75-in. high) servers using a single ioDrive. MySpace said the solid-state storage uses less than 1% of the power and cooling costs of its previous hard-drive-based server infrastructure, and that it was able to remove all of its server racks because the ioDrives are embedded directly into even its smallest servers. "In the last 20 years, disk storage hasn't kept pace with other innovations in IT, and right now we're on the cusp of a dramatic change with flash technologies," said Richard Buckingham, vice president of technical operations for MySpace, in a statement. "We looked at a number of solid state solutions, using many different kinds of RAID configurations, but we felt that Fusion-io's solution was exactly what we needed to accomplish our goals," Buckingham stated.

MySpace's new servers also have replaced its high-performance hosts that held data in large RAM cache modules, a costly method MySpace had been using in order to achieve the necessary throughput to serve its relational databases. MySpace said its new servers using the NAND flash memory modules give it the same performance as its older RAM servers. Salt Lake City-based Fusion-io claims the ioDrive Duo offers users unprecedented single-server performance, with 1.5GB/sec. throughput and almost 200,000 IOPS. The system can reach such performance levels because four ioDrive Duos in a single server can scale linearly, which provides up to 6GB/sec. of read bandwidth and more than 500,000 read IOPS. The cards come in 160GB, 320GB and 640GB capacities; a 1.28TB card is expected in the second half of this year. "Social networking sites and other Web 2.0 applications are very database dependent. Our 320GB ioDrive can fill a 10Gbit/sec. Ethernet pipe," David Flynn, CTO of Fusion-io, said in an interview.
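Flynn's claim is easy to sanity-check with back-of-the-envelope arithmetic, using decimal units and ignoring protocol overhead:

```python
# Quick check of the bandwidth figures quoted above: one card at
# 1.5 GB/sec versus a 10 Gbit/sec Ethernet link, and four cards
# scaling linearly to the quoted 6 GB/sec read bandwidth.
# Decimal units; Ethernet framing overhead is ignored.
GB_PER_SEC = 1.5

bits_per_sec = GB_PER_SEC * 8   # 1 byte = 8 bits -> 12 Gbit/sec
assert bits_per_sec > 10        # a single card can saturate 10 GbE
assert 4 * GB_PER_SEC == 6.0    # four cards -> 6 GB/sec, as quoted

print(bits_per_sec)  # 12.0
```

So at the quoted throughput, a single card delivers roughly 12 Gbit/sec, comfortably more than a 10 Gbit/sec Ethernet link can carry, which is the point Flynn is making.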

IA job prospects bright

No one reading this column needs general references to news about the economic difficulties we are living through in the United States and elsewhere. Just the other day, I spoke with a long-time friend and colleague from the information security field who used to earn a decent living as a much sought-after consultant; last week he canceled his business telephone line to save money. He's looking for a permanent job.

Another colleague of ours hasn't had a consulting contract in months – despite having had trouble in the past keeping up with demand for his services. I think that security consultants may be suffering from a side-effect of the economic downturn: clients who don't already have or want permanent information assurance (IA) personnel may simply have decided to continue taking risks and hoping that nothing bad will happen to them. The situation makes me think more positively about having moved from the business world to academia in 2001 – despite dropping my nominal salaried income by 57.5% at that time and now earning about one-third of what I'd be making as a senior IA executive in industry today. At least I have tenure, which means that I'm not going to be fired unless I appear in class out of uniform (Vermont Militia = US Army Class A greens), show up drunk (I never drink alcohol), treat a student rudely (no way) or recite Monty Python skits in class… uh, wait a minute, I do recite Monty Python skits in class – but very briefly. Only little bits of them.

Really. Honest. Perhaps organizations that have enough savvy to employ permanent IA staff also understand the value of hiring good people for these critically important functions. But more seriously, there is good news for IA students and professionals: according to an extensive survey published by Foote Partners, LLC in Florida, job prospects are good for information assurance (IA) specialists. Upasana Gupta of BankInfoSecurity reviews the "2009 IT Skills Trends Report Update," which is available free in return for buying any other report from Foote or simply for registering with them. Interestingly, the skills most frequently sought after by employers include (quoting Gupta directly):

• Forensic Analysis
• Incident Handling & Analysis
• Security Architecture
• Ethical Hacking
• Network Security
• Security Management

Gupta quotes the company as describing a number of factors (described in more detail in her excellent article) increasing demand for IA professionals:

• IA is increasingly recognized as strategically significant to all aspects of business.
• Customers are demanding better security to protect their own information.
• Laws and regulations are pressuring organizations into compliance with better security.
• Liability costs for non-compliance are rising.
• Virtualization is increasingly making technologists aware of security issues.

The Foote report shows average salaries for various IA positions ranging from $70,000 to $170,000. Professor Gene Spafford said in his acceptance address for the National Computer System Security Award in 2000 that we were "eating our seed corn" by paying IA professors less than our IA graduates earn on their first job. How we are to attract professionals and recent graduates to our field of teaching and research in universities is a mystery to me. Some years ago I begged industry to think ahead and start funding supplements to professors' salaries so university IA departments can compete with industry in attracting field-experienced, professionally certified experts with advanced degrees to our faculty. Universities will usually be willing to provide publicity for donors, so it's not a one-way donation devoid of short-term value for the donors, either. Anyone interested in raising my salary – oops, our salaries – at Norwich University is welcome to contact me directly and I'll put you in touch with our Chair of Computing to make the arrangements. We even teach courses for free and do work on courses during the summers, when we are not paid for our time!

In the long run, without support from industry to raise salaries, the only people who are going to be willing to work long hours in universities for pathetic salaries are nut-cases like my colleagues and me who work on courses and research because we are addicted to teaching. WE ARE ADDICTS. But I can stop any time. Really.

Acorn 2.1 gains AppleScript, more

It seems like only last month that Flying Meat released Acorn 2, its exceptional "image editor for humans," with a massive array of new features like multi-layer screenshots, RAW support, and two heaping handfuls of other new tools. Oh wait, it was only last month. After a couple of minor touch-ups and fixes in recent weeks, the purveyor of virtual airborne nourishment is back with Acorn 2.1, a major update that adds another laundry list of new features and fixes. Acorn 2.1's most significant new feature is definitely "scripting for humans" in the form of AppleScript support, complete with a series of example scripts to get users started.

AppleScript is a fairly simple scripting language that is accessible to mere mortals (read: non-developers) like you and me, but there has been some understandable debate recently about its future. Adding AppleScript support to an application can also be hard, which inspired Flying Meat to integrate the JSTalk scripting language for Acorn 2.0's launch. JSTalk is based on JavaScript, which arguably jives better with developers' style and can be easier to add to Mac OS X apps. Nevertheless, the community asked for AppleScript, and it's great to see Flying Meat swoop in to the rescue. Other new features include a Hex color picker in the color palette (great for Web design), various improvements to managing layers, automatic image scaling when printing, and the adoption of a smart new Mac trend wherein Acorn will ask if you want to move it to the Applications folder if you run it from any other location.

I wasn't kidding about there being a laundry list of improvements and fixes in Acorn 2.1, so take a look at the rest for yourself, or fire up Acorn to take the update out for a spin. If, for some strange reason, you still have not tried or bought a copy of Acorn yet, you may need to consult your physician. But before you resort to drastic measures, you could just download a demo for free. Acorn 2 requires Mac OS X 10.6 Snow Leopard, and a license costs $50.

Patch Tuesday: What the experts say

Microsoft Tuesday released six patches that address 15 vulnerabilities. Here's a look at what security experts are saying about the vulnerabilities, the patches and what should concern users.

"There are three vulnerabilities this month that target a listening service. While none of them are likely to be considered great candidates for exploit, they are worth noting as they all primarily affect the enterprise. It is unlikely that the home user will be running a license logging server or have Active Directory up and running. While Web Services on Devices affects Vista and Server 2008, the attack vector requires that you be on the local subnet, meaning the home user is unlikely to see any real risk." - Tyler Reguly, senior security engineer for nCircle

"MS09-066 affects corporate networks as it addresses a vulnerability in Active Directory. A successful exploit can result in denial-of-service on the system. All operating systems other than Windows 2000 require valid credentials to send a specially crafted packet. This vulnerability will be difficult to exploit, though. If an attacker already had valid credentials, they would do more damage than a denial-of-service attack on a server. A specially crafted packet sent to a Windows 2000 machine can result in an unresponsive machine that requires an unscheduled reboot. For Windows 2000 servers, like MS09-064, these machines should be patched immediately." - Jason Miller, data and security team leader for Shavlik Technologies

"The Embedded OpenType font kernel vulnerability [MS09-065] is the most serious in our opinion. Not only is proof-of-concept exploit code publicly available, but all that's required of a user to become infected by it is simply viewing a compromised Web page. Symantec isn't seeing any active exploits of this in the wild yet, but we think attackers will be paying a lot of attention to it in the future." - Ben Greenbaum, senior research manager at Symantec Security Response

"One of the nice things that you will see today is that Windows 7 and Windows Server 2008 are not affected by any of these patches." - Richie Lai, director of vulnerability research for Qualys

Follow John on Twitter: http://twitter.com/johnfontana

Hacker leaks thousands of Hotmail passwords, says site

More than 10,000 usernames and passwords for Windows Live Hotmail accounts were leaked online late last week, according to a report by Neowin.net, which claimed that they were posted by an anonymous user on pastebin.com last Thursday. The post has since been taken down. Neowin reported that it had seen part of the list. "Neowin has seen part of the list posted and can confirm the accounts are genuine and most appear to be based in Europe," said the site. "The list details over 10,000 accounts starting from A through to B, suggesting there could be additional lists." Accounts with domains of @hotmail.com, @msn.com and @live.com were included in the list.

Hotmail usernames and passwords are often used for more than logging into Microsoft's online e-mail service, however. Many people log onto a wide range of Microsoft's online properties - including the trial version of the company's Web-based Office applications, the Connect beta test site and the Skydrive online storage service - with their Hotmail passwords. It was unknown how the usernames and passwords were obtained, but Neowin speculated that they were the result of either a hack of Hotmail or a massive phishing attack that had tricked users into divulging their log-on information. Microsoft representatives in the U.S. were not immediately able to confirm Neowin's account or answer questions, including how the usernames and passwords were acquired. The BBC, however, reported early Monday that Microsoft U.K. is aware of the report that account information had been available on the Web, and said it's "actively investigating the situation and will take appropriate steps as rapidly as possible." If Neowin's account is accurate, the Hotmail hack or phishing attack would be one of the largest suffered by a Web-based e-mail service. Last year, a Tennessee college student was accused of breaking into former Alaska governor Sarah Palin's Yahoo Mail account in the run-up to the U.S. presidential election.

Palin, the Republican vice presidential nominee at the time, lost control of her personal account when someone identified only as "rubico" reset her password after guessing answers to several security questions. David Kernell was charged with a single count of accessing a computer without authorization by a federal grand jury last October; his case is ongoing. Shortly after the Palin account hijack, Computerworld confirmed that the automated password-reset mechanisms used by Hotmail, Yahoo Mail and Google's Gmail could be abused by anyone who knew an account's username and could answer a single security question.

The Internet’s First 40 Years: Top Ten Milestones

While 40 years in a person's lifetime is a very long time, the Internet - which turned 40 today - is really only getting started. Still, like just about any 40-year-old guy, the Internet has packed a lot of changes into its life so far. No birthday celebration for the Internet would be complete without giving recognition to some of the biggest milestones. Deciding which ones is a tough call, because the Internet has made such a huge impact on anyone lucky enough to access it.

So here, in chronological order, is my rather arbitrary list of Top Ten Internet Milestones, gleaned largely from a nostalgic look back through the pages of PC World. As I view things, anyway, it's important to pay tribute to the myriad technologies created over the past four decades to connect people to the Internet - first through modems and then through wireless and cable - as well as to let them access data, radio, and TV in ways once unimaginable. October 29, 1969 - Leonard Kleinrock, a UCLA college professor, sends a two-letter message - "lo" - to a computer at Stanford Research Institute. The Internet is born. October 13, 1994 - The browser eventually to be known as Netscape Navigator is released as beta code.

November 6, 1997 - Intel ships a videoconferencing system that runs on the Internet (gasp!) as well as on ISDN phone lines (remember them?) and corporate LANs. February 18, 1998 - The first V.90 modems, enabling Internet access at the then-whopping rate of 56 Kbps, are shipped to stores by 3Com Corp. Sometime in September 1999 - An Internet-enabled game machine named Dreamcast debuts, pioneering a pathway that will eventually lead to Nintendo's GameCube and Sony's PS3. June 28, 2000 - Metricom rolls out the then-blazingly fast, 128Kbps Ricochet wireless service in Atlanta and San Diego. August 21, 2002 - Together with T-Mobile and HP, Starbucks expands WiFi access to users at 1200 coffee shops throughout the US. Early January, 2009 - Yahoo shows off Connected TV, a platform allowing Web widgets to dock on Internet-connected HDTVs, at the Consumer Electronics Show in Las Vegas. Early July, 2009 - Internet radio services like Pandora, Blip.fm and Last.fm are saved - albeit temporarily - when recording companies agree to make royalty fees more comparable to those paid by satellite services, for example. October 22, 2009 - Microsoft's Internet TV, a new service for accessing Web-based streaming TV shows and movies from directly inside Media Center, finally leaves beta as part of the launch of Windows 7.

EMC executive takes over at storage vendor Xiotech

EMC executive Alan Atkinson is taking over as CEO of Xiotech, a storage company that just secured $10 million in new financing. Atkinson was co-founder and CEO of WysDM, a data protection vendor sold to EMC in April 2008. He remained at EMC as vice president of the company's Storage Software Group, but on Thursday was announced as Xiotech's new CEO. Xiotech said its previous CEO, Casey Powell, will remain on the board of directors and will be a "strategic advisor to Atkinson." "With his extensive knowledge of and experience with data storage, Alan Atkinson is the right leader to take Xiotech to the next level," Ed Glassmeyer of Xiotech's board of directors said in an announcement. Glassmeyer is also general partner of Oak Investment Partners, which owns a majority stake in Xiotech. Atkinson's 21-year career includes positions at StorageNetworks, Goldman Sachs and AT&T Bell Laboratories.

Atkinson takes over at Xiotech just after the company announced a $10 million funding round from private investors. Xiotech, based in Eden Prairie, Minn., plans to use the cash to expand its Intelligent Storage Element (ISE) technology with new products to be released early next year. Xiotech says its ISE architecture is designed to provide 100% usable storage capacity, improving efficiency without a performance hit. Atkinson marked his first day on the job at Xiotech with a blog post. "I can honestly say, after 20+ years in the storage industry (I'm really not THAT old), I've never seen a company this size with so many talented storage folks," he wrote. "We have more patents than most companies five times our size."

Programmer slip-up produces critical bug, Microsoft admits

Microsoft acknowledged Thursday that one of the critical network vulnerabilities it patched earlier in the week was due to a programming error on its part. The flaw, one of 34 patched Tuesday in a massive security update, was in the code for SMB 2 (Server Message Block 2), a Microsoft-made network file- and print-sharing protocol that ships with Windows Vista, Windows 7 and Windows Server 2008. The incorrect variable - "pHeader" instead of "pWI" - produced a vulnerability that Microsoft rated critical, its highest threat ranking. "An attacker who successfully exploited this vulnerability could take complete control of an affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights," read the MS09-050 security bulletin released Tuesday. Attackers could trigger the bug by sending a rigged SMB packet to an unpatched PC. "Look at the two array references to ValidateRoutines[] near the end," said Michael Howard, principal security program manager in Microsoft's security engineering and communications group, referring to a code snippet he showed in a post to the Security Development Lifecycle (SDL) blog. "The array index to both is the wrong variable: pHeader->Command should be pWI->Command." Howard, who is probably best known for co-authoring Writing Secure Code, went on to say that the error was not only in new code, but a "bug of concern." As he did in July when he admitted an extra "&" character in a Microsoft code library created a widespread vulnerability in most company software - and software crafted by third-party developers such as Sun, Cisco and Adobe - Howard argued that the SMB 2 mistake was virtually impossible to catch without a line-by-line review. "There is only one current SDL requirement or recommendation that could potentially find this, and that is fuzz testing," said Howard. "The only other method that could find this kind of bug is very slow and painstaking code review.

Humans are fallible, after all." Fuzzing - subjecting software to a wide range of data input to see if, and where, it breaks - did uncover the bug "very late in the Windows 7 development process," Howard said. The code was peer-reviewed prior to check-in into Windows Vista, but the bug was missed. Although the preview versions of Windows 7 that Microsoft handed out to the public - both the beta from January 2009 and the release candidate posted in May - included the bug, Microsoft caught it in time to patch the RTM, or release to manufacturing, final code that will officially ship next Thursday. The SMB 2 bug in question was not the one that Microsoft publicized last month in a security advisory. That vulnerability, which received attention because exploit code went public, also affected Windows 7 prior to the RTM build.
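The wrong-variable pattern Howard describes - a validation table indexed by a command code taken from the attacker-controlled header rather than from the server's own work item - can be sketched generically. All names below are invented for illustration; this is not Microsoft's SMB 2 source:

```python
# Hypothetical sketch of the bug class Howard describes: a dispatch table
# of validators indexed by the wrong structure's command field.

VALIDATE_ROUTINES = {          # maps command code -> minimum-length validator
    0x00: lambda pkt: len(pkt) >= 4,
    0x01: lambda pkt: len(pkt) >= 8,
}

class Header:                  # parsed from the wire: attacker-controlled
    def __init__(self, command):
        self.command = command

class WorkItem:                # set up by the server: trusted
    def __init__(self, command):
        self.command = command

def validate(header, work_item, packet):
    # BUG: indexes the table with the attacker-controlled header.command;
    # the fix is to index with the server-side work_item.command instead.
    routine = VALIDATE_ROUTINES.get(header.command)
    if routine is None:
        raise KeyError("unknown command")
    return routine(packet)
```

Because the attacker picks `header.command` freely, a packet can be checked against the wrong command's rules - the same mismatch that, in C with a raw array index, becomes an out-of-bounds read.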

Howard also said that he thought Microsoft's SDL process had handled the "low-hanging bugs" in the company's code, leaving what he called "one-off bugs" that are difficult to detect using automated tools. "The majority of the bugs I see in Windows are one-off bugs that can't be found easily through static analysis or education, which leaves only manual code review, and for some bug classes, fuzz testing," he said. "But fuzz testing is hardly perfect." Most analysts this week urged Windows users to put the MS09-050 patches on a high-priority list, if only because exploit code for one of the three SMB 2 vulnerabilities was public knowledge. Microsoft echoed that in its monthly deployment recommendations. This month's security updates, including MS09-050, can be downloaded and installed via the Microsoft Update and Windows Update services, as well as through Windows Server Update Services.
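The fuzz testing Howard credits with finding the bug amounts to feeding a parser large volumes of randomly corrupted input and recording what makes it crash. A minimal random-mutation fuzz loop looks roughly like this (a toy illustration with invented names, not Microsoft's SDL tooling):

```python
import random

def mutate(data: bytes, rate: float = 0.05) -> bytes:
    """Flip random bytes to produce a malformed variant of a valid input."""
    out = bytearray(data)
    for i in range(len(out)):
        if random.random() < rate:
            out[i] = random.randrange(256)
    return bytes(out)

def fuzz(parser, seed: bytes, iterations: int = 1000):
    """Feed mutated inputs to parser; collect the inputs that crash it."""
    crashes = []
    for _ in range(iterations):
        sample = mutate(seed)
        try:
            parser(sample)
        except Exception:
            crashes.append(sample)
    return crashes
```

Real fuzzers add coverage feedback, protocol awareness and crash triage, but even this blind loop illustrates why fuzzing catches bugs that code review misses: it exercises inputs no human would think to try.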

Botnet production eerily like commercial code practice

Botnets are elaborate command-and-control systems used by criminals for sending spam, stealing personal information or launching denial-of-service attacks through hijacked computers. But their underlying malware code structures share common ways to evade detection, and even mimic some commercial code practices, such as digital methods to prevent copying and reverse engineering, says one researcher. "People don't understand why their machines are infected, as they've been running antivirus continuously," says Gunter Ollmann, vice president of research at Damballa, a security start-up specializing in botnet detection. "They're stumped." The answer, he says, is that botnet code designed to infect computers typically makes use of evasion techniques such as "noise insertion" and "chaffing" - generating redundant strings of code that do nothing but make it harder for antivirus or other detection methods to find the malware, because the junk "will stop a string-inspection system from seeing them," says Ollmann, who has 20 years of experience in the malware-analysis arena, including as chief security researcher at IBM. Botnet code is often hidden using "crypters," specialized tools such as the "God of War Crypter," which conceal malware through encryption.

And over the past year or so, botnet fabrication has turned to "protectors" to prevent anyone from using debugging and analysis techniques to reverse engineer botnet code, Ollmann says. One protector popular with cybercriminals is Themida, a tool from Oreans Technologies, mainly used in gaming software to prevent reverse engineering. "Most of the hacker sites will contain PDF guides on how to use these," Ollmann says. "Botmasters have built up almost a production line of systems." These are all just components that could be used in a botnet. Do-it-yourself (DIY) malware construction kits are sometimes offered free as source code, though fully featured binary DIY kits carry a charge. "By offering the free version of the source code, they're showing there's something new and establish their credentials," Ollmann says. "Forums get very interesting. It's like watching a kid's show, with competitors pirating each other's tools - very scrappy." It's a fast-paced code development environment, and if botnet code has been out for more than about three months, "you can probably pick it up for free because it's been pirated," Ollmann says. One of the more troubling aspects of all this, Ollmann says, revolves around sites in The Netherlands for trading and selling malware code, where it's evident that a number of the participants don't appear to be professional cybercriminals but simply misguided young people who "think security is cool fun" and want to build up a reputation by demonstrating they can develop malware and attack tools. The country-specific sites are international in scope; most use English as the shared language, but some are in Russian, too.

In most countries, development and dissemination of malware tools isn't illegal, except perhaps in France, which is known to have some of the strictest laws in this regard, Ollmann says. But when it comes to making use of these tools to construct botnets, it appears the professional criminals who go after the enterprise with botnets "aren't necessarily more advanced" than anyone else, and "it's clear they haven't developed the tools themselves," Ollmann contends. Their particular talent is that "they're very well-organized in how to hide and how to move about."

How to stop IT managers from going rogue

Research shows that nearly half of all data breaches come from inside an organization, sometimes by those trusted to protect sensitive corporate or customer data, which is why industry watchers say enterprise IT departments need to invest in technology that ensures no one person has all the power. "The problem with large organizations is that IT people often have access to production and other sensitive passwords. Often they can simply log in as administrator, and it can be difficult to monitor who actually made what change and when," says Andras Cser, senior analyst with Forrester Research. "There are a lot of insider threats today, and many organizations have access policies that violate best practices." Companies like e-DMZ, Cyber-Ark, Cloakware, Lieberman Software and BeyondTrust attempt to address that need.

Symark acquired BeyondTrust and took on its name in September; the combined company focuses on technology to manage administrator access to Unix and Windows systems. This week BeyondTrust released an updated version of its IT administrator password management software, PowerKeeper 4.0. The product falls in the category of privileged account management software, Cser says, adding that preventing disgruntled IT managers from wreaking havoc is one reason to purchase such a product; another is staying compliant with regulatory standards. "This is a good product for managing password vaults and performing fine-grained privileged access management for Unix systems, and now Windows systems," Cser says. PowerKeeper 4.0 is an appliance, available in physical or virtual form factors, that installs in a customer environment inside the firewall with access to the systems it will manage within the data center. The appliance uses automated password resets and management workflows to ensure that privileged accounts cannot be accessed in inappropriate ways.

This version works with intelligent adapters for any operating system, database or device using SSH and Telnet, communicating with the devices and providing coverage for all systems in heterogeneous environments, the company says. It also automatically discovers and brings under management computers found in Active Directory, which the company says helps provide broader coverage more efficiently through automation. "The administrator that sets the policies can't also be the person in charge of monitoring access in our system," says Saurabh Bhatnagar, vice president of product management for BeyondTrust. "It complies with security and compliance regulations that require a segregation of duties and deals with regulating access to shared accounts so everyone isn't logging in as the same admin." PowerKeeper is part of the company's suite of privileged access lifecycle management products, which addresses access, control, monitoring and remediation capabilities when managing passwords and access to IT environments. "Security, compliance and management efficiencies are the three main drivers for customers," Bhatnagar adds, saying that security managers or chief compliance officers are the typical target customers. PowerKeeper 4.0 is now available as part of BeyondTrust's PowerSeries Early Adopter Program. The PowerKeeper appliance or virtual machine costs $25,000, which includes enough licenses to manage 100 systems and an unlimited number of users.

Security researchers ask: Does self-destructing data really vanish?

Researchers this week published a paper describing how they broke Vanish, a secure communications system prototype out of the University of Washington that generated lots of buzz when introduced over the summer for its ability to make data self-destruct. I gave the system a whirl back in July and found it to be pretty interesting. But interesting wasn't good enough for researchers at Princeton University, the University of Texas and the University of Michigan, who wondered how well the system could really stand up to attack. Ed Felten from Princeton describes in the Freedom to Tinker blog how he, a fellow researcher at Princeton and peers at the University of Michigan and University of Texas figured out how to beat Vanish.

Their paper is titled "Defeating Vanish with Low-Cost Sybil Attacks Against Large DHTs." Vanish exploits the churn on peer-to-peer networks: it creates a key whenever a Vanish user puts the system to use, then divvies up that key and spreads it across the P2P net. Such networks, the same kinds used to share music and other files, change over time as computers jump on or off. As machines leave, portions of the key disappear forever and the original message can no longer be decrypted. Felten wrote that after reading about Vanish during the summer, "I realized that some of our past thinking about how to extract information from large distributed data structures might be applied to attack Vanish. [S]tudent Scott Wolchok grabbed the project and started doing experiments to see how much information could be extracted from the Vuze DHT [Vuze is the P2P network used by Vanish and DHT is a distributed hash table]. If we could monitor Vuze and continuously record almost all of its contents, then we could build a Wayback Machine for Vuze that would let us decrypt [vanishing data objects] that were supposedly expired, thereby defeating Vanish's security guarantees." Later, Felten ran into an ex-student now at the University of Texas who happened to be investigating Vanish as well, and they wound up collaborating. "The people who designed Vanish are smart and experienced, but they obviously made some kind of mistake in their original work that led them to believe that Vanish was secure - a belief that we now know is incorrect," Felten writes. Felten goes on to tell an interesting tale about the timing of this realization and the experiments that followed. "We didn't want to ambush the Vanish authors with our break, so we took them aside at the [Usenix Security conference in Montreal in August] and told them about our preliminary results. This led to some interesting technical discussions with the Vanish team about technical details of Vuze and Vanish, and about some alternative designs for Vuze and Vanish that might better resist attacks."
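The key-splitting idea at the heart of Vanish can be sketched compactly. The actual prototype reportedly uses threshold secret sharing, so a message survives the loss of some shares; the sketch below shows only the simpler all-shares-required XOR variant, with invented function names, purely as an illustration of why losing shares destroys the key:

```python
import os

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split key into n shares; ALL n are needed to reconstruct (n-of-n XOR split)."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = bytes(key)
    for s in shares:  # last share = key XOR all random shares
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def recombine(shares: list[bytes]) -> bytes:
    """XOR every share together to recover the original key."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out
```

Scatter the shares across a churning DHT and, once nodes holding any share leave, the key (and with it the encrypted message) is unrecoverable - which is also why an attacker who records nearly the whole DHT, as the Sybil attack does, defeats the scheme.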

The University of Washington researchers investigated the other researchers' findings, updated Vanish and issued a report of their own on the experience. Among other things, they came up with a way to make breaking Vanish more expensive, Felten writes. The University of Washington researchers sum up their latest findings here as well, noting that Vanish does not have to be wedded to Vuze and in fact might be better based on a hybrid system that uses multiple distributed storage systems. They write: "However, we recommend that at this time, the Vanish prototype only be used for experimental purposes. We do encourage researchers, however, to analyze it and improve upon it. We strongly believe that realizing Vanish's vision would represent a significant step toward achieving privacy in today's unforgetful age." For more on network research, read our Alpha Doggs Blog.

Intel CTO: Machines could ultimately match human intelligence

Will machines ever be as smart as humans? Intel CTO Justin Rattner thinks that someday, they might. The notion of a technological "singularity," a time when machines match and surpass human intellect, has been popularized by thinkers such as inventor and author Raymond Kurzweil, who commonly cites Moore's Law in his arguments about the exponential growth of technology.

Rattner's views on the singularity are sought after, given that he is CTO of the world's biggest chipmaker and the head of Intel Labs, the company's primary research arm. His views are also held in high regard in the world of supercomputing, of course, and he will deliver the opening address at the SC supercomputing conference in Portland, Ore., in November. In a recent interview with Network World, Rattner said he has "tried to sidestep the question of when [the singularity] might occur," but says machine intelligence is constantly increasing due to laws of accelerating returns, "of which Moore's Law is perhaps the best example." "There will be a surprising amount of machines that do exhibit human-like capabilities," Rattner said. "Not to the extent of what humans can do today, but in an increasing number of areas these machines will show more and more human-like intelligence, particularly in the perceptual tasks. So yeah, at some point, assuming all kinds of advances and breakthroughs, it's not inconceivable we'll reach a point that machines do match human intelligence." Already, scientists are working on placing neural sensors and chips into the brain, allowing people to control prosthetic limbs with their own thoughts. This is likely to become a "relatively routine procedure" in a few years, Rattner said.

Rattner said that while many commentators are preoccupied with the far-off singularity, he concerns himself more with how laws of accelerating returns "are real" and could lead to amazing advances in technology, including augmentation of the human body. "Assuming that interface technology progresses in an accelerating way, the possibilities of augmenting human intelligence with machine intelligence become increasingly real and more diverse," Rattner said. Nearly 80% of the world's 500 fastest supercomputers use Intel processors. The world's first petaflop machines, capable of performing one thousand trillion calculations per second, came online just last year. But Rattner says the supercomputing industry is already looking forward to the era of the exaflop - 1,000 times faster than a petaflop. He says the fundamental technologies behind a future exaflop machine could be demonstrated by the middle of the next decade, and - depending on government investment - the first exaflop machines could become operational in the second half of the decade.

"Now that we've achieved petascale computing, there's all this interest in getting the next factor of 1,000," Rattner said. "But we can't get there with today's technology, largely because of power considerations. You'd need a 500-megawatt nuclear power station to run the thing." The industry will have to move that number down to something practical, perhaps tens of megawatts, Rattner said. But this still depends on overcoming limitations in today's computing architectures, and the work is just getting started. "We've got a lot of really big engineering challenges," Rattner said. "Today, we just don't know how to get there."

iStockphoto guarantees its collection

Starting today, iStockphoto, the micropayment royalty-free image, video, and audio provider, will legally guarantee its entire collection against copyright, moral right, trademark, intellectual property, and rights-of-privacy disputes for up to $10,000. The new iStock Legal Guarantee, delivered at no cost to customers, covers the company's entire 5 million-plus collection. Additional coverage under an Extended Legal Guarantee totaling $250,000 is available for the purchase of 100 iStock credits. Although common for traditional stock houses, such legal guarantees have not been standard in microstock because of the low prices. Recently, however, Vivozoom, another microstock company, took a similar step to guarantee its collection. "Our first line of defense has always been - and continues to be - our rigorous inspection process," said Kelly Thompson, chief operating officer of iStockphoto. "The Legal Guarantee is simply an added layer of protection for our customers, many of whom are using microstock more than ever before." iStock says that files purchased and used in accordance with its license will not breach any trademark, copyright, or other intellectual property rights or rights of privacy.

And, if a customer does get a claim, iStock will cover the customer's legal costs and direct damages up to a combined total of $10,000. iStock customers can increase their coverage for legal fees and direct damages up to a combined total of $250,000 by purchasing the Extended Legal Guarantee with iStock credits (100 credits cost between $95 and $138). iStock expects the program to be popular with a very small percentage of sophisticated media buyers with very specific needs, and considers it a value-added service to customers rather than a major source of revenue.

SANS: Security Ignores the Two Biggest Cyber Risks

Two major cyber risks dwarf all others, but organizations are failing to invest in the proper tools to mitigate them, choosing instead to focus security attention on lower-risk areas, according to a report released Tuesday by the SANS Institute. The research, which draws upon data collected from March to August 2009 from thousands of organizations, claims companies give insufficient attention to today's risks and put their systems in peril by continuing to maintain the status quo with an emphasis on operating system patches and other outdated protection methods. Attack data for the research was drawn from TippingPoint appliances deployed at customer sites, while vulnerability data was collected via Qualys' scanning services. The most surprising conclusion may be that client-side application software vulnerabilities pose the largest threat to network security, as opposed to operating system vulnerabilities, which tend to get more attention when it comes to patching.

The report notes that most large organizations take at least twice as long to patch client-side vulnerabilities as they take to patch operating system vulnerabilities, placing a higher priority on the lesser risk. SANS claims many spear-phishing attacks exploit vulnerabilities in commonly used programs such as Adobe PDF Reader, QuickTime, Adobe Flash and Microsoft Office. "This is currently the primary initial infection vector used to compromise computers that have Internet access," the report states. In addition to unpatched client applications, SANS said the other priority for IT security now should be web application vulnerabilities. The two risks, and their tendency to be low priority for security teams, create a perfect storm for infection. Web applications constitute more than 60 percent of the total attack attempts observed on the Internet, according to the report. "These vulnerabilities are being exploited widely to convert trusted web sites into malicious web sites serving content that contains client-side exploits," the report states. "Web application vulnerabilities such as SQL injection and Cross-Site Scripting flaws in open-source as well as custom-built applications account for more than 80 percent of the vulnerabilities being discovered." Despite the enormous number of attacks, and despite widespread publicity about these vulnerabilities, most web site owners fail to scan effectively for the common flaws and become unwitting tools used by criminals to infect the visitors who trusted those sites to provide a safe web experience, said SANS researchers. With so many Internet-facing web sites vulnerable, and so many applications containing bugs, it is easy for attackers to take advantage of unsuspecting web browsers.
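The SQL injection flaws the report highlights typically arise from building query text out of user input; the standard defense is a parameterized query, where the driver binds the value and never parses it as SQL. A minimal sketch using Python's built-in sqlite3 module (the schema and data are invented, purely for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # VULNERABLE: user input is spliced directly into the SQL text.
    return conn.execute(
        "SELECT secret FROM users WHERE name = '%s'" % name).fetchall()

def lookup_safe(name):
    # SAFE: the ? placeholder binds the value; input is never parsed as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

# A classic payload turns the unsafe WHERE clause into a tautology and
# dumps every row, while the parameterized query treats it as a literal.
payload = "x' OR '1'='1"
```

The same placeholder discipline (with driver-specific syntax) applies to any database API, and it is the reason injection remains a developer error rather than an unavoidable risk.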

When users visit a trusted site, they feel safe downloading documents, or simply opening documents, music or video that exploit client-side vulnerabilities. "Some exploits do not even require the user to open documents," the report states. "Simply accessing an infected web site is all that is needed to compromise the client software. In many cases, the ultimate goal of the attacker is to steal data from the target organizations and also to install back doors through which the attackers can return for further exploitation." The victims' infected computers are then used to propagate the infection and compromise other internal computers and sensitive servers incorrectly thought to be protected from unauthorized access by external entities. The report's other conclusions include data showing that operating systems continue to have fewer remotely exploitable vulnerabilities of the kind that lead to massive Internet worms. However, the number of attacks against buffer overflow vulnerabilities in Windows tripled from May-June to July-August and constituted over 90 percent of attacks seen against the Windows operating system. Other than Conficker/Downadup, no new major worms for OSs were seen in the wild during the reporting period, the report said. The research also finds rising numbers of zero-day vulnerabilities: "World-wide there has been a significant increase over the past three years in the number of people discovering zero-day vulnerabilities, as measured by multiple independent teams discovering the same vulnerabilities at different times.

Some vulnerabilities have remained unpatched for as long as two years."

Linux driver chief calls out Microsoft over code submission

After a kick in the pants from the leader of the Linux driver project, Microsoft has resumed work on its historic driver code submission to the Linux kernel and avoided having the code pulled from the open source operating system. The submission was greeted with astonishment in July when Microsoft made the announcement, which included releasing the code under a GPLv2 license Microsoft had criticized in the past. Microsoft's submission includes 20,000 lines of code that, once added to the Linux kernel, will provide the hooks for any distribution of Linux to run on Windows Server 2008 and its Hyper-V hypervisor technology. Greg Kroah-Hartman, the Linux driver project lead who accepted the code from Microsoft in July, on Wednesday called out Microsoft on the linux-kernel and driver-devel mailing lists, saying the company was not actively developing its hv drivers.

"Unfortunately the Microsoft developers seem to have disappeared, and no one is answering my emails. If they do not show back up to claim this driver soon, it will be removed in the 2.6.33 [kernel] release. So sad...," he wrote. HV refers to Microsoft Hyper-V. He also posted the message to his blog. In all, Kroah-Hartman specifically mentioned 25 driver projects that were not being actively developed and faced being dropped from the main kernel release 2.6.33, which is due in March. Kroah-Hartman said calling out specific projects on the mailing list is a technique he uses all the time to jump-start those that are falling behind. Thursday, however, in an interview with Network World, Kroah-Hartman said Microsoft got the message. "They have responded since I posted," he said, and Microsoft is now back at work on the code it pledged to maintain. "This is a normal part of the development process. They are not the only company."

On top of chiding Microsoft for not keeping up with code development, Kroah-Hartman took the company to task for the state of its original code submission. "Over 200 patches make up the massive cleanup effort needed to just get this code into a semi-sane kernel coding style (someone owes me a big bottle of rum for that work!)," he wrote. He said the driver project was not a "dumping ground for dead code." Kroah-Hartman says there are coding style guidelines and that Microsoft's code did not match them. "That's normal and not a big deal. It happens with a lot of companies," he said. But the large number of patches did turn out to be quite a bit of work, he noted. However, the nearly 40 projects Kroah-Hartman detailed in his mailing list submission, including the Microsoft drivers, will all be included in the 2.6.32 main kernel release slated for December.

He said Thursday that Microsoft still has not contributed any patches around the drivers. "They say they are going to contribute, but all they have submitted is changes to update the to-do list." Kroah-Hartman says he has seen this all before and seemed to chalk it up to the ebbs and flows of the development process.

Patch Tuesday: What the experts are saying

Windows was hit hard on Microsoft's Patch Tuesday with eight of nine patches addressing issues in all the shipping versions of both the OS client and server.

Related story: Microsoft, Apple, Mozilla patches put heavy load on IT

The lone non-Windows patch fixes holes in Office, Visual Studio, ISA Server and BizTalk Server.

The nine bulletins in total address 19 vulnerabilities, of which 15 are critical. Here's what experts are saying about the flood of patches:

"Many people are going to be looking at the WINS (039) anonymous remote code execution attack as a potential worm vector, but they shouldn't minimize the IIS denial of service attack or Bulletin 038. These vulnerabilities mean that anyone could become infected simply by opening a movie file. Who doesn't use the Internet these days to watch videos? This month had the potential to be the month of ATL bug fixes, but it has turned out to be more of a smorgasbord. These updates are going to require lots of IT resources for testing and deployment." - Andrew Storms, director of security operations, nCircle

"There's no break from patching this summer. Microsoft is playing catch up with these patches as cybercriminals have already used some of the serious vulnerabilities to commandeer vulnerable Windows computers." - Dave Marcus, director of security research and communications, McAfee Avert Labs

"All of the ActiveX issues patched this month could be easily exploited and can impact even the average computer user. For example, any user who has Microsoft Office on their machine could be vulnerable to the Microsoft Office Web Components vulnerabilities. Similarly, every user with Windows XP SP3 or Vista could also be susceptible to one of the Remote Desktop Connection issues." - Ben Greenbaum, senior research manager, Symantec Security Response

"It's been a long time since it has been so operating system focused. In the past year, 75% or more of the bulletins have been focused on Internet Explorer, Office and some of the media players. So this month to have four of them be server-side exploits – IIS 7.0, Workstation, MSMQ and WINS – is unusual. The server-side vulnerabilities are a hacker's best friend. I have been keeping my eye out for them the past year and I have seen so few of them. It is like Microsoft software has gotten so much better, it is harder and harder to find the server-side vulnerabilities. It seems like they were all aggregated and released today. So if I am a hacker, I have quite the playground now to play in." - Eric Schultze, CTO, Shavlik Technologies


NASA: Robots working perfectly in space mission

After five days of carrying astronauts, lifting massive pieces of equipment and "walking" up and down the spine of the International Space Station, NASA says its robots are performing perfectly in its most technically complicated mission yet.

The seven-person crew of the space shuttle Endeavour is docked and working with the crew of the space station to install the final pieces of the Japanese laboratory on the station. The work, which began on Saturday, simply couldn't be done without robotic arms - one on the Endeavour and two on the space station - doing all the heavy lifting, said Michael Curie, a spokesman for NASA.

"It's very exciting to see all the robotic equipment perform to the expectations that we've all had," Curie told Computerworld. "It's wonderful when you get everyone together in space after a year or two of training, and everything they've practiced using robotics is working just as planned. It's amazing to watch it all working against the beautiful blue background of the Earth."

Holly Ridings, lead space station flight director for the Endeavour mission, said in a previous interview that this is one of the most technical missions ever undertaken by NASA.

The robotic arm on the space station, dubbed Canadarm II, and the robotic arm on Endeavour have been working steadily since this past Saturday when they worked hand-in-hand to unload and maneuver the final part of the Japanese Kibo lab into place. The arm on the station lifted a 4-ton piece of the Japanese complex out of the shuttle's payload bay. This piece, which has been dubbed a "front porch", will be permanently attached to the outside of the Japanese module. It is designed to hold its own payloads, as well as host experiments that need to be conducted in outer space.

Once the station's robotic arm, called the big arm, extracted the porch from the shuttle, it handed the porch off to the space shuttle's own robotic arm. While the shuttle's arm held the porch, the station's arm moved itself about 50 feet down the length of the space station, inching along much like a child's Slinky toy.

Either end of the big arm can be used as the base, just as either end can be used as a gripping hand. Once the arm handed off the porch, its gripper end swung over and attached to the space station and the end that was originally attached to the station let go and freed itself to be used as the gripping hand.

At that point, the big arm reached out and took back the porch and moved it into place against the Japanese module where it automatically attached itself.

"It's fabulous," said Curie. "When you consider how large and massive these objects are and how easy the robotic arms make it look, it's astounding. We're doing things today that weren't even imagined in 1981 when the shuttle program first flew."

Since then, the two arms have been used to lift a massive cargo carrier out of the space shuttle and move it to an area where the astronauts could reach it. The carrier holds equipment on both of its sides. On one side, a spare antenna for the station was attached, along with a spare pump for a cooling system and a motor that runs along the backbone of the station like a train on a rail. The other side of the carrier held six batteries, which are designed to store power drawn from the station's solar arrays. The batteries are being installed today.

With the cargo carrier out of the space shuttle, on Monday an astronaut attached himself to the end of the big arm; as he held a spare part, the big arm slowly and gently moved him to where the parts needed to be stowed. Taking the spares one at a time, the astronaut made three separate trips on the robotic arm over the course of five hours in space.

"It moved him slowly and methodically because of the size of the spares and the close proximity of the other pieces of the space station," said Curie. "It would not have been feasible for a spacewalker to have carried these objects from one place to another without the robotic arm. It made what would have taken three space walks possible to do in one."

Curie added that this kind of work - where a human is attached to a robot 220 miles above the Earth's surface - takes a lot of confidence in the machine.

"It's not only a trust in the robotic arms but in the crew members who are using them," he noted. "The [astronauts] practice as a team for over a year to make sure they understand what is required. It looks easy because they're very proficient at it and because the hardware is very trustworthy."

On Thursday, the new robotic arm on the Japanese laboratory will be taken out for its first official spin. It's been tested but this week it will be used in a real operation for the first time. This third arm will be used to set up the new lab and get the experiments into place.

"This is the very first time that three robotic arms have been used on one mission. It won't be the last, but it will be the first," said Curie. "I think humans and robotics will be tied together as we move forward in the exploration of space."

FAQ: How to get an iPhone 3G S on Friday

If it's summer, it must be time for a new iPhone.

That's what Apple wants you to think, of course, which is why last week it pulled out all the stops - those it could with master marketeer CEO Steve Jobs still on medical leave - when it introduced the newest iPhone, dubbed the "3G S."

The "S," said Apple, stands for "speed," although current iPhone owners who can't buy one at the subsidized price might instead say the "S" stands for "spendy" or "stiffed," or even worse.

Still, the now-annual event will probably draw lines at Apple's retail stores later this week, and unless Apple has filled the pipeline with more inventory than it did last year, shortages of the iPhone 3G S are likely. So how do you get your hands on one? Good opening question. We have more than just that one, though, along with answers, naturally.

Happy hunting.

When does the iPhone 3G S go on sale? In the U.S., Apple's retail stores open their doors on Friday, June 19, at 8 a.m. local time. AT&T's retail stores will open an hour earlier, at 7 a.m. local time, but only for people who pre-ordered via the Web or at a store. (According to The Boy Genius Report, AT&T had sold out its pre-order allotment by Saturday, June 13, and any subsequent pre-orders won't be shipped to a local store for pick-up until seven to 14 days after the order date.)

AT&T will let customers who didn't pre-order into its stores starting at 8 a.m. local time.

Best Buy and Wal-Mart, the other two outlets selling iPhones, will open at their usual business hours on Friday.

What countries get the iPhone 3G S on Friday? Apple said that customers in the U.S., Canada, France, Italy, Spain, Switzerland and the U.K. get first crack at the new iPhone. Other countries will begin to sell their shipments of the upgrade starting July 9.

What's the easiest way to get an iPhone 3G S? Apple is taking pre-orders for both models of the iPhone 3G S on its Web site, and claims that it will deliver the phone to customers by June 19 (that's what it said as of Saturday, June 13, anyway). It's taking only single-line pre-orders, however, so people who want two or more iPhones, or want to add a line to an existing account, are somewhat out of luck: Apple will reserve ordered iPhones, but you'll have to retrieve them by going to a retail store.

During the week of the iPhone 3G S' unveiling, AT&T also took pre-orders on its Web site as well as from walk-ins, telling the former that their iPhones would be shipped via two-day priority, and the latter that they would have to come to a nearby store. By Saturday, June 13, however, a message on AT&T's site read: "Pre-orders for iPhone 3G S will ship 7 to 14 days after your order is placed. Orders will be shipped on a first-come, first-served basis."

Best Buy, which was also taking pre-orders from walk-in customers, said on Saturday that it too had sold out its pre-order inventory. Wal-Mart is not taking pre-orders, according to conversations with salespeople at several of its stores.

Okay, looks like I'm standing in line. Will there be one Friday? Does Steve Jobs wear black? Does Steve Wozniak dance like a "Teletubby going mad?"

People who pre-ordered through AT&T, however, will get preferential treatment. The carrier has said it will start serving them an hour before other customers. The only concession Apple's making is to open its retail stores an hour earlier than usual.

What about Best Buy and Wal-Mart? "We haven't been told anything about a camp-out or an early opening," said a Best Buy sales representative Saturday. The word from Wal-Mart was essentially the same.

What will I pay for the iPhone 3G S? Good question. Frankly, that depends on your relationship with AT&T, and if you own an older iPhone. If you're currently not an AT&T customer, you're green: You qualify for the subsidized prices of $199 for the 16GB model and $299 for the 32GB version.

If you are a current iPhone owner, or an AT&T customer who uses another type of phone, you may be eligible for the $199/$299 prices, but there's a good chance you're not. Depending on your situation, you may have to pony up an additional $200, putting the prices at $399 and $499.

And that's made a lot of people hot under the collar.

Depending on your situation? What does that mean? On which way the wind's blowing or whether your name starts with "Q."

Okay, both of those are a stretch, but eligibility for the subsidized prices seems to be a state secret at AT&T. Generally speaking, you're more likely to qualify for the $199/$299 prices the closer you are to the end of your current service contract, although there seem to be other criteria in play.

Typically, U.S. consumers must fulfill their contract - two years is the general rule - before they're eligible to get a new phone for free or purchase one at a subsidized price. AT&T's not breaking new ground here.

That hasn't stopped more than 12,500 people - as of Sunday - from signing a Twitter petition calling on AT&T to let existing customers purchase the iPhone 3G S at the same price as new customers. Nor did it stop a crisis communications expert from saying that the company had to move quickly to quell the revolt or risk alienating thousands of people who will jump ship if Apple ends AT&T's exclusive deal next summer. His 48-hour deadline, however, has come and gone, with not a peep out of AT&T.

Current AT&T customers can determine their eligibility for the iPhone 3G S' discounted prices online by logging in to their wireless account.

How long will it take to get an iPhone? Not even Tim Cook, the chief operating officer running the company while Jobs gets well, knows that. But we're betting the inside-the-store time will be shorter than last year, when the activation servers crashed about an hour-and-a-half into sales on launch day, and Apple and AT&T sent users home with an expensive brick.

That's because Apple and AT&T have discarded in-store activation, a requirement last year, and will let buyers walk out with an iPhone 3G S, then activate it later in the comfort of their own home via iTunes.

Apple re-instituted online sales about a month ago, but gave the same caveat then as it is now: Only new, single-line accounts are eligible. AT&T started selling iPhones online and allowing at-home activation using iTunes in December 2008.

If iTunes activation sounds familiar, it should: That's the process Apple used in 2007 when it launched the first-generation iPhone. But iTunes activation wasn't completely painless either. Plenty of buyers who had queued up in long lines found that when they finally got home, iTunes was dead in the water. AT&T, not surprisingly, blamed Apple, saying that its partner's servers had melted under the strain.

Should you expect a repeat of 2007? Impossible to know. Cynics like us expect the worst, so when things do go smoothly - and sometimes they do, you know - we get the momentary high of having won a round in the battle against technology.

What do I bring with me to the store? To get out of the store with an iPhone 3G S, you'll need a credit card to pay for the phone. Later, during the iTunes activation process, you'll need your Social Security number for the credit check required by AT&T. If you're ordering online, you need all of that to replace an existing iPhone; to open a new account, you'll also need to enter your billing address and date of birth.

To transfer an existing number to the new iPhone 3G S, you'll also need your current cell number and password or PIN to that account.

Think I'll wait a few days to buy. Is there a way to tell whether the iPhone 3G S is in stock before I drive to the store? (I'm trying to reduce my carbon footprint.) Apple will undoubtedly fire up its inventory tool - which it's used the last two years to report stores with models in stock - so you can check online before you leave the house.

The availability tool will be here when the iPhone 3G S launches Friday. It will reflect the next day's status after 9 p.m. local time for the store you're checking. AT&T didn't have anything like that last year, when it told customers to call local stores before driving. Expect the same this time.

How will I transfer settings, e-mail accounts and text messages from an older iPhone to my new iPhone 3G S? Apple's not posted an updated support document to describe the process - which involves syncing the older iPhone, then restoring the backup to the iPhone 3G S using iTunes - but it's probably going to be identical or at least similar to the instructions from last year.

That support document is available here.

They have this recession goin' on, in case you haven't heard. Anything for someone like me who has an iPhone but doesn't want to shell out $200 (or more) for a new one? You get iPhone 3.0, the newest upgrade to the iPhone's software, which Apple previewed last March and talked up more last week. iPhone 3.0 launches on Wednesday, June 17. Among the new features: copy-paste, MMS, Spotlight search and landscape-mode keyboard.

iPhone 3.0 is free to iPhone owners, but costs iPod Touch users $9.95.

Apple and AT&T also reduced the price of the existing iPhone 3G 8GB to $99, so if you have $100 and an aging first-generation iPhone, you can go that route. Have a little more cash? For $149, you can pick up the 16GB iPhone 3G from AT&T and Best Buy.

Those reduced-price iPhones are available now.

Microsoft picks Bing as name for new search engine

Microsoft has picked Bing as the branding for its new search engine, putting to rest months of speculation of what the next iteration of Live Search would be called.

Microsoft CEO Steve Ballmer revealed Bing at the D7 conference in California Thursday; the company said it will roll out the product over the next several days until it is fully available to everyone on Wednesday.

Bing and Kumo were the names the company was considering for the new search engine, but in recent days speculation grew that Bing was the front-runner. Microsoft confirmed earlier this year that it was internally testing a search engine called Kumo - a Japanese word that can mean "cloud" - but the company never confirmed the official name for its new engine.

Highlights from an interview with Ballmer at D7 are posted on the conference Web site. Microsoft also demonstrated the new search engine at the conference.

Microsoft said it has designed Bing as a "decision engine" to help people search the Web more intelligently and to simplify everyday tasks such as getting directions, and said the tool is aimed at giving people more ways to organize search results to their preferences.

For example, Bing includes a set of navigation and search tools called an Explore Pane on the left side of the page that offers a feature called Web Groups, which organizes search results not only in the pane but also in the actual results generated on the page.

Microsoft also has added Related Searches and Quick Tabs features, which provide a table of contents for different categories of search results.

Bing also helps people find what the engine considers the most relevant results by highlighting them in various ways, according to Microsoft. A feature called Best Match surfaces what the engine considers the best result for a search query and calls it out for the user. Another feature called Deep Links gives people more insight into what resources a site offers.

Bing also offers what is called Quick Preview, which gives a brief preview of search results in a box that appears when someone mouses over the search results link. The preview gives a snapshot of the information the link provides so people can decide whether they want to click on it.

Bing also includes one-click access to information through an Instant Answers feature. Microsoft said it designed this to help users find information quickly within the body of a search page so they do not have to use additional clicks to get what they are looking for.

Microsoft has redesigned its search engine in the hopes of closing the gap with Google, which has the lion's share of the search queries. It's also been reported that the company is spending US$80 million to $100 million to promote Bing. Google currently has about 80 percent share of all online searches to Microsoft's 6 percent, according to most analysts, who have said that it will take revolutionary features for people to switch from Google to another search engine.

People can find out more about Bing and give it a test run online.