Gates threatens to buy millions and millions of servers for Microsoft

While Google builds its own gear


Say what you will about Bill Gates. The man is consistent.

Gates' farewell speech at this week's TechEd conference closed out his full-time role at Microsoft with the usual thud. The man has a gift for stating the obvious and detailing what's to come when it's already here.

This time around, Gates went after The Cloud. And how could he resist? Nothing receives as much hype as the SaaSy world of tomorrow.

Microsoft is all about The Cloud and shipping what used to be boxed-up software as a service. Never mind that the shift to such a software delivery method threatens to place a death grip on Microsoft's bottom line. And please ignore the fact that Microsoft doesn't have any very good answers about how to fix this problem. Just remember that it's all about The Cloud. Big time.

You guys read a lot about Google and Microsoft building $500m data centers. These beasts pop up in Podunk towns around the US faster than Dairy Queens and Wal-Marts. In reality, though, you ain't seen nothing yet.

"We're taking everything we do at the server level, and saying that we will have a service that mirrors that exactly," Gates said at TechEd. "The simplest one of those is to say, okay, I can run Exchange on premise, or I can connect up to it as a service. But even at the BizTalk level, we'll have BizTalk Services. For SQL, we'll have SQL Server Data Services, and so you can connect up, build the database.

"It will be hosted in our cloud with the big, big data center, and geo-distributed automatically. This is kind of fascinating because it's getting us to think about data centers at a scale that never existed before. Literally, today we have, in our data center, many hundreds of thousands of servers, and in the future, we'll have many millions of those servers."

Many millions? Holy hell.

Well, like, you'll need to create special servers to function on that scale, right? General purpose gear meant for the Fortune 10,000 abyss simply won't do. Any bright ideas on how to solve that one, Bill?

"When you think about the design of how you bring the power in, how you deal with the heating, what sort of sensors do you have, what kind of design do you want for the motherboard, you can be very radical, in fact, come up with some huge improvements as you design for this type of scale," Gates said. "And so going from a single server all the way up to this mega data center that Microsoft and only a few others will have, it gives you an option to run pieces of your software at that level.

"You'll have hybrids that will be very straightforward. If you want to use it just for an overload condition, or disaster recovery, but the software advances to make it so when you write your software you don't have to care where those things are located, those are already coming into play. So the services way of thinking about things is very important, and will cause a lot of change."

See what we mean about the thud? Here's Gates giving his long goodbye and basically outlining what Google has been up to for years.

We may be underplaying the customization work that Microsoft already has underway, but we doubt it. The company buys its gear from the Tier 1 set and Rackable Systems. It's pretty well locked into the old model, while Google has managed to turn Intel of all companies into a custom design house, with the chip maker crafting bespoke motherboards. Google's also building its own switches and doing unspeakable things with disks.

The closest Microsoft has come to radical - at least in public - is with a new data center in Chicago that will center on servers packed into shipping containers. The likes of Rackable and Dell are falling over themselves to design something to Microsoft's liking, and Microsoft looks set to create a very dense and energy-efficient data plant.

At what point will Microsoft take things to the next level and go Googlesque by demanding even more from the server vendors or by turning into its own server shop? Moving millions and millions of servers to Redmond seems to make little sense for the vendors selling them, since that type of volume will kill any available margins. Meanwhile, Microsoft might find that the paperwork alone on such purchases is more of a pain than just cobbling together the kit in-house.

We're not sure if Microsoft will choose the build-over-buy option, but we are sure that it will be well behind the rest of its competitors with the "right" decision. That's the kind of culture Bill has left. ®

Scientists find way of protecting computers against viruses

Code Red, a virulent computer worm, wreaked havoc in 2001, infecting more than 350,000 machines in 14 hours and causing an estimated $2.6 billion in losses worldwide.

Now techies at Ohio State University have discovered a way to contain worms like Code Red, which blocked network traffic to subway stations and 911 call centres in the US, and also sought to target the White House website.

"We wanted to find a way to catch infections in their earliest stages, before they get that far,' said Ness Shroff, who led the team that worked on the project.

'These worms spread very quickly. They flood the net with junk traffic, and at their most benign, they overload computer networks and shut them down,' said Shroff.

The key, Shroff and his colleagues found, is software to monitor the number of scans that machines on a network send out. When a machine starts sending out too many scans - a sign that it has been infected - administrators should take it off line and check it for viruses.

This would help network administrators to isolate infected units and sequester them for repairs.
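The mechanics are simple enough to sketch in code. Below is a minimal, hypothetical Python illustration of the per-host scan counter the researchers describe; the class name, threshold handling and quarantine hook are our own stand-ins, not the team's actual software.

    from collections import defaultdict

    # Per-host scan budget; the researchers' simulations capped scans at
    # 10,000, well above a typical network's monthly scan volume.
    SCAN_LIMIT = 10_000

    class ScanMonitor:
        """Count outbound scans per host; flag hosts that exceed the budget."""

        def __init__(self, limit=SCAN_LIMIT):
            self.limit = limit
            self.counts = defaultdict(int)
            self.quarantined = set()

        def record_scan(self, host_ip):
            """Call once for each outbound connection attempt a host makes."""
            if host_ip in self.quarantined:
                return
            self.counts[host_ip] += 1
            if self.counts[host_ip] > self.limit:
                # In practice an administrator would now pull the machine
                # off line and check it for infection.
                self.quarantined.add(host_ip)
                print(f"{host_ip}: {self.limit} scans exceeded - take it off line")

A clean machine would take months to spend that budget; an infected one burns through it in minutes.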

In simulations pitted against the Code Red worm, they were able to contain the infection to fewer than 150 hosts across the whole Internet, 95 percent of the time.

The strategy sounds straightforward enough. A scan is simply a probe sent out to an Internet address, a little like a query typed into Google.

The difference is that a virus sends out many scans to many different destinations in a very short period of time, as it searches for machines to infect. 'The difficulty was figuring out how many scans were too many,' Shroff said.

Shroff was working at Purdue University in 2006 when doctoral student Sarah Sellke suggested making a mathematical model of the early stages of worm growth.

With Saurabh Bagchi at Purdue, they developed a model that calculated the probability of the worm spreading, as a function of the maximum number of scans allowed before a machine was taken off line.

In simulations, they pitted their model against the Code Red worm, as well as the SQL Slammer worm of 2003, capping the allowed number of scans at 10,000 because that is well above the number of scans a typical computer network would send out in a month.

'An infected machine would reach this value very quickly, while a regular machine would not,' Shroff explained. 'A worm has to hit so many IP addresses so quickly in order to survive.'
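That intuition is easy to test with a toy model. The sketch below is our own simplification of the idea, not the Purdue model itself: each infected host gets a fixed scan budget, each scan finds a new victim with some small probability, and the outbreak ends when no infected host has budget left. The hit probability and trial count are invented, illustrative numbers.

    import random

    def simulate(scan_budget=10_000, p_hit=5e-5, cap=150, trials=200):
        """Toy containment model: returns the fraction of trials in which the
        infection stayed below `cap` hosts before every infected machine
        exhausted its scan budget (and was taken off line)."""
        contained = 0
        for _ in range(trials):
            infected = 1
            active = [scan_budget]   # remaining scans for each infected host
            while active and infected < cap:
                budget = active.pop()
                # New infections this host causes before it hits the cut-off.
                new = sum(random.random() < p_hit for _ in range(budget))
                infected += new
                active.extend([scan_budget] * new)
            if infected < cap:
                contained += 1
        return contained / trials

    # With each host expected to infect only 0.5 others before being cut
    # off, the outbreak almost always dies out quickly:
    print(simulate())  # typically prints a value near 1.0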

These findings are described in the current issue of IEEE Transactions on Dependable and Secure Computing.

AMD: New Chips Consume Half the Power of Core 2 Duo

AMD announced its entry into the 65nm manufacturing generation Tuesday with a new line of 65-watt "energy-efficient" processors that, the company claimed, already consume just under 50 percent less power than the Intel Core 2 Duo.

AMD's novel argument provided a backdrop for four new chips—the AMD Athlon 64 X2 4000+, 4400+, 4800+, and 5000+—which will be sold for the same price as their older counterparts, fabricated on the 90nm process. The Athlon 64 X2 line will receive the 65nm conversion treatment first, a transition to be completed by the first quarter of 2007 at AMD's Fab 36 in Dresden.

AMD's notebook and server processor lines will receive the same 65nm treatment, to be completed some time in 2007, according to Jack Huynh, who is responsible for marketing and business development in AMD's desktop division. AMD's standard Athlon 64 and Sempron lines will lag behind the X2's conversion, as they are not "mainstream" parts, Huynh said.

Shifting to a finer manufacturing process means a chip needs less power, and gives off less waste heat, to run at a given speed. In desktops, that means the chip can be clocked faster while staying within a given power envelope; in notebooks, overall power consumption can be reduced while maintaining a given speed. AMD's energy-efficient chips split the difference, offering power savings and a quieter desktop PC environment.
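A rough sense of why a shrink buys that headroom comes from the standard first-order model of CMOS power draw, in which dynamic power scales with capacitance, the square of the supply voltage, and clock frequency. The voltages and clock below are illustrative only, not AMD's figures:

    def dynamic_power(capacitance, voltage, frequency, activity=1.0):
        """First-order CMOS dynamic power: P = a * C * V^2 * f."""
        return activity * capacitance * voltage**2 * frequency

    # A finer process typically lets the chip run at a lower supply voltage.
    # Dropping from a notional 1.40 V to 1.20 V at the same 2.6 GHz clock
    # cuts dynamic power by about a quarter, since power scales with V squared.
    p_old = dynamic_power(1e-9, 1.40, 2.6e9)
    p_new = dynamic_power(1e-9, 1.20, 2.6e9)
    print(f"power saved: {1 - p_new / p_old:.0%}")  # -> power saved: 27%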

"With the Vista rollout, it's more and more important to multitask and multicore without a super loud box—that's the end goal," Huynh said.

In May, AMD announced an energy-efficient processor roadmap, up to the 4800+ chip, that established a 65-watt power threshold. Certain other Athlon 64 X2 processors, including the 3500+ and 3800+, were also classified as "small form factor" energy-efficient processors and designed to run at a maximum of 35 watts. All of the new 65nm energy-efficient chips are classified to run at 65 watts maximum power.

But AMD's sales team is also attempting to convince customers that even its older "Rev. F" 65-watt, 90nm chips consume less power than Intel's Core 2 Duo components, a delta that widens further when its new 35-watt, 65nm chips are compared.

AMD's argument goes like this: Modern desktop and notebook processors constantly scale up and down between full speed and an idle state, a capability AMD has branded "Cool 'n' Quiet". At any given moment, pushed to full load by an application, AMD's chips run hotter and consume more power. But across a typical computing day—where a user might check his email or surf the Web—the processor idles more often than not. At idle, AMD's 90nm Athlon 64 X2 consumes 7.5 watts. A 35-watt, 65nm chip will idle at 3.8 watts, AMD said. By comparison, the 65nm Core 2 Duo idles at 14.3 watts.

AMD's 90nm/65-watt Athlon 64 X2 chips consume 47.6 percent less power at idle than a 65nm Core 2 Duo chip, the company said. A 35-watt X2 consumes 73.3 percent less than the same Core 2 Duo. However, directly comparing the two chips' power draw in a real-world computing environment, over the course of a day, would be a daunting task, Huynh acknowledged.
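Those percentages square with the idle figures AMD quotes. A quick back-of-the-envelope check (our arithmetic, not AMD's):

    CORE2_IDLE = 14.3   # watts: AMD's figure for a 65nm Core 2 Duo at idle
    X2_90NM_IDLE = 7.5  # watts: 90nm/65-watt Athlon 64 X2 at idle
    X2_65NM_IDLE = 3.8  # watts: 35-watt/65nm Athlon 64 X2 at idle

    for name, watts in [("90nm X2", X2_90NM_IDLE), ("65nm X2", X2_65NM_IDLE)]:
        saving = 1 - watts / CORE2_IDLE
        print(f"{name}: {saving:.1%} less idle power than Core 2 Duo")
    # -> 90nm X2: 47.6% less idle power than Core 2 Duo
    # -> 65nm X2: 73.4% less idle power than Core 2 Duo (AMD rounds to 73.3)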

Processor power comparisons have traditionally been made either with real-world measurements or via a number called "Thermal Design Power" (TDP), a guideline given to engineers that estimates the maximum amount of power a chip will consume. Huynh called TDP and process technology comparisons "purely a numbers game".

"We don't want to get caught in the processor technology game," Huynh said. "We have superior power management features than our competition."

The new processors meet or exceed the new Energy Star requirements for idle power consumption, which go into effect in July 2007, Huynh added.

Huynh also discouraged those who might hope that AMD might create a low-power version of the Quad FX or "4x4" platform, which was criticized for its high power consumption. "We always have to look at all the options, but that [the Quad FX] roadmap is 125-watts, extended through next year," he said.

The 65nm development work was done in conjunction with IBM. Normally, a process conversion would take nearly a full year; AMD's goal is to complete it within about half that time, Huynh said. The next step? Catching up with Intel on 45nm: Intel last week announced test samples of its 45nm "Penryn" processors, and AMD's goal is to close that gap within 18 months, he said.

Clarification: Although AMD's latest 65nm energy-efficient chips announced in this article all run at 65 watts TDP, the company included power numbers based on a 35-watt chip as well.

Fastest Mac ever

Up to 2x faster.

Eight-core processing power was once only top-of-the-line. Now it comes standard. This time around, performance is more phenomenal than ever — up to two times faster than the previous standard-configuration Mac Pro.1 And with the multicore technology enhancements of Mac OS X Leopard, the new Mac Pro is a force to be reckoned with.

More power with less power.

Inside the new Mac Pro is the latest technology from Intel: Quad-Core Intel Xeon “Harpertown” processors. These processors run at blazingly fast speeds up to 3.2GHz. Based on the new 45-nm Intel Core microarchitecture, they deliver amazing performance but still maintain outstanding energy efficiency.

Cache count.

A huge amount of L2 cache — 12MB per processor — keeps frequently used data and instructions close to the processor cores and improves overall performance. 6MB of cache is shared between pairs of processor cores, allowing an individual core to use all the available shared cache at any one time.

Built at full tilt.

With the fastest Xeon architecture available, the new Mac Pro features 1600MHz dual independent frontside buses. These 64-bit buses give each processor a direct connection to the system controller and deliver improved processor bandwidth of up to 25.6GB per second — 20 percent greater than the previous Mac Pro. With a new system architecture, speedier system buses, and fast 800MHz DDR2 fully buffered DIMM memory, Mac Pro memory throughput is up to 1.6 times faster than before.2
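The 25.6GB-per-second figure is straightforward bus arithmetic: each 64-bit bus moves 8 bytes per transfer at an effective 1600MHz, and there are two independent buses. A quick check (ours, not Apple's):

    BUS_WIDTH_BYTES = 64 // 8  # each 64-bit bus moves 8 bytes per transfer
    TRANSFER_RATE_HZ = 1600e6  # 1600MHz effective transfer rate
    NUM_BUSES = 2              # dual independent frontside buses

    per_bus = BUS_WIDTH_BYTES * TRANSFER_RATE_HZ  # 12.8 GB/s per processor
    total = per_bus * NUM_BUSES                   # combined bandwidth
    print(f"{total / 1e9:.1f} GB/s")              # -> 25.6 GB/s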

Every Intel Xeon processor features an enhanced SSE4 SIMD engine. Capable of completing 128-bit vector computations in a single cycle, SSE4 is ideal for transforming large sets of data, such as applying a filter to an image or rendering a video effect.

Smarter memory.

The Mac Pro incorporates a 256-bit-wide, fully buffered memory architecture with Error Correction Code (ECC), which corrects single-bit errors and detects multiple-bit errors automatically. These features are especially important in mission-critical or compute-intensive environments. Apple designed a more stringent thermal specification for the Mac Pro FB-DIMMs, so the internal fans spin at slower speeds and keep the system quiet.

How Microsoft can 'kill' Google

When Steve Ballmer yelled at a departing Microsoft employee that he would “kill Google”, we had no idea just how direct a method he had in mind. Buying all or part of AOL may be the first part of the master plan, as Google relies heavily on the advertising pages that come from AOL, which currently syndicates its search to Google.

One estimate suggested that Google would lose as much as $380m of advertising revenue if AOL dropped its search engine and took on MSN's. That would cut Google’s profit by something like 25 per cent, potentially giving its huge share price something of a tumble. No wonder Google is thought to be entering the bidding to partner with Time Warner on AOL instead of Microsoft.

However, the move by Microsoft could still backfire, although with its cash mountain you would expect it to win the day. Google’s only chance is to paint a sufficiently rosy picture for Time Warner’s management of what an AOL partnered with Google would look like; do that, and perhaps a lot more than that $380m could be saved.

For instance, the new physical fiber network that Google is believed to be putting together could be used to transport more than just voice, advertising and wi-fi traffic. Could it also become a conduit for video services, providing another route to market for the remainder of Time Warner’s content? Could the Google Video search capability index all of Time Warner’s precious content and give it another lease of life?

It’s too late for the Google Talk VoIP service to go out to all the AOL customers, because AOL has already launched its own complete VoIP service. Still, the AOL Time Warner merger had a certain logic originally, and perhaps a company as imaginative as Google could make that logic work.

On the other hand, Microsoft in June 2003 paid Time Warner $750m, mostly in settlement of legal disputes that AOL had inherited from Netscape when it bought that company right in the middle of the Microsoft antitrust trial. But the deal also gave AOL rights to use certain Microsoft tools, and the two said they would collaborate on long-term digital media initiatives, some of which they are well into.

That agreement was certainly not a mere settlement of differences: it included free use of Internet Explorer by AOL for seven years, collaboration on Windows Media Player and DRM software, and early access to Microsoft technology for AOL.

And since then the two companies, Time Warner and Microsoft, have become almost inextricably interlinked, working together on standards and buying into companies like ContentGuard together.

So Microsoft comes to the table ahead on this deal, and it has the money to tempt Time Warner.

The New York Post has painted the deal as a 50-50 partnership, with Microsoft buying half of AOL, while other reports suggest the deal is nothing like that adventurous and amounts to a form of marketing co-operation.

Yahoo! also has time to throw its hat in the ring, and discussions between it and Time Warner have also been reported. AOL has been losing subscription customers rapidly, which is why it recently shifted its business from purely subscription-based to increasingly advertising-based.

Intel launches Bluetooth-killer

Intel plans to challenge the Bluetooth short-range wireless spec with a new wireless technology called Wi-Fi PAN.

The technology, which was developed by Ozmo Devices, will allow peripherals including wireless headsets, keyboards and mice to be connected to laptops and mobile phones through a standard Wi-Fi network, removing the need for a separate Bluetooth antenna.

Ozmo Devices said Wi-Fi PAN operates at a nine-meter range similar to Bluetooth's, but claims data transfer rates three times faster, at 9Mb per second.

Wi-Fi PAN is expected to be made available to consumers next year.

What's New in Firefox 3

Firefox 3 is based on the Gecko 1.9 Web rendering platform, which has been under development for the past 34 months. Building on the previous release, Gecko 1.9 has more than 14,000 updates including some major re-architecting to provide improved performance, stability, rendering correctness, and code simplification and sustainability. Firefox 3 has been built on top of this new platform resulting in a more secure, easier to use, more personal product with a lot more under the hood to offer website and Firefox add-on developers.

More Secure
  • One-click site info: Click the site favicon in the location bar to see who owns the site and to check if your connection is protected from eavesdropping. Identity verification is prominently displayed and easier to understand. When a site uses Extended Validation (EV) SSL certificates, the site favicon button will turn green and show the name of the company you're connected to.
  • Malware Protection: malware protection warns users when they arrive at sites which are known to install viruses, spyware, trojans or other malware.
  • New Web Forgery Protection page: the content of pages suspected as web forgeries is no longer shown.
  • New SSL error pages: clearer and stricter error pages are used when Firefox encounters an invalid SSL certificate.
  • Add-ons and Plugin version check: Firefox now automatically checks add-on and plugin versions and will disable older, insecure versions.
  • Secure add-on updates: to improve add-on update security, add-ons that provide updates in an insecure manner will be disabled.
  • Anti-virus integration: Firefox will inform anti-virus software when downloading executables.
  • Vista Parental Controls: Firefox now respects the Vista system-wide parental control setting for disabling file downloads.
  • Effective top-level domain (eTLD) service better restricts cookies and other restricted content to a single domain.
  • Better protection against cross-site JSON data leaks.
Easier to Use
  • Easier password management: an information bar replaces the old password dialog so you can now save passwords after a successful login.
  • Simplified add-on installation: the add-ons whitelist has been removed making it possible to install extensions from third-party sites in fewer clicks.
  • New Download Manager: the revised download manager makes it much easier to locate downloaded files, and you can see and search on the name of the website where a file came from. Your active downloads and time remaining are always shown in the status bar as your files download.
  • Resumable downloading: users can now resume downloads after restarting the browser or resetting the network connection.
  • Full page zoom: from the View menu and via keyboard shortcuts, the new zooming feature lets you zoom in and out of entire pages, scaling the layout, text and images, or optionally only the text size. Your settings will be remembered whenever you return to the site.
  • Podcasts and Videocasts can be associated with your media playback tools.
  • Tab scrolling and quickmenu: tabs are easier to locate with the new tab scrolling and tab quickmenu.
  • Save what you were doing: Firefox will prompt users to save tabs on exit.
  • Optimized Open in Tabs behavior: opening a folder of bookmarks in tabs now appends the new tabs rather than overwriting.
  • Location and Search bar size can now be customized with a simple resizer item.
  • Text selection improvements: multiple text selections can be made with Ctrl/Cmd; double-click drag selects in "word-by-word" mode; triple-clicking selects a paragraph.
  • Find toolbar: the Find toolbar now opens with the current selection.
  • Plugin management: users can disable individual plugins in the Add-on Manager.
  • Integration with Windows: Firefox now has improved Windows icons, and uses native user interface widgets in the browser and in web forms.
  • Integration with the Mac: the new Firefox theme makes toolbars, icons, and other user interface elements look like a native OS X application. Firefox also uses OS X widgets and supports Growl for notifications of completed downloads and available updates. A combined back and forward control make it even easier to move between web pages.
  • Integration with Linux: Firefox's default icons, buttons, and menu styles now use the native GTK theme.
More Personal
  • Star button: quickly add bookmarks from the location bar with a single click; a second click lets you file and tag them.
  • Tags: associate keywords with your bookmarks to sort them by topic.
  • Location bar & auto-complete: type in all or part of the title, tag or address of a page to see a list of matches from your history and bookmarks; a new display makes it easier to scan through the matching results and find that page you're looking for. Results are returned according to their frecency (a combination of frequency and recency of visits to that page) ensuring that you're seeing the most relevant matches. An adaptive learning algorithm further tunes the results to your patterns!
  • Smart Bookmarks Folder: quickly access your recently bookmarked and tagged pages, as well as your more frequently visited pages with the new smart bookmarks folder on your bookmark toolbar.
  • Places Organizer: view, organize and search through all of your bookmarks, tags, and browsing history with multiple views and smart folders to store your frequent searches. Create and restore full backups whenever you want.
  • Web-based protocol handlers: web applications, such as your favorite webmail provider, can now be used instead of desktop applications for handling mailto: links from other sites. Similar support is available for other protocols (Web applications will have to first enable this by registering as handlers with Firefox).
  • Download & Install Add-ons: the Add-ons Manager (Tools > Add-ons) can now be used to download and install a Firefox customization from the thousands of Add-ons available from our community add-ons website. When you first open the Add-ons Manager, a list of recommended Add-ons is shown.
  • Easy to use Download Actions: a new Applications preferences pane provides a better UI for configuring handlers for various file types and protocol schemes.
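Mozilla's notes don't spell out the frecency formula, so the following is only an illustrative Python sketch of the idea: weight each visit by how recently it happened, so a page visited often and recently outranks one visited often but long ago. The bucket boundaries and weights here are invented for illustration.

    import time

    # Invented recency buckets: (max age in days, weight per visit).
    RECENCY_WEIGHTS = [(4, 100), (14, 70), (31, 50), (90, 30), (float("inf"), 10)]

    def frecency(visit_timestamps, now=None):
        """Score a page from its visit history: frequency scaled by recency."""
        now = now or time.time()
        score = 0
        for ts in visit_timestamps:
            age_days = (now - ts) / 86_400
            for max_age, weight in RECENCY_WEIGHTS:
                if age_days <= max_age:
                    score += weight
                    break
        return score

    # A page visited three times this week outranks one visited five times
    # six months ago:
    now = time.time()
    this_week = [now - days * 86_400 for days in (1, 2, 3)]
    last_spring = [now - 180 * 86_400] * 5
    assert frecency(this_week, now) > frecency(last_spring, now)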
Improved Platform for Developers
  • New graphics and font handling: new graphics and text rendering architectures in Gecko 1.9 provide rendering improvements in CSS and SVG, as well as improved display of fonts with ligatures and complex scripts.
  • Color management: (set gfx.color_management.enabled on in about:config and restart the browser to enable.) Firefox can now adjust images with embedded color profiles.
  • Offline support: enables web applications to provide offline functionality (website authors must add support for offline browsing to their site for this feature to be available to users).
  • A more complete overview of Firefox 3 for developers is available for website and add-on developers.
Improved Performance
  • Speed: improvements to our JavaScript engine as well as profile guided optimizations have resulted in continued improvements in performance. Compared to Firefox 2, web applications like Google Mail and Zoho Office run twice as fast in Firefox 3, and the popular SunSpider test from Apple shows improvements over previous releases.
  • Memory usage: Several new technologies work together to reduce the amount of memory used by Firefox 3 over a web browsing session. Memory cycles are broken and collected by an automated cycle collector, a new memory allocator reduces fragmentation, hundreds of leaks have been fixed, and caching strategies have been tuned.
  • Reliability: A user's bookmarks, history, cookies, and preferences are now stored in a transactionally secure database format which will prevent data loss even if their system crashes.
