Tag Archive for 2009

IPv6 Has a Business Case

According to a Network World article, business incentives for upgrading to IPv6 are completely lacking today. A survey of network operators conducted by the Internet Society (ISOC) found no compelling business case for the next-generation Internet protocol.

In the report, ISOC says that ISPs, enterprises, and network equipment vendors see “no concrete business drivers for IPv6.” However, survey respondents said customer demand for IPv6 is on the rise, and they are planning or deploying IPv6 because they feel it is the next major development in the evolution of the Internet. All of the ISOC survey respondents said they are planning for IPv6, and most have begun deployment.

IPv6 deployment remains spotty, even for organizations committed to the technology, the survey found. When asked how they were deploying IPv6, a little over half said they were deploying IPv6 on parts of their network rather than their whole network. Several respondents said they envision parts of their networks never operating with IPv6.

What’s driving network operators to IPv6 is demand from customers rather than IPv4 address depletion. The survey found that almost half of respondents reported customer pressure to migrate to IPv6. Fewer respondents cited a need for additional address space or a desire for simpler addressing and less complexity on their networks.

According to the survey, 77% of respondents are using dual-stack, running IPv4 and IPv6 side by side, and 45% used some kind of tunneling to implement IPv6 on top of their existing IPv4 networks. However, tunneling was largely viewed as a temporary measure that either had been phased out or would be phased out in the near future; respondents said tunneling would be turned off once their upstream network provider offered native IPv6 service. 45% of respondents stated that they had part of their network running a native IPv6 deployment.
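A dual-stack deployment is visible at the application layer: the resolver returns both IPv6 (AAAA) and IPv4 (A) addresses, and a client tries IPv6 first, falling back to IPv4 if needed. Here is a minimal sketch of that pattern in Python (the hostname is a placeholder, and the IPv6-first ordering depends on the operating system’s address selection policy):

```python
import socket

def connect_dual_stack(host: str, port: int) -> socket.socket:
    """Try every address the resolver returns; on a dual-stack host
    the OS usually orders IPv6 (AAAA) results ahead of IPv4 (A)."""
    last_error = None
    for family, socktype, proto, _name, addr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        sock = socket.socket(family, socktype, proto)
        try:
            sock.connect(addr)
            return sock  # connected over IPv6 if reachable, else IPv4
        except OSError as err:
            sock.close()
            last_error = err
    raise last_error or OSError("no usable address")

# Example (placeholder hostname):
# conn = connect_dual_stack("www.example.com", 80)
```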

On the technical side, more than half of the survey respondents said that additional address space is the primary motivator for IPv6. Network operators put less weight on the auto-configuration, built-in security, and mobility features found in IPv6.

rb-

The Network World article misses the point. The article does note that ISOC contacted 90 members and only 22 organizations responded, a response rate of less than 25%. That is not the best body of evidence from which to declare there is no business reason to deploy IPv6.

Experts predict IPv4 addresses will be gone by 2012. At that point, all ISPs, government agencies, and corporations will need to support IPv6 on their backbone networks. IPv4 addresses are like crude oil: there are only so many of them around, and a scarce resource costs more as the pool shrinks.
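The arithmetic behind the scarcity argument is simple: IPv4 addresses are 32 bits wide and IPv6 addresses are 128 bits, so the pools differ by a factor of 2^96. A quick check in Python:

```python
# IPv4 is a 32-bit address space; IPv6 is 128-bit.
ipv4_pool = 2 ** 32   # 4,294,967,296 addresses (~4.3 billion)
ipv6_pool = 2 ** 128  # ~3.4e38 addresses

print(f"IPv4 pool: {ipv4_pool:,}")
print(f"IPv6 pool: {ipv6_pool:.2e}")
print(f"Ratio (2**96): {ipv6_pool // ipv4_pool:.2e}")
```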

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Energy Star for Servers Released

The U.S. Environmental Protection Agency released an Energy Star specification for computer servers on May 15, 2009. The new specification covers standalone servers with one to four processor sockets and is in part a reaction to EPA estimates that IT equipment will account for 3 percent of all U.S. electricity consumption by 2011.

Andrew Fanara of the Energy Star product development team helped spearhead the process of getting a spec for servers, DataCenter News reports. “EPA believes this new server spec is an important first step to help attract attention to the need and opportunity to cut cost and save energy in federal data center facilities, especially during a time of tight budgets,” Fanara told GCN.

The new specification includes:

  • Power supply efficiency requirements, which should increase efficiency and reduce waste heat
  • Power consumption limits for when the server is idle
  • Single-socket servers are limited to 60 watts
  • Two- and three-socket servers are limited to 151-221 watts
  • Allowances for additional installed components
  • A power and performance data sheet detailing power consumption in a common format
  • The ability to report energy-related statistics to data center management software.
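As a rough sketch of how the idle limits might be checked (the real spec has finer-grained categories; mapping two sockets to the low end and three sockets to the high end of the cited 151-221 watt range is my simplification, not the spec’s actual tiering):

```python
# Hypothetical check against the idle-power figures cited above.
IDLE_LIMITS_WATTS = {1: 60, 2: 151, 3: 221}  # sockets -> idle limit (W)

def qualifies_at_idle(sockets: int, measured_idle_watts: float,
                      component_allowance_watts: float = 0.0) -> bool:
    """True if measured idle power fits within the limit plus any
    allowance for additional installed components."""
    limit = IDLE_LIMITS_WATTS.get(sockets)
    if limit is None:
        # Four-socket limits aren't cited above, and servers with more
        # than four sockets fall under the later Tier 2 spec.
        return False
    return measured_idle_watts <= limit + component_allowance_watts

print(qualifies_at_idle(1, 55))         # True: under the 60 W limit
print(qualifies_at_idle(3, 230, 15.0))  # True: 221 W limit + 15 W allowance
print(qualifies_at_idle(2, 180))        # False: over the 151 W limit
```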

Vendors Respond to Energy Star for Servers

Major server manufacturers are already submitting their products for Energy Star approval. HP says that two of its most popular servers, the DL360 and DL380 G6, are now Energy Star compliant, with more servers to be added to the list soon.

IBM’s next-generation POWER6 processor has power management abilities that let it drop down to a 100-watt level.

Jay Dietrich, program manager at IBM’s corporate environmental affairs group, told GCN, “Overall, we think that there has been good progress on the server requirements, and we think EPA has done some good work in getting that specification focused on the issues.”

Not to be left out, Dell launched an energy-efficient server line in December. Dell touts its PowerEdge Energy Smart 1950 III and 2950 III servers as its green alternatives.

Sun Microsystems has touted the energy efficiency of its UltraSPARC T1 “Niagara”-based servers for a while. The Niagara CPU typically uses 72 watts of power at 1.4 GHz.

Criticism of Energy Star for Servers

The new Energy Star criteria have their critics. The biggest complaint is that a qualifying server need only show energy efficiency at idle, powered on but doing no work. This is like comparing the fuel economy of a Hummer and a Prius sitting at a stoplight: both use a similar amount of fuel idling, not going anywhere. Many argue that the energy spent idling matters less than the miles per gallon the vehicle gets while driving, doing its work.

However, firms are becoming increasingly aware of this issue and are addressing it. Organizations are deploying virtualization to consolidate underutilized servers and get as much performance per watt as possible from their hardware. Most IT organizations have underutilized servers that spend a great deal of time idling, so idle server power consumption is relevant, but it is not the whole story. Servers are not like desktop or laptop computers: they are not meant to sit idle but are designed to be highly utilized and available. “A heavily utilized server is much more energy effective than a small server running at very low utilization rates,” Albert Esser, vice president of data center infrastructure at Dell, told GCN.

Subodh Bapat, a distinguished engineer at Sun, explained to Data Center News another drawback of the program: it doesn’t take into account how many cores per processor a machine has. “The fact is, when you go from a server that has four processors with two cores each to two processors with four cores each, you save energy. That’s not recognized by the spec,” he said. “If you’re shipping a server with one processor, it doesn’t matter if you have one core or two cores or four or eight. You still get the same idle power allowance. There’s no benefit for the fact that you can do, say, eight times work with a fewer number of watts.”
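Bapat’s objection is easy to see with a little arithmetic. Treating core count as a crude proxy for work capacity (my simplification, purely for illustration), a fixed per-socket idle allowance gives no credit for core density:

```python
# A single-socket server gets the same idle allowance whether it has
# one core or eight, so a part doing ~8x the work per watt earns no
# extra credit under the spec.
SINGLE_SOCKET_ALLOWANCE_W = 60.0  # the single-socket figure cited above

for cores in (1, 2, 4, 8):
    work_per_watt = cores / SINGLE_SOCKET_ALLOWANCE_W
    print(f"{cores} core(s): allowance {SINGLE_SOCKET_ALLOWANCE_W:.0f} W, "
          f"~{work_per_watt:.3f} units of work per allowed watt")
```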

“This is a great first step, but it’s not a complete spec,” says Bapat. “It’s a good start toward finding out which servers are better than others on an energy basis.” Bapat wasn’t entirely critical of the Energy Star program for servers; he praised, for example, the requirement that a compliant server be capable of measuring real-time environmental data. “Transparency is always a good thing. Energy Star requires the ability to report power consumption data pretty much across the range of utilization and at all times that the server is on. If you want to know how much [power is being consumed], you should be able to ask it and it should tell you. That’s a very useful feature.”

EPA Responds

The Tier 2 Energy Star specification, which will cover servers with more than four processor sockets, blade servers, and fault-tolerant machines, is expected in October 2010. The Tier 2 spec will also define a metric that compares server performance with energy consumption; EPA’s Fanara speculates that finding the magic numbers could take a while. The EPA is also developing an Energy Star spec for data center facilities and is collecting data from volunteering data centers now. Mr. Fanara said his group also hopes to have a framework document for an Energy Star for data storage equipment out in June 2009.

EPA introduced Energy Star in 1992 as a voluntary labeling program designed to reduce greenhouse gas emissions through energy efficiency and to help consumers pick out energy-efficient products. The Energy Star label can be found on more than 50 kinds of products, as well as new homes and commercial and industrial buildings. If a manufacturer qualifies its product, it can place an Energy Star label on it, and the product information can also be displayed on the manufacturer’s and the Energy Star websites.

rb-

I agree with Sun’s Bapat that the current version of the Energy Star requirements for servers is a good first step. Just like any 1.0 version release, there is still a lot of work to be done.

 

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Server Sprawl

Data Center Knowledge reports on an interesting survey from Netcraft. Netcraft has developed a technique for identifying the number of computers (rather than IP addresses) acting as web servers on the internet. It can then attribute these servers to hosting locations through reverse DNS lookups. This provides an independent view, with a consistent methodology, of the number of web servers worldwide, their rate of growth over time, and the operating systems and web server technology used at each hosting company.
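The attribution step rests on reverse DNS: resolve each server’s IP address to its PTR hostname, whose domain usually identifies the hosting company. A minimal sketch in Python (the address below is a documentation placeholder, not a real server):

```python
import socket

def hosting_hint(ip_address: str) -> str:
    """Reverse-resolve an IP address to its PTR hostname; the domain
    in the result usually points at the hosting provider."""
    try:
        hostname, _aliases, _addresses = socket.gethostbyaddr(ip_address)
        return hostname
    except socket.herror:
        return "no PTR record"

# Example with a placeholder documentation address:
print(hosting_hint("192.0.2.1"))
```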

Through an analysis of public reports and the Netcraft server count, Data Center Knowledge developed a list of organizations with a large number of servers.

[Chart: estimated number of servers at major organizations, per Data Center Knowledge]

The Data Center Knowledge article goes on to speculate on the degree of server sprawl at some of the more secretive firms:

There’s a widely circulated estimate of 450,000 servers, but that number is at least three years old. If it was ever correct, it certainly isn’t anymore, given Google’s data center building spree. Google’s recently revealed container data center holds more than 45,000 servers, and that’s a single facility built during 2005.

There are actually some numbers on Microsoft’s server count, but they are also dated. Screenshots from the company’s data center management software suggest that Microsoft was running about 218,000 servers in mid-2008. The company’s new Chicago container farm will hold up to 300,000 servers, so the count will change rapidly when that facility is deployed.

Amazon says very little about its data center operations, but we know that it bought $86 million in servers from Rackable in 2008, and stores 40 billion objects in its S3 storage service.

With more than 160 million active users between its online auction house and PayPal payment service and 443 million users on Skype, eBay has a massive data center infrastructure. The company houses more than 8.5 petabytes of data in huge data warehouses. We’re not certain what kind of server count this requires, but it’s certainly in the 50,000 club.

Yahoo, the third major search portal, likely has more than 50,000 servers in operation to support its large free hosting operation as well as its paid hosting service and Yahoo Stores.

It’s the world’s largest domain registrar with more than 35 million domains under management, but effective cross-selling of its hosting plans has also made GoDaddy one of the largest shared hosting operations in the world. Its infrastructure is probably similar in scope to that of 1&1 Internet.

While server “ownership” is less distinct with system integrators, EDS has an enormous data center operation. Company documents say EDS is managing 380,000 servers in 180 data centers.

With more than 8 million square feet of data center space, IBM also houses an enormous number of servers in its data centers, both for itself and its customers.

Facebook says only that it has more than 10,000 servers, but it’s been saying that since April 2008 and it’s now serving 200 million users and hosting at least 40 billion photos. Facebook is clearly way beyond 10,000 servers.

 

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Lessons From Botnet Demise

Brian Krebs, on the Washington Post blog Security Fix, profiled a case where a bot-herder killed 100,000 zombie clients in his botnet. The bot-herder invoked the “kill operating system,” or kos, command built into the Zeus botnet crimeware. The kos command caused the infected PCs to crash with the Blue Screen of Death (BSOD). The Madrid-based security services firm S21sec reports that invoking the kos command only results in a blue screen and subsequent difficulty booting the OS; there appears to be no significant data loss, and neither the Trojan binaries nor the start-up registry entries are removed. In its post, S21sec looks at what happens to an infected computer when it receives a Zeus kos command.

Russian botnet

The Zeus crimeware was designed by the Russian malware author known as A-Z to harvest financial and personal data from PCs with a Trojan. UK computer security firm Prevx found the Zeus crimeware available for just $4,000. The fee includes a DIY “exe builder” that incorporates a kernel-level rootkit; according to Prevx, this means it can hide from even the most advanced home or corporate security software. RSA detailed the capabilities of the Zeus crimeware in 2008. Zeus also includes advanced “form injection” capabilities that allow it to alter web pages as they are rendered on the user’s PC. For example, criminals can add an extra field or fields to a banking website asking for credit card numbers, Social Security numbers, and so on. The bogus field makes it look like the bank is asking for this data after you have logged on, while you believe you are securely connected to your bank.

rb-

The reason for BSODing 100,000 machines isn’t quite clear. Several security experts, including S21sec and ZeusTracker (currently down due to an apparent DDoS), have offered their opinions. The implications of this action, however, are clear.

Botnets and their related crimeware are dangerous on several fronts: they can steal massive amounts of personal data, launch denial-of-service attacks, and execute arbitrary code. I agree with Krebs that the scarier reality of malicious software is that these programs leave ultimate control over victim machines in the hands of the attacker.

Politically motivated attackers

For the time being, it is still in the attackers’ best interest to leave compromised systems in place so they can plunder more information. However, imagine the social chaos if the 9 million PCs infected with Conficker, including machines in hospitals from Utah to the UK, were under the control of al-Qaeda or other similarly minded groups. These politically motivated attackers could order all the infected machines to BSOD, creating computer-enhanced chaos. One of the forgotten lessons of 9/11 is that our technology can be hijacked and turned against us. This could be the opening of a new type of cyber warfare.

 

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Cars Collaborate to Reduce Risks

According to InScience, scientists and engineers from the National Center for Atmospheric Research (NCAR) tested an innovative technological system in the Detroit area in April 2009. The study will ultimately help protect cars and drivers from being surprised by black ice, fog, and other hazardous weather conditions.

The prototype system is designed to gather detailed information about weather and road conditions from moving cars. NCAR’s road weather system is part of IntelliDrive. IntelliDrive is a national initiative overseen by the Department of Transportation (DOT) to use new technologies to make driving safer and improve mobility.

The project included collecting information from 11 specially equipped cars in the Detroit area. Test drivers in Jeep Cherokees, Ford Edges, and a Nissan Altima were on the prowl for adverse conditions, seeking out heavy rain and snow to collect, store, and transmit data. The test vehicles used sensors to collect data about weather conditions such as temperature, pressure, and humidity.

An on-board digital memory device recorded that information, along with indirect signs of road conditions, such as the car’s windshield wipers being switched on or activation of the anti-lock braking system. The information was transmitted to a central database, where it was integrated with other local weather data and traffic observations, as well as details about road material and alignment. The processed data will then be used to update motorists in the area when hazards are present and, when appropriate, suggest alternate routes. Engineers analyzed the reliability of the system by comparing data from the cars with observations from radars and weather satellites.
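The data model NCAR describes is simple: direct sensor readings plus indirect indicators, stamped with position and time and pushed to a central database. A sketch of what one such report might look like (the field names are illustrative inventions, not NCAR’s actual schema):

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class VehicleWeatherReport:
    """Hypothetical shape of one probe-vehicle observation."""
    vehicle_id: str
    timestamp: float      # seconds since epoch
    latitude: float
    longitude: float
    air_temp_c: float     # direct sensor readings
    pressure_hpa: float
    humidity_pct: float
    wipers_on: bool       # indirect signs of road conditions
    abs_activated: bool

def serialize(report: VehicleWeatherReport) -> str:
    """Encode a report for transmission to the central database."""
    return json.dumps(asdict(report))

report = VehicleWeatherReport(
    vehicle_id="test-07", timestamp=time.time(),
    latitude=42.33, longitude=-83.05,  # Detroit area
    air_temp_c=-1.5, pressure_hpa=1012.0, humidity_pct=88.0,
    wipers_on=True, abs_activated=False)
print(serialize(report))
```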

Sheldon Drobot, the NCAR program manager in charge of the project, told InScience, “The system will tell drivers what they can expect to run into in the next few seconds and minutes, giving them a critical chance to slow down or take other action.”

Not only will the system provide motorist warnings, it will also alert emergency managers to hazardous driving conditions. The alerts would help state highway departments efficiently keep roads clear of snow. The system can also help meteorologists refine their forecasts by providing continual updates about local weather conditions.

The tests helped the NCAR team refine its software to accurately process data from motor vehicles. “The results look very encouraging,” Drobot says. “The tests show that cars can indeed communicate critical information about weather conditions and road hazards.”

One of the biggest challenges for NCAR is how to process the enormous amounts of data that could be generated by about 300 million motor vehicles. “It’s not enough to process the information almost instantaneously,” says William Mahoney, who oversees the system’s development for NCAR. “It needs to be cleaned up, sent through a quality control process, blended with traditional weather data, and eventually delivered back to drivers who are counting on the system to accurately guide them through potentially dangerous conditions.”


Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.