Tag Archive for 2009

IPv4 Addresses Drying Up In China

Over at ChinaTechNews, they are reporting that China may soon run out of IP addresses. According to the China Internet Network Information Center (CNNIC), at the current allocation speed China’s IPv4 address resources can only meet demand for about 830 more days, which works out to roughly January 1, 2011. Li Kai, director in charge of the IP business for CNNIC’s international department, says that new IPv6 network addresses are only used among educational networks in China.
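CNNIC’s projection is simple date arithmetic. Here is a minimal sketch; the start date is my own back-calculation from the January 2011 estimate, not a date stated by CNNIC:

```python
from datetime import date, timedelta

# Hypothetical report date -- 830 days before 2011-01-01 lands in
# late September 2008, so the countdown presumably started about then.
report_date = date(2008, 9, 23)

days_remaining = 830  # CNNIC's projected supply at the current allocation speed
exhaustion = report_date + timedelta(days=days_remaining)

print(exhaustion)  # → 2011-01-01, matching the article's estimate
```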

rb-

So apparently, China isn’t much further along with IPv6 deployment than Europe (GEANT) and North America (Internet2), where it is primarily the research and educational communities that run large IPv6 networks.


Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

HDTV over Wi-Fi

TelephonyOnline has an article speculating that wireless high-definition television will be available this summer. Celeno Communications, an Israeli start-up backed by Cisco, manufactures Wi-Fi chips. Its semiconductors can make Wi-Fi networks robust enough to deliver multiple high-definition television (HDTV) streams to PCs, TVs, or other consumer electronics devices. Celeno’s technology would deliver on a significant part of the anywhere, anytime video promise.

Celeno’s OptimizAIR technology works with existing receivers such as set-top boxes and uses the 5 GHz spectrum. OptimizAIR uses standard PHY and MAC layers along with proprietary algorithms that the company says can double the throughput of standard 802.11 Wi-Fi and increase the range of Wi-Fi signals as much as eight times. Celeno’s additions include Spatial Channel Awareness and beam-forming MIMO (multiple input, multiple output). The company says it can stream HD video 120 feet, through four brick walls and more than three floors.
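For a sense of what “multiple HDTV streams” means in practice, here is a back-of-the-envelope check. All the figures below are illustrative assumptions, not Celeno’s published numbers:

```python
# Rough check on the "multiple HDTV streams" claim -- every number here
# is an illustrative assumption, not a Celeno specification.
baseline_throughput_mbps = 25   # typical real-world 802.11a/g throughput
claimed_multiplier = 2          # Celeno says its algorithms can double it
hd_stream_mbps = 10             # rough bitrate of one compressed HD stream

effective = baseline_throughput_mbps * claimed_multiplier
streams = effective // hd_stream_mbps
print(f"{effective} Mbit/s supports ~{streams} HD streams")
```

Even with generous assumptions, doubling a mid-20s Mbit/s baseline only buys a handful of simultaneous streams, which is why per-stream bitrate matters as much as raw throughput.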


The Cost of a Data Breach Is Increasing

The annual Cost of a Data Breach survey, conducted by the Traverse City, MI-based Ponemon Institute and funded by encryption vendor PGP Corp., found that the total average costs associated with data breaches rose slightly since 2007.

The fourth annual U.S. Cost of a Data Breach Study (registration required) surveyed 43 firms that experienced a data breach and asked them to estimate their expenses. The total average cost of a data breach grew to $202 per record compromised, an increase of 2.5% since 2007 ($197 per record) and 11% compared to 2006 ($182 per record).

Depending on the size of the breach, costs could become astronomical, said Dr. Larry Ponemon, chairman and founder of The Ponemon Institute. Some in the privacy community hold the view that people will, over time, become indifferent to data breach notifications. But the Ponemon study found the costs associated with lost business continue to climb. Lost business now accounts for 69% of data breach costs, up from 65% in 2007.

“Our model suggests that people haven’t reached the point of indifference yet,” Ponemon said. “When people reach that point the cost of churn should decline, but our findings show the costs continue to creep up year by year.”

The survey also found that many firms have trouble preventing data breaches. Of the firms surveyed, 84% said they experienced more than one breach, though costs are higher for companies experiencing a breach for the first time: the per-record cost of a first-time data breach is $243, versus $192 for experienced companies.
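The per-record figures translate into real money quickly. A quick sketch using the study’s numbers and a hypothetical 100,000-record breach:

```python
# Rough cost model from the Ponemon per-record figures. The breach size
# is a hypothetical chosen for illustration, not from the study.
first_time_cost = 243    # $/record, first-time breach
experienced_cost = 192   # $/record, companies with prior breaches
records = 100_000        # hypothetical breach size

print(f"First-time breach: ${records * first_time_cost:,}")
print(f"Repeat breach:     ${records * experienced_cost:,}")
# The first-time premium alone is about $5.1M at this scale.
```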

“It’s impossible to create an environment where you cannot have a data breach,” Ponemon said. “Data breaches will probably continue even for the best of companies, but it’s how you detect it, how you respond to it, and how you manage the risk that matters most.”

Companies are fearful of malicious insiders getting access to sensitive data, and the rising tide of layoffs in the poor economy has put a focus on the insider threat. But insider negligence continued to play a major role in causing data breaches: more than 88% of all cases involved insiders mishandling data, while far fewer breaches came from malicious insiders. The Ponemon study found that breaches involving negligence cost $199 per record, versus $225 per record for malicious acts.

Not all firms are investing in additional technologies after a breach. Encryption was the first technology implemented: 44% of companies have expanded their use of encryption, the Ponemon survey found.

“One of the mistakes people make with encryption is they’ll go and encrypt a laptop and forget about thumb drives, email or FTP servers,” Ponemon said. “People are addressing some issues but not addressing the entire problem.”

Some companies turn to third-party services to handle personal information such as payment transactions and customer loyalty programs. But the Ponemon survey found that those services may increase the risk of data leakage and the cost of a breach. Breaches by outsourcers, contractors, consultants and business partners were reported by 44% of respondents, up from 40% in 2007. Third-party vendors often take more time to investigate and conduct forensic analysis, and services sometimes lose information due to poor processes or inadequate data protection technologies, Ponemon said.


IBM Resurrects Broadband over Powerline

A NetworkWorld article proves that where there is money to be taken from the Federal Government, never say never again. According to the article, IBM (IBM) has started building out broadband over powerline (BPL) networks. The company says BPL could offer broadband connectivity to 200,000 people living in rural areas.

IBM is building out the broadband over powerline networks as part of a $9.6 million deal with International Broadband Electric Communications (IBEC). In 2008, IBM inked a deal with the Alabama-based broadband provider to expand broadband access to people living in rural areas. The companies plan to deploy BPL networks to serve areas that only have access to dial-up services. The BPL will be delivered through seven electric cooperatives in Virginia, Michigan, Alabama, and Indiana. Once the networks are up, IBEC will serve as the cooperatives’ official ISP.

Broadband over Powerline in Michigan

Bob Hance, CEO of Michigan-based Midwest Energy Cooperative, says his company decided to take part in the BPL network program after a customer survey. The survey results, Mr. Hance says, were overwhelmingly in favor of signing up for the broadband program. Within a week, the cooperative had a waiting list of 4,000 customers practically pleading for service. “We were amazed by the responses to the survey — thousands of letters from citizens of our community expressing their need for broadband in order to improve everything from childhood education to the future of their family-owned small businesses,” said Mr. Hance.

“We shared nearly 600 of these letters with local legislators after we realized none of the major service providers were going to answer their calls for help. Thanks to the help of those legislators, IBM and IBEC were able to access the resources needed to help our community. In less than two weeks, we’ve already deployed 400 live miles with broadband access, or nearly 4,000 homes,” according to a February 19, 2009, press release from IBM and IBEC.

Electric companies’ benefits

IBM says that in addition to bringing broadband connectivity to under-served areas, the new BPL connectivity will benefit electric companies. The BPL rollout will increase electric companies’ ability to monitor, manage and control the reliability of their electrical grids. Currently, electric cooperatives serve roughly 12% of the population in the United States and provide about 45% of the electrical grid. The give-away American Recovery and Reinvestment Act of 2009 includes $11 billion to be spent on “smart grid” systems to monitor and manage the nation’s electrical network.

rb-

Of course, I may be overly cynical if I question the timing of the IBM announcement. It happened just 24 hours after President Obama signed the $787 billion give-away American Recovery and Reinvestment Act of 2009. In case you didn’t find the five pages entitled Division B, Title VI, Broadband Technology Opportunities Program (pages 398-402 of 407), they authorize $7.2 billion to give away, er, stimulate the expansion of broadband networks into rural and underdeveloped areas of the country.

BPL has so far not caught on as a broadband technology in the United States. As of May 2008, there were only 4,776 broadband over powerline subscribers in the country.


Terabit Ethernet

Over at The Register, there is an article heralding the coming of Terabit Ethernet. Apparently, researchers from Australia, China, and Denmark think they have opened the door to terabit-per-second Ethernet links by multiplexing 10 Gbit/s data streams and using small chalcogenide glass chips to demultiplex them at the receiving end.

In the paper, entertainingly entitled Breakthrough switching speed with an all-optical chalcogenide glass chip: 640 Gbit/s demultiplexing, the researchers describe how injecting multiple 10 Gbit/s data streams into optical cables is not a problem using existing optical technology (an electro-optic modulator per stream) and optical time-division multiplexing (OTDM).

Recombining the data streams

The obstacle has been recombining those separate data streams at the end of the link, and doing it fast enough. Despite the recent hype about 40 Gigabit Ethernet, receiving and recombining these streams is a problem at output rates higher than 40 Gbit/s, according to the research paper published in Optics Express, Vol. 17, Issue 4, on February 16th.

Until now, the recombination has been carried out using photo-detectors that can operate at up to 40 Gbit/s or so, which limits a link to just four 10 Gbit/s streams. Achieving higher data rates this way means sending more parallel data streams down the cable and demultiplexing them, that is, switching the combined signal back into separate streams, faster still. This latest research uses waveguides just 5 cm long made from chalcogenide glass, with switching speeds measured in femtoseconds (a billionth of a millionth, or a quadrillionth, of a second).
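Conceptually, OTDM interleaving and demultiplexing look like the sketch below; the hard part is doing it optically at 640 Gbit/s, not the bookkeeping:

```python
# Conceptual sketch of optical time-division multiplexing (OTDM):
# N tributary streams are interleaved symbol-by-symbol into one fast
# serial stream, and the receiver picks every Nth symbol back out.
def otdm_mux(streams):
    """Interleave equal-length tributary streams into one serial stream."""
    return [bit for slot in zip(*streams) for bit in slot]

def otdm_demux(serial, n_streams):
    """Recover tributary i by taking every n_streams-th symbol."""
    return [serial[i::n_streams] for i in range(n_streams)]

# Four toy "10 Gbit/s" tributaries, three bits each.
tributaries = [[1, 0, 1], [0, 0, 1], [1, 1, 0], [0, 1, 1]]
line = otdm_mux(tributaries)
assert otdm_demux(line, 4) == tributaries  # round-trips cleanly
```

In the optical domain the demux step is exactly this "take every Nth time slot" operation, which is why switch speed (femtoseconds for the chalcogenide chip) sets the ceiling on aggregate rate.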

The researchers conclude that their test results confirm the enormous potential of chalcogenide-based waveguides for ultrafast optical signal processing.

They believe their technology can be extended to demultiplex 100 10 Gbit/s data streams and so achieve terabit Ethernet capability. The article points out that commercialization of such technology, if it takes place at all, is of course many years away.
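The terabit claim itself is just multiplexing arithmetic; for scale, here is what 100 × 10 Gbit/s buys you (illustrative, ignoring protocol overhead):

```python
# 100 tributaries at 10 Gbit/s each -- the aggregate rate and how long
# such a link would take to move one petabyte (decimal prefixes).
streams = 100
per_stream_gbps = 10
link_tbps = streams * per_stream_gbps / 1000
print(f"Aggregate: {link_tbps} Tbit/s")

petabyte_bits = 8 * 10**15                    # 1 PB expressed in bits
seconds = petabyte_bits / (link_tbps * 10**12)
print(f"1 PB transfer: ~{seconds / 60:.0f} minutes")
```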

rb-

Seems like it’s time to add another synonym for huge to our vocabulary: petabyte, exabyte, zettabyte.

Some thoughts from Bob Metcalfe on TB Ethernet
