Tag Archive for INTC

VGA, DVI to Wane Over Next Five Years

The venerable Video Graphics Array (VGA) port and its upstart cousin, the Digital Visual Interface (DVI) port, will become extinct over the next five years. So says Brian O’Rourke, research director at NPD In-Stat, in a recent report covered by PCWorld. NPD In-Stat points out that new laptops today come with HDMI and DisplayPort for interfacing with HDTVs, monitors, and projectors.

VGA has no upgrade path, and DVI has gone through only one minor upgrade cycle; in comparison, HDMI and DisplayPort are continuously being upgraded, according to O’Rourke. More importantly, chipmakers such as Intel (INTC) and AMD (AMD) are phasing out chipset support for VGA by 2015, and AMD has announced it will also phase out chipset support for DVI by 2015. NPD In-Stat forecasts that shipments of devices with DVI, HDMI, and DisplayPort will pass 2 billion by 2015.

VGA’s long history, stretching back to its introduction in 1987, makes it difficult to envision a world without it. Still, there have been ample signs of its impending obsolescence, such as the arrival of DVI and HDMI ports on mid-to-high-end displays in recent years.

Of course, its forced retirement will mean that VGA will no longer be available as a fallback option in auditoriums and function rooms around the world. Interface adapters can help, though businesses will probably need to give greater weight to multiple-interface support when acquiring new display devices or projectors.

rb-

Of course, the move to HDMI is being driven by big media so they can implement HDCP, their draconian vision of DRM.

 

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

186Gbps Transfer Sets Real-World Speed Record

Researchers have set a new world record for data transfer. The record was set at the SuperComputing 2011 (SC11) conference in Seattle, Washington. PhysOrg.com reports the international team set the mark by moving data at 186 gigabits per second (Gbps) across 134 miles of optical network for 11 hours.

Commercially available circuits

The record-setting connection used a commercially available 100 Gbps circuit. The circuit was set up by Canada’s Advanced Research and Innovation Network (CANARIE) and BCNET, a non-profit, shared IT services organization. PhysOrg says the team reached transfer rates of 98 Gbps between the University of Victoria Computing Center in Victoria, BC, and the Washington State Convention Center in Seattle. With a simultaneous data rate of 88 Gbps in the opposite direction, the team sustained a two-way data rate of 186 Gbps between the two data centers, breaking its previous peak-rate record of 119 Gbps set in 2009.

The California Institute of Technology (Caltech) led the team of high-energy physicists, computer scientists, and network engineers from the University of Victoria, the University of Michigan, the European Organization for Nuclear Research (CERN), and other partners.

According to PhysOrg, the achievement will help establish new ways to transport increasingly large quantities of data, as more and more data traverses continents and oceans via global networks of optical fibers. The next generation of network technology will need methods like these to exploit the 40 and 100 Gbps links that will be built in the next couple of years.

“Our group and its partners are showing how massive amounts of data will be handled and transported in the future,” Harvey Newman, professor of physics and head of the high-energy physics (HEP) team, told PhysOrg. “Having these tools in our hands allows us to engage in realizable visions others do not have.”

“The 100 Gbps demonstration at SC11 is pushing the limits of network technology by showing that it is possible to transfer petascale particle physics data in a matter of hours to anywhere around the world,” Randall Sobie, a research scientist at the Institute of Particle Physics in Canada and a member of the team, told PhysOrg.

The speed record equipment was not sexy

ExtremeTech points out that the achievement is significant because the scientists used a commercially available 100 Gbps link rather than private networks under laboratory/testbed conditions. The equipment was not particularly sexy either. ExtremeTech lists Dell (DELL) servers with Intel (INTC) Sandy Bridge-based server motherboards, PCIe 2.0 and 3.0 solid-state drives, 10 and 40 Gbps LAN connections, and Force10 Z9000 and Brocade (BRCD) MLXe-4 switch-routers. The gear achieved a disk-to-disk transfer rate of 60 Gbps, around 7.5 gigabytes per second. The 186 Gbps record was a memory-to-memory transfer between the servers, with a maximum per-computer speed of 35 Gbps. Tested.com calculates that 4.42 petabytes traveled across the network during the transfer test.

rb-

So why does anyone need to move two million gigabytes per day? This is fast enough to transfer nearly 100,000 full Blu-ray discs, each with a complete movie and all the extras, in a day.
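The arithmetic holds up to a quick back-of-the-envelope check. A minimal sketch, assuming decimal units (1 GB = 10^9 bytes) and 25 GB single-layer Blu-ray discs; both assumptions are mine, not figures from the articles:

```python
# Sanity-checking the "two million gigabytes per day" claim.
# Assumptions (mine, not the article's): decimal units, 25 GB discs.
RATE_GBPS = 186               # sustained two-way rate, gigabits per second
SECONDS_PER_DAY = 86_400
BLU_RAY_GB = 25               # single-layer Blu-ray capacity, gigabytes

gb_per_day = RATE_GBPS / 8 * SECONDS_PER_DAY      # ~2,008,800 GB per day
discs_per_day = gb_per_day / BLU_RAY_GB           # ~80,000 discs per day

print(f"{gb_per_day:,.0f} GB/day, or about {discs_per_day:,.0f} Blu-ray discs")
```

At 25 GB a disc that comes to roughly 80,000 discs a day, which squares with “nearly 100,000.”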

CERN needs faster transfer rates to move the huge amounts of data coming from the Large Hadron Collider (LHC). The LHC has already generated more than 100 petabytes of data, which is processed, distributed, and analyzed at 300 computing and storage facilities at laboratories and universities around the world. Scientists believe the data volume will rise a thousand-fold as physicists crank up the collision rates and energies at the LHC in their attempt to cause the end of the world. (Not.)
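For perspective, here is a rough estimate (mine, using only the figures quoted above) of how long the LHC’s existing 100 petabytes would take to move even at the record rate:

```python
# How long would 100 PB take at the record 186 Gbps? (decimal units assumed)
LHC_DATA_PB = 100
RATE_GBPS = 186

seconds = LHC_DATA_PB * 1e15 * 8 / (RATE_GBPS * 1e9)
print(f"{seconds / 86_400:.0f} days")   # roughly 50 days of continuous transfer
```

About 50 days of nonstop transfer for data that is about to grow a thousand-fold; hence the push for ever-faster links.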

FierceTelecom predicts that service providers will deploy 100 Gbps when its price falls to double the price of 40 Gbps. They believe that will take place in 2013.

This massive amount of bandwidth running over commodity Internet pipes with off-the-shelf hardware seems to spit in the eye of current bandwidth providers, who can’t manage to deliver a 10 Mbps circuit reliably.

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Data Centers To Go Wireless

MIT’s Technology Review reports that researchers from IBM (IBM), Intel (INTC), and the University of California, Santa Barbara have come up with a way to improve data transmission in data centers. Heather Zheng, the associate professor of computer science at UCSB who led the research, says wireless is the answer to the in-rack cabling mess usually found in data centers. In their paper (PDF), the researchers say that transmitting data wirelessly within a data center would be simpler than recabling for tech titans like Google (GOOG), Facebook, or Twitter.

Line-of-sight connections

The challenge for multi-gigabit wireless in the data center has been that it requires a line-of-sight connection to be useful, and the required speeds could not be achieved in the maze of metal racks, HVAC ducts, and electrical conduits that makes up most data centers.

TR reports that the researchers’ solution is to bounce 60-gigahertz Wi-Fi signals off the ceiling, which could boost data transmission speeds by 30 percent. Stacey Higginbotham at GigaOm points out that this could result in data transfers of up to 500 gigabits per second. She says current Ethernet cables in data centers generally run at 1, 10, or maybe 40 gigabits per second.

60-gigahertz Wi-Fi for servers

Ms. Zheng and colleagues used 60-gigahertz Wi-Fi, which has bandwidth in the gigabits-per-second range and was developed for high-definition wireless communications, according to TR. However, it has its limitations, says Ms. Zheng. To maximize the bandwidth and reduce interference between signals, it needs 3D beamforming to focus the beams in a direct line of sight between endpoints. “Any obstacle larger than 2.5 millimeters can block the signal,” she says in the TR article. That figure makes sense: at 60 GHz the wavelength is only 5 millimeters, so even small objects are opaque to the beam.

One way to prevent the antennas from blocking each other would be to allow them to communicate only with their immediate neighbors, creating a type of mesh network. But that would further complicate efforts to route the data to the proper destinations, Professor Zheng told TR. Bouncing the beams off the ceiling directly to their targets not only ensures direct point-to-point communication between antennas but also reduces the chances that any two beams will cross and cause interference. “That’s very important when you have a high density of signals,” she says.

Flat metal plates placed on the ceiling offer near-perfect reflection. “You also need an absorber material on the rack to make sure the signal doesn’t bounce back up,” says Ms. Zheng.
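A rough geometric sketch of why the bounce works: reflecting off a flat ceiling plate is equivalent to a straight shot to the receiver’s mirror image, so the beam clears the rack-top clutter while adding surprisingly little path length. The 3-meter clearance below is my illustrative assumption, not a figure from the UCSB paper.

```python
# Rough geometry of the ceiling bounce (illustrative; the 3 m clearance
# is an assumption, not a figure from the paper).
import math

CEILING_CLEARANCE_M = 3.0   # assumed height of the reflector above the antennas

def bounced_path_m(rack_distance_m: float, h: float = CEILING_CLEARANCE_M) -> float:
    """Length of the reflected path between two rack-top antennas.

    Bouncing off a flat ceiling reflector is equivalent to a straight
    shot to the receiver's mirror image 2*h above it, which is how the
    beam clears obstacles between the racks.
    """
    return math.hypot(rack_distance_m, 2 * h)

for d in (5, 20, 50):  # horizontal rack separation in meters
    p = bounced_path_m(d)
    print(f"racks {d:>2} m apart: bounced path {p:5.1f} m (+{p - d:.1f} m)")
```

Even racks only 5 meters apart pay less than 3 extra meters of travel for the bounce, which is why the detour costs so little link budget.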

Wireless can add 0.5 terabytes per second

According to Technology Review, the UCSB team worked with Lei Yang from Intel Labs in Oregon and Weile Zhang of Xi’an Jiaotong University in China to simulate a 160-rack data center and see how the system might work. “Our simulation shows that wireless can add 0.5 terabytes per second,” she says.

IBM is also looking into using wireless technology in data centers, Scott Reynolds, a researcher at IBM’s T.J. Watson Research Center in Yorktown Heights, NY, who has been developing 60-gigahertz systems, told TR. “These data centers are just choked with cables,” he says. “And so every time you want to reconfigure one it’s very labor-intensive and expensive.” But one problem with turning to wireless transmission, he adds, is that “you need to have hundreds of these wireless data links operating in a data center to be useful.” Since 60-gigahertz Wi-Fi has only four data channels, it’s important to configure the beams so they don’t interfere with each other.

Mark Thiele, the EVP of data center technology at Switch Communications’ SuperNAP data center, told GigaOm that the research is worth following, as low-latency networking inside the data center can be a bottleneck today for applications that range from financial trading to moving gigantic data sets around.

TR reports Ms. Zheng and her colleagues are now working on building a prototype data center to put their solution into practice.

rb-

Having just done a small data center cleanup, I find the idea appealing. We pulled two generations of cabling, IBM Type 1 and a bunch of Cat 3 multi-pair, out from under the deck.

Ms. Higginbotham says the choice of 60 GHz for the data center is a smart move. Intel is pushing 60 GHz for consumer use under the WiGig brand (I wrote about WiGig in 2010 here), which means the chips should be cheap.

Some of the possible security issues raised by running Wi-Fi in the data center are tempered by using the 60 GHz range. She says that if you are worried about someone standing outside the data center trying to eavesdrop on the data you are transmitting, 60 GHz signals deteriorate rapidly with distance.
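A back-of-the-envelope illustration of that claim: the sketch below combines the standard free-space path-loss formula with a rough 15 dB/km oxygen-absorption figure for the 60 GHz band. The distances and the absorption constant are illustrative assumptions, not numbers from the article.

```python
# Why 60 GHz deteriorates so fast compared with ordinary 2.4 GHz Wi-Fi.
# Free-space path loss plus a rough oxygen-absorption term (assumption:
# ~15 dB/km, a commonly cited figure for the 60 GHz band).
import math

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """FSPL(dB) = 20*log10(d) + 20*log10(f) - 147.55 (d in meters, f in Hz)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

O2_DB_PER_KM = 15  # approximate atmospheric oxygen absorption at 60 GHz

for d in (10, 100):  # meters
    loss_24 = free_space_path_loss_db(d, 2.4e9)
    loss_60 = free_space_path_loss_db(d, 60e9) + O2_DB_PER_KM * d / 1000
    print(f"{d:>3} m: 2.4 GHz ~{loss_24:.0f} dB, 60 GHz ~{loss_60:.0f} dB")
```

At any given distance the 60 GHz signal arrives roughly 28 dB weaker, several hundred times less power, before walls even enter the picture, so an eavesdropper outside the building has little to work with.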

Of course, change is hard: data center people are going to have to learn wireless, top-of-rack switches would have to get radio cards installed, the reflective panels would have to be mounted on the data center ceiling, and the servers would need a signal-absorbing surface so the Wi-Fi signals don’t continually bounce around the data center.

In case you are confused about WiGig, Wi-Fi, and the IEEE, EETimes says, “WiGig forged a deal with the Wi-Fi Alliance so its 60 GHz approach can be certified as a future generation of Wi-Fi. The group has aligned its technical approach with the existing IEEE 802.11ad standards effort on 60 GHz.”

Now if only they could do wireless electricity…

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Too Late for Cisco to Take on Apple?

Chronically underperforming Cisco is finally getting into the tablet market dominated by the iPad. Cisco (CSCO) will release the Cius in July. Technology Review reports that Cisco’s Cius is bulkier than the iPad and has a smaller screen (7 inches, compared to the iPad’s 9.7), but it packs a number of tricks designed to woo business users.

Tested.com says the Cius can connect to a Cisco phone network to port calls from a desk number to the tablet, making a user’s desk number mobile. This enables a person to make and receive voice and video calls anywhere. The tablet features HD-quality cameras front and back and can be used with a Bluetooth headset for more private calling.

The tablet can also be used as a desktop videoconferencing device when docked on a special desktop phone, and it can smoothly switch between a Wi-Fi and a cellular network connection. The dock supports a keyboard and mouse, so the Cius really can serve as a little computer. “It can replace my desktop operating system,” Tom Puorro, senior director for Cisco’s collaboration technologies, told Technology Review.

Tested.com says the tablet runs Google’s (GOOG) Android 2.2 Froyo on an Intel (INTC) Z650 1.6 GHz Atom chip and weighs 1.5 pounds despite its small 7-inch screen. Tested.com speculates that Cisco has heavily modified the open-source Android to support business-centric features like multi-person videoconferencing and virtual desktop software.

Engadget has a video demo of the product here.

The fully skinned Android tablet seems like a relic of 2010 thanks to the arrival of Honeycomb, a version of Android actually built for tablets, which the Cius isn’t running. Tested.com says Cisco plans to upgrade the tablet to Android Ice Cream Sandwich eventually, but for now it’s slumming around with version 2.2 (Froyo). Cisco probably spent too much time developing its custom skin and software to upgrade to Android version 2.3 (Gingerbread) or version 3.0 (Honeycomb).

Cisco has also created its own app store, AppHQ, which carries only apps deemed stable and secure by Cisco and is segregated from the Android app market. This gives the IT department greater control over what a Cius user can do. IT managers can shut down access to the Android app market to protect a company from malicious apps, according to Technology Review. Companies can even create their own app store within AppHQ and limit employees to certain applications, or to apps built in-house.

Cisco has demonstrated a Cius virtual desktop that runs in the cloud and makes use of a dedicated chip in the tablet that encrypts all of its data, says Technology Review.

A Wi-Fi-only version of the tablet will be available worldwide from July 31 at an estimated price of $750. Cisco will sell it along with related services and infrastructure, so the cost to businesses will vary, and could be as low as $650. AT&T and Verizon will each offer versions for their 3G and 4G networks this fall.

rb-

I wrote about the Cius here and don’t think it is an Apple killer. Cisco will give its big partners a deal, but the Cius also depends on an existing Cisco telephony infrastructure. I don’t see the Cius fitting into the Cisco product line-up since they jettisoned the Flip and are reportedly shopping Linksys and WebEx. The built-in virtual desktop looks pretty cool, though.

What do you think?

Can the Cisco Cius knock off the Apple iPad?

Does the Cius make sense for the non-consumer Cisco?

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Tech World Financial Results

Foxconn, Microsoft, and Intel just reported financial results, and things look different. Apple is more profitable than Microsoft, MSFT’s biggest revenue gains came from toys, and Intel says server growth for the mobile web is driving its growth.

Foxconn financial results

The world’s manufacturer of all things tech recently posted its latest earnings report. TechEye points out that despite inconveniences like having to pay workers a slightly larger pittance and give them better working conditions, Foxconn announced a 53% rise in consolidated revenues for 2010. Terry Gou‘s company’s gross profit for the twelve months increased by 58.5%, to NT$100.9 billion from NT$63.6 billion in 2009.

Digitimes says the figures all beat market watchers’ forecasts. Market watchers originally expected rising labor and component costs to seriously impact Foxconn’s profitability in 2010, but the company’s strong revenues last year still boosted its overall profitability, despite its gross margin dropping 1.37 percentage points from the 2009 level to 8.15%.

Microsoft

Microsoft’s (MSFT) profits grew 31% during its fiscal third quarter, which ended March 31, 2011. During this period, the software giant racked up $5.23 billion in profits, while revenues reached $16.43 billion, a 13 percent climb. These profits came thanks to strong performance from some nontraditional divisions.

MSFT’s Entertainment and Devices Division provided the biggest revenue gain. The home of the Xbox and Kinect, Ballmer’s boys’ motion-sensing game controller, grew sales by 60 percent to $1.94 billion. This is the smallest of Microsoft’s product divisions, so it generated only 11.8 percent of overall sales, according to CNET. Kinect drove sales, selling 2.4 million units in the quarter, according to the New York Times. CNET reports the company sold 2.7 million Xbox 360 consoles in the quarter, a 79 percent increase from last year.

Microsoft’s second-largest revenue generator this quarter was the Windows and Windows Live Division, which had revenue of $4.45 billion. This represents a 4 percent decrease from last year’s $4.65 billion, and net income fell 10 percent. According to CNET, Redmond says Windows 7 is the fastest-selling operating system in history, with 350 million licenses sold.

The Server and Tools Division saw the next-best performance. The home of Windows Server had sales of $4.1 billion, up 11 percent from a year ago, and profit for the unit climbed 12 percent. CNET says business adoption of Windows Server, SQL Server, and System Center lifted the division’s results.

At the Business Division, home of Office, revenue grew 21 percent from last year, to $5.25 billion, according to the NYT, which notes the company’s Office software has no significant competition. Office 2010 is the fastest-selling version of Office ever, Microsoft said, with businesses deploying the software at five times the rate of its predecessor.

Microsoft’s smallest revenue generator, the Online Services Division, home of Bing, gained 14 percent in revenue, to $648 million from $566 million. TechEye reports that Bing increased its share of the search market, but Microsoft spent so much on promotion that the division saw operating losses of over $700 million. Ballmer’s partners are not happy with these results. Two years ago, Microsoft and Yahoo inked a deal to use MSFT technologies for Yahoo’s search to help both fight off rival Google. However, Yahoo’s chief executive, Carol A. Bartz, said the partnership had not yielded the expected financial results for Yahoo and that technical glitches by Microsoft were to blame, according to the NYT.
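Pulling the division figures quoted above into one place makes the mix easier to see. The revenues are the ones reported in this post; the shares are simply computed against the $16.43 billion total (my quick sketch, not CNET’s or the NYT’s math):

```python
# Microsoft fiscal Q3 2011 division revenues, as quoted above (billions USD).
divisions = {
    "Business (Office)":         5.25,
    "Windows & Windows Live":    4.45,
    "Server and Tools":          4.10,
    "Entertainment and Devices": 1.94,
    "Online Services (Bing)":    0.648,
}
TOTAL_REVENUE = 16.43  # reported total quarterly revenue, billions USD

for name, revenue in divisions.items():
    share = revenue / TOTAL_REVENUE
    print(f"{name:<28} ${revenue:>5.2f}B  {share:5.1%}")
# Entertainment and Devices works out to ~11.8%, matching the figure above.
```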

Intel

Chip giant Intel (INTC) has finally found a way into the mobile market. After years of trying to get its Atom chips into mobile devices, Intel is instead profiting from the demand for servers to feed those devices. Intel Chief Financial Officer Stacy Smith told Bloomberg that the spread of mobile devices fuels “explosive” growth for processors used in data centers. “There’s a significant, maybe even an insatiable, demand driver for more and more performance and computing power that’s moving into the cloud,” Mr. Smith told Bloomberg. “What gets lost is the explosive growth of all of these devices connecting to the Internet is driving a $10 billion dollar server business.” Intel recently reported that its second-quarter revenue will be $1 billion more than analysts had estimated, in part driven by the data center boom.

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.