Tag Archive for Internet service provider

Open Source Wireless for Detroit

Detroit is the proving ground for a new open source wireless network technology called Commotion. According to FierceWireless, Commotion is a wireless mesh-networking platform being deployed across Detroit by the New America Foundation’s Open Technology Institute (OTI).

The OTI has completed the first phase of construction of the wireless testbed in Detroit’s Cass Corridor, where Commotion connects low-income apartment buildings, community centers, churches, and businesses. FierceWireless says the prototype open-source network allows neighbors to communicate with one another and can potentially distribute Internet access to local residents. “The Detroit wireless network … will put control of the Internet into the hands of its users,” said OTI Director Sascha Meinrath. “The partners OTI works with in Detroit are not only self-provisioning connectivity for local residents, they’re proofing out technologies that support free, safe, ubiquitous communications around the globe.”

Stacey Higginbotham at GigaOM reports that the new stack includes technologies such as Serval, which enables handsets to recognize the Commotion network; Tor, a program that can hide where a user is coming from; and OpenBTS, an open source base station that runs software that can interface between VoIP networks and GSM radios.

The OTI release on the news notes that more than half of Detroit residents do not have Internet service at home due to the cost of service and a lack of investment in infrastructure by Internet service corporations.

GigaOM also notes that the public release of Commotion follows a funding round for a company called Open Garden, which is pursuing similar mesh-network-creation software. Meanwhile, Range Networks has been formed to support the OpenBTS standard and deliver a “network in a box” that runs the OpenBTS software and allows users to make voice calls anywhere in the world.

rb-

Am I the only one who sees the irony that the Feds are using Detroit as a proving ground for technologies designed to help take down dictatorships? According to the OTI press release, the U.S. Department of State is funding the Detroit Commotion project to test the potential of the technology in third-world places like Egypt or Syria or Detroit.

Don’t worry, we are the government and we are here to help.

Do you think Open Source Wireless for Detroit will work?


Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Google, Facebook and Yahoo Test IPv6

A global trial of IPv6 is scheduled for June 8, 2011. Google (GOOG), Facebook, Yahoo (YHOO), and Akamai (AKAM) will reportedly take part in the IPv6 “test flight.” The Internet Society, a non-profit group that educates people and companies about Internet issues, is coordinating World IPv6 Day. Those who sign up for the test will make their pages available via IPv6 for 24 hours to help iron out problems created by the switch to the new addressing scheme.

IPv6 good news

“By providing an opportunity for the Internet industry to collaborate to test IPv6 readiness, we expect to lay the groundwork for large-scale IPv6 adoption and help make IPv6 ready for prime time,” said Leslie Daigle, chief Internet technology officer at the Internet Society, in a statement.

“The good news is that internet users don’t need to do anything special to prepare for World IPv6 Day,” said Lorenzo Colitti, a network engineer at Google in a blog post. “Our current measurements suggest that the majority (99.95%) of users will be unaffected. However, in rare cases, users may experience connectivity problems, often due to misconfigured or misbehaving home network devices.”

According to Google, Vint Cerf, the program manager for the ARPA Internet research project, chose a 32-bit address format for an experiment in packet network interconnection in 1977. For more than 30 years, 32-bit addresses have served us well, but now the Internet is running out of space. IPv6 is the only long-term solution, but it has not yet been widely deployed. In November 2010, Mr. Cerf, one of the driving forces behind Google’s IPv6 efforts, warned that the net faced “turbulent times” if it did not move quickly to adopt IPv6.
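For a sense of the jump from 32-bit to 128-bit addresses, here is a quick sketch using Python's standard ipaddress module (the example address is from the documentation-reserved 2001:db8::/32 range):

```python
import ipaddress

ipv4_space = 2 ** 32    # ~4.3 billion addresses
ipv6_space = 2 ** 128   # ~3.4 x 10^38 addresses

print(f"IPv4 space: {ipv4_space:,}")
print(f"IPv6 space is 2^96 (~7.9e28) times larger: {ipv6_space // ipv4_space}")

# An IPv6 address is written as eight 16-bit hex groups; "::" compresses
# runs of zero groups, and .exploded writes them back out in full.
addr = ipaddress.IPv6Address("2001:db8::1")
print(addr.exploded)  # 2001:0db8:0000:0000:0000:0000:0000:0001
```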

rb-

It will be interesting to see the number of participants. This all may just blow over because not enough of the right people in organizations see the need. I spoke to my boss about this a while ago, and I think one phone call has been made to our upstream ISP to see what they are doing. We probably won’t deal with it until there is a need for a point-to-point IP video conference with China or something; when it won’t work, then it is a crisis that gets addressed.

Does your organization have a plan for IPv6 migration?


2009 SPAM results

PC World chronicles how analysts at FireEye, a California-based security company, executed a plan to shut down the Mega-D (or Ozdok) botnet in early November 2009. At one point, the Mega-D botnet reportedly accounted for 32 percent of all spam. To shut down this threat, Atif Mushtaq and two FireEye colleagues went after Mega-D’s command infrastructure.

According to the article, the botnet’s command infrastructure was its weak point. The Mega-D bots infesting PCs were directed by online command and control (C&C) servers throughout the world. The researchers found that if the bots could be separated from their controllers, the undirected bots would sit idle on the PCs, not delivering their malware. Mushtaq found that every Mega-D bot had been assigned a list of destinations to try if it couldn’t reach its primary command server. Taking down Mega-D would need a carefully coordinated attack.

To coordinate the attack, the FireEye team contacted the Internet Service Providers (ISPs) that hosted Mega-D control servers. Mushtaq’s research showed that most of the Mega-D C&C servers were based in the United States, with others in Turkey and Israel. The FireEye team received cooperation from the U.S.-based ISPs but not the overseas ISPs. The FireEye team took down the U.S.-based C&C servers.

Since the ISPs in Israel and Turkey refused to cooperate, PC World reports that Mushtaq and company contacted the domain-name registrars holding records for the domain names that Mega-D used for its control servers. The registrars collaborated with FireEye to point Mega-D’s existing domain names to nowhere. This cut off the botnet’s pool of domain names that the bots would use to reach the overseas Mega-D C&C servers.

As the last step, PC World says that FireEye and the registrars worked to claim spare domain names that Mega-D’s controllers listed in the bots’ programming and pointed them to “sinkholes” (servers FireEye had set up to sit quietly and log efforts by Mega-D bots to check in for orders). Using those logs, FireEye estimated that the botnet consisted of about 250,000 Mega-D-infected computers.
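The counting step can be sketched simply: deduplicate the source addresses seen checking in at the sinkhole. The log format below is hypothetical, not FireEye's actual data:

```python
# Sketch (not FireEye's actual tooling): estimate botnet size from
# sinkhole logs by counting unique source IPs that checked in for orders.
# Hypothetical log format: "timestamp,source_ip,requested_domain".

def estimate_bot_count(log_lines):
    """Count distinct source IPs seen in sinkhole check-in logs."""
    unique_ips = set()
    for line in log_lines:
        parts = line.strip().split(",")
        if len(parts) >= 2:
            unique_ips.add(parts[1])
    return len(unique_ips)

sample_log = [
    "2009-11-09T12:00:01,198.51.100.7,megad-c2-a.example",
    "2009-11-09T12:00:02,203.0.113.9,megad-c2-b.example",
    "2009-11-09T12:00:05,198.51.100.7,megad-c2-a.example",  # repeat check-in
]
print(estimate_bot_count(sample_log))  # 2 distinct bots
```

In practice a bot behind a changing dynamic IP shows up as multiple addresses, so a raw unique-IP count is only an estimate.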

MessageLabs reports that Mega-D had “consistently been in the top 10 spam bots” for the previous year. The botnet’s output fluctuated from day to day, but on November 1, Mega-D accounted for 11.8 percent of all spam that MessageLabs saw. Three days after FireEye’s operation, Mega-D’s share of Internet spam dropped to less than 0.1 percent, MessageLabs states.

Mushtaq recognizes that FireEye’s successful offensive against Mega-D was just one battle in the war on malware. The criminals behind Mega-D may try to revive their botnet, he says, or they may abandon it and create a new one. But other botnets continue to thrive. “FireEye did have a major victory,” says Joe Stewart, director of malware research with SecureWorks, in the PC World article. “The question is, will it have a long-term impact?”

Mushtaq says that FireEye is sharing its method with domestic and international law enforcement. “We’re definitely looking to do this again,” Mushtaq says. “We want to show the bad guys that we’re not sleeping.”

rb-

The takedown of Mega-D by FireEye led to a noticeable decrease in the level of SPAM I observed. During the 10 months before the Mega-D takedown, the daily average of SPAM messages (DASM) I received was 49. After the November 2009 takedown, the DASM rate dropped to 33. A closer look at the numbers reveals that the November 2009 DASM was 35 and the December DASM was 29.
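Those averages work out to roughly a one-third drop in my inbox. A quick sketch of the arithmetic, using the figures above:

```python
# DASM = daily average of SPAM messages over an observation window.
# Percent drop from one average to another quantifies the takedown's effect.

def percent_drop(before, after):
    """Percentage decrease from one daily average to another."""
    return 100 * (before - after) / before

print(round(percent_drop(49, 33), 1))  # pre- vs. post-takedown: 32.7 (% drop)
print(round(percent_drop(35, 29), 1))  # November vs. December: 17.1 (% drop)
```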


The overall DASM trend line for 2009 was down. To keep the trend going down, firms should investigate the Shadowserver ASN & Netblock Alerting & Reporting Service. This free reporting service is designed for organizations that directly own or control network space. The service provides reports detailing detected malicious activity to aid in their detection and mitigation programs. Shadowserver has provided this service for over two years and now generates over 4,000 reports nightly. The reporting service monitors and alerts on the following activity:

  • Detected Botnet Command and Control servers
  • Infected systems (drones)
  • DDoS attacks (source and victim)
  • Scans
  • Clickfraud
  • Compromised hosts
  • Proxies
  • Spam relays
  • Malicious software droppers and other related information.

Detected malicious activity on a subscriber’s network is flagged and included in daily summary reports detailing the previous 24 hours of activity. These customized reports are made freely available to the responsible network operators as a subscription service.
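Consuming a daily report like this usually means tallying events by category so the noisiest problems get triaged first. A sketch with a hypothetical CSV layout (real Shadowserver reports have their own per-report-type schemas):

```python
import csv
import io

# Hypothetical daily-report excerpt; column names are illustrative only.
report_csv = """timestamp,ip,category
2009-12-01 03:12:44,192.0.2.10,botnet_drone
2009-12-01 04:02:10,192.0.2.11,spam_relay
2009-12-01 05:40:31,192.0.2.10,botnet_drone
"""

# Tally events per category for the previous 24 hours of activity.
counts = {}
for row in csv.DictReader(io.StringIO(report_csv)):
    counts[row["category"]] = counts.get(row["category"], 0) + 1

print(counts)  # {'botnet_drone': 2, 'spam_relay': 1}
```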

 


Copper Sexy Again

Thanks to the FCC’s “100 Squared” plan for 100 million U.S. homes to have affordable access to download speeds of at least 100 Mbps and actual upload speeds of at least 50 Mbps, there seems to be some renewed interest in copper. Both Bell Labs and AT&T have announced experiments to extend the useful life of copper infrastructure.

According to Broadband Reports, Bell Labs, Alcatel-Lucent’s research arm, has achieved speeds of 800 Mbps using traditional DSL lines. Reuters says that AT&T is going to trial 80 Mbps DSL this month. Broadband Reports says that Alcatel-Lucent (ALU) achieved the speeds during lab tests by combining three technologies.

First, AlcaLu uses a phantom circuit, a technique developed in 1886 to create virtual analog phone lines. The firm adds a second, supplementary pair of wires and creates a third “phantom” channel across the two physical pairs, supplementing the single pair common with DSL.

In “phantom mode,” a digital signal is normally transmitted through two wires twisted together, one positive and the other negative. John J. Carty, an electrical engineer, telephony pioneer, and future president of AT&T, realized that it is possible to send a third signal on top of four wires separated into two twisted pairs. The negative half of this “phantom” connection is sent down one twisted pair (which is already carrying a conventional signal), and the positive half is sent down the other twisted pair. At the destination, analog processors extract all three signals, two real and one “phantom,” from the two pairs.
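Carty's trick can be sketched numerically: each pair carries a differential signal (the difference between its two wires), while the phantom channel rides on the common mode between the two pairs. A toy model with arbitrary sample values, not real line voltages:

```python
# Toy model of "phantom mode": two twisted pairs each carry a differential
# signal; a third (phantom) signal is split across the pairs' common modes.

def transmit(s1, s2, phantom):
    """Return the four wire voltages for two pairs plus a phantom channel."""
    a1, a2 = s1 / 2 + phantom / 2, -s1 / 2 + phantom / 2   # pair A wires
    b1, b2 = s2 / 2 - phantom / 2, -s2 / 2 - phantom / 2   # pair B wires
    return a1, a2, b1, b2

def receive(a1, a2, b1, b2):
    """Recover both differential signals and the phantom channel."""
    s1 = a1 - a2                              # difference within pair A
    s2 = b1 - b2                              # difference within pair B
    phantom = (a1 + a2) / 2 - (b1 + b2) / 2   # common mode between pairs
    return s1, s2, phantom

# All three signals come back out of only four wires:
print(receive(*transmit(1.0, -0.5, 0.25)))  # (1.0, -0.5, 0.25)
```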

The second component is bonding, which treats multiple lines as if they were a single cable, increasing the speed of DSL broadband connections by a multiple almost equal to the number of lines involved. Finally, vectoring is used on the third channel for error correction, canceling noise or “crosstalk” between adjacent copper wire pairs.
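The idea behind vectoring can be sketched as pre-subtracting known crosstalk at the transmitter; the coupling coefficient below is a made-up constant, not a measured line value:

```python
# Toy sketch of vectoring: if the crosstalk coupling between two pairs is
# known, the transmitter can pre-subtract each line's interference onto
# the other before sending.

COUPLING = 0.1  # fraction of each line's signal leaking into its neighbor

def channel(x1, x2):
    """Two copper lines with mutual crosstalk."""
    return x1 + COUPLING * x2, x2 + COUPLING * x1

def precode(s1, s2):
    """Pre-subtract the interference the channel will add."""
    return s1 - COUPLING * s2, s2 - COUPLING * s1

# Without vectoring, each receiver sees its neighbor's leakage:
print(channel(1.0, 1.0))           # (1.1, 1.1)

# With vectoring, the leakage is (to first order) cancelled; exact
# cancellation would require inverting the full coupling matrix:
y1, y2 = channel(*precode(1.0, 1.0))
print(round(y1, 3), round(y2, 3))  # 0.99 0.99
```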

Stefaan Vanhastel, Director of Product Marketing for Alcatel-Lucent Wireline Networks, told Broadband Reports that “by using vectoring, which is a noise-canceling technology to eliminate noise,” they can improve the performance of the copper lines. The lab tests showed that the technology is capable of offering 100 Mbps over 1,000 meters (3,280 feet). Alcatel-Lucent doesn’t believe it will roll out the combination technology until after 2011.

Despite the focus on wireless broadband over at AT&T (T), the carrier is trying to push the boundaries of its existing copper wireline plant to deliver broadband services. According to Reuters, beginning this month, AT&T is going to trial 80 Mbps DSL, surpassing its current top speed of 24 Mbps. AT&T’s Seth Bloom told Broadband Reports the trial will look at “pair bonding, vectoring, (and) spectrum management,” which “can be done very inexpensively and on a per-user basis.” AT&T’s experiment will be limited by the quality of the existing copper facilities and the end-user’s distance from either the central office (CO) or the remote terminal (RT) cabinet. The U-verse end-user won’t get all that bandwidth because the line also has to carry bandwidth-hungry HDTV signals.

An interesting wrinkle in AT&T’s 80 Mbps test is that Alcatel-Lucent, which is demonstrating 300 Mbps, supplies the VDSL2 access gear to AT&T but hasn’t yet shipped access gear that can bond VDSL2 because CPE vendors haven’t done so, an official said. “We will have VDSL2 bonding-ready equipment going into production soon, and we will add the bonding software to the equipment once the CPE for VDSL2 bonding is available,” according to ConnectedPlanet.

rb-

Clearly, the incumbent telcos are feeling the pressure from the cablecos’ DOCSIS 3.0 rollouts. The Alcatel-Lucent 300 Mbps VDSL2 technology should be scooped up by incumbent telcos who need to squeeze a couple more years out of their thousands of miles of copper last-mile wireline and keep a hand in the FCC’s 100 Mbps broadband plan.

In the enterprise space, the improved DSL technology may cut into the optical cable business by weakening the long-term cost-effectiveness argument for private fiber. That is, of course, if you can get the service: all of the “improved DSL” services need more copper pairs, which may not be available. This, of course, has to be balanced against increasing your exposure to AT&T.


Broadband is a Civil Right?

According to former Federal Communications Commission commissioner Michael Copps, Americans’ civil rights should be expanded to include broadband access. Mr. Copps stated in a July 21, 2008 speech at Carnegie Mellon University: “No matter who you are, or where you live, or how much money you make … you will need, and you are entitled to have these tools (broadband Internet) available to you, I think, as a civil right.” (download from http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-283886A1.pdf)

rb-

Ubiquitous broadband is a good thing, perhaps even a lofty political goal and an economic driver. However, I have a hard time figuring out where to place the freedom to surf. I wonder where Mr. Copps will place this new civil right; maybe it will be life, liberty, and the pursuit of broadband access (sorry, Mr. Jefferson, your ideas are just so 18th century).

 
