Featured Posts


Master Email for Business Efficiency

Discover how mastering email communication can boost business efficiency, avoid common pitfalls, and ensure secure, respectful online interactions.

Turkey Revenge

The turkeys are pissed this Thanksgiving, and they are seeking revenge.

Germs Infest 60% of America’s Phones

60% of Americans sleep with their phones, harboring germs. Cleaning regularly with UV sanitizer or alcohol wipes can help keep your phone and bed germ-free.

Smartphone Sanitizing: A Practical Guide

Securely erase personal data from your old smartphone before recycling. Protect your identity from hackers—easy steps to follow.

Why Soft Skills Matter in Today’s Job Market

Boost your career with essential soft skills like communication, teamwork, and emotional intelligence. Learn why they’re crucial for workplace success.

Server Management Security Hole

Dan Farmer, security researcher and creator of the SATAN vulnerability scanner, teamed up with HD Moore, chief research officer at Rapid7 and lead architect of the Metasploit penetration testing framework, and found 230,000 publicly accessible Out-Of-Band management interfaces on the Internet. Many of these systems were running software that dates back to 2001.

Out-Of-Band server management

According to PCWorld, Out-Of-Band (OOB) management interfaces expose servers to the Internet through microcontrollers embedded in the motherboard that run independently of the main OS and provide monitoring and administration functions. These microcontrollers are called Baseboard Management Controllers (BMCs). BMCs are part of the Intelligent Platform Management Interface (IPMI), a standardized interface made up of a variety of sensors and controllers that allows administrators to manage servers remotely when they’re shut down or unresponsive but still connected to the power supply.

BMCs are embedded systems that have their own firmware, usually based on Linux. IPMI is an OS-agnostic and pervasive protocol, initially developed by Intel (INTC), Dell (DELL), HP (HPQ), and other large equipment manufacturers to manage OOB, or Lights-Out, communication.

Rebranded by OEM manufacturers

Pure IPMI is usually implemented as a network service that runs on UDP port 623. It can either piggyback on the server’s network port or use a dedicated Ethernet port. Vendors take IPMI as a base, add a variety of services like mail, SNMP, and Web GUIs, and then rebrand the new package:

  • Dell has iDRAC,
  • Hewlett-Packard has iLO,
  • IBM (IBM) has IMM2.

It’s also used as the engine for higher-level protocols, some of which are put out by the DMTF (WBEM, CIM, etc.), the OpenStack Foundation, and others. According to the research paper, IPMI is particularly popular for large-scale provisioning, roll-outs, remote troubleshooting, and console access.
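For a sense of how a scan like Farmer and Moore’s finds these interfaces, here is a minimal Python sketch of an IPMI discovery probe. It sends the standard RMCP/ASF presence ping to UDP port 623, the handshake that IPMI-aware devices answer; the target address is a hypothetical example, and a probe like this should only be aimed at equipment you own.

```python
import socket

# RMCP/ASF "presence ping" -- the standard probe for asking a host whether
# it speaks IPMI. Byte layout (per the ASF spec): RMCP header (version
# 0x06, reserved, sequence number 0xff, message class 0x06 = ASF), then
# the ASF payload: IANA enterprise number 4542, message type 0x80
# (presence ping), a message tag, a reserved byte, and a zero data length.
ASF_PING = bytes([
    0x06, 0x00, 0xff, 0x06,   # RMCP header
    0x00, 0x00, 0x11, 0xbe,   # ASF IANA enterprise number (4542)
    0x80, 0x00, 0x00, 0x00,   # presence ping, tag 0, reserved, length 0
])

def probe_ipmi(host: str, timeout: float = 2.0) -> bool:
    """Return True if `host` answers an ASF presence ping on UDP port 623."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(ASF_PING, (host, 623))
        try:
            data, _addr = sock.recvfrom(512)
        except socket.timeout:
            return False
    # A well-formed RMCP "pong" echoes the RMCP version byte (0x06).
    return len(data) > 0 and data[0] == 0x06

if __name__ == "__main__":
    target = "192.0.2.10"  # hypothetical BMC address; probe only gear you own
    state = "IPMI/RMCP responding" if probe_ipmi(target) else "no answer"
    print(f"{target}: {state}")
```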

Parasitic oversight

The parasitic BMC has near-complete control and oversight of the server it rides upon, including its memory, networking, and storage media. It cannot be truly turned off. Instead, it runs continuously unless the power cord is completely pulled. An owner can only temporarily disable outside interaction, short of taking a hammer to the motherboard.

Security researchers have warned in the past that most IPMI implementations suffer from architectural insecurities and other vulnerabilities. These can be exploited to gain administrative access to BMCs. If attackers control the BMC, they can mount attacks against the server’s OS as well as other servers in the same management group.

Dan Farmer stated in his recent paper, Sold Down the River (PDF):

For over a decade major server manufacturers have harmed their customers by shipping servers that are vulnerable by default, with a management protocol that is insecure by design, and with little to no documentation about how to make things better … These vendors have not only gone out of their way to make their offerings difficult to understand or audit but also neglected to supply any substantial defense tools or helpful security controls.

Old BMC software

Mr. Farmer and Mr. Moore ran scans on the Internet in May 2014 and identified 230,000 publicly accessible BMCs. A deeper analysis of the at-risk systems revealed:

  • 46.8% of them were running IPMI version 1.5, which dates back to 2001,
  • 53.2% were running IPMI version 2.0, which was released in 2004.

The researchers reported that nearly all the systems running IPMI v1.5 were configured so that all accounts could be logged into without authentication: “… you can login to pretty much any older IPMI system without an account or a password.” Mr. Farmer explains this set-up can grant an attacker privileged access: “… in most cases, they grant administrative access, and even when they don’t, the mere ability to execute any kind of commands without authentication is a bad thing.”

The team found that IPMI v2.0, which includes cryptographic protection, has its own security issues. For example, the first cipher option, known as cipher zero, provides no authentication, integrity, or confidentiality protection, Farmer said. A valid user name is required to log in, but no password. The researchers found that around 60% of the publicly accessible BMCs running IPMI version 2 had this vulnerability.

Server management issues in IPMI 2.0

Another serious issue introduced by IPMI 2.0 stems from its RAKP key-exchange protocol that’s used when negotiating secure connections. The protocol allows an anonymous user to obtain password hashes associated with any accounts on the BMC, as long as the account names are known.

“This is an astonishingly bad design, because it allows an attacker to grab your password’s hash and do offline password cracking with as many resources as desired to throw at the problem,” Farmer said.

The analysis showed that 83% of the identified BMCs were vulnerable to this issue. A test with the brute-force password-guessing application John the Ripper, using a modest 4.7 million-word dictionary, successfully cracked 30% of the BMC passwords. Farmer calculated that between 72.8% and 92.5% of BMCs running IPMI 2.0, depending on password-cracking success rate, had authentication issues and were vulnerable to unauthorized access.
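To see why the RAKP flaw is so damaging, here is a minimal Python sketch of the offline step, under the assumption that the captured RAKP message 2 carries an HMAC-SHA1 keyed with the user’s password over the session fields; the field bytes and wordlist are hypothetical stand-ins for what a real exchange would yield, and John the Ripper automates exactly this loop at far higher speed.

```python
import hashlib
import hmac

# In the RAKP exchange, the BMC's second message carries
# HMAC-SHA1(password, session_fields): a hash over the session IDs, both
# nonces, the BMC GUID, and the role/username, keyed with the user's
# password. Because the BMC hands this out to anyone who asks for it,
# cracking proceeds entirely offline at whatever speed the attacker can buy.
def crack_rakp_hash(captured_hmac, session_fields, wordlist):
    """Try each candidate password against a captured RAKP message 2 HMAC."""
    for candidate in wordlist:
        digest = hmac.new(candidate.encode(), session_fields,
                          hashlib.sha1).digest()
        if hmac.compare_digest(digest, captured_hmac):
            return candidate
    return None

# Hypothetical stand-ins: a real attack uses the bytes captured from the
# RAKP exchange with the target BMC.
fields = b"\x01" * 58
captured = hmac.new(b"changeme", fields, hashlib.sha1).digest()
print(crack_rakp_hash(captured, fields, ["admin", "password", "changeme"]))
# prints: changeme
```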

Canary in the coal mine

“While a quarter of a million BMCs is only a tiny sliver of the total computing power in the world, it’s still an important indicator as a kind of canary in the coal mine,” Mr. Farmer warns. He predicts that BMCs behind corporate firewalls share the same issues: “While management systems are often not directly assailable from the outside, they’re often left open once the outer thin hard candy shell of an organization is breached.”

The research paper includes recommendations for server administrators on how to mitigate some of the identified issues and better secure their BMCs. But the researchers conclude that ultimately, the problem of insecure IPMI implementations will linger for a long time. Mr. Farmer closes with a rant:

Many of these problems would have been easy to fix if the IPMI protocol had undergone a serious security review or if the developers of modern BMCs had spent a little more effort in hardening their products and giving their customers the tools to secure their servers … At this point, it is far too late to effect meaningful change. The sheer number of servers that include a vulnerable BMC will guarantee that IPMI vulnerabilities and insecure configurations will continue to be a problem for years to come.

rb-
They told us so, about a year ago.

Defense-in-depth: block UDP port 623 at the perimeter – yes, all of them – and on the endpoints. You are using personal firewalls, right?

Disable or remove the default vendor user names and pick a strong UID and PWD.

Least privilege: the researchers warn that anyone who has administrative privileges on a BMC’s server has administrative control over it and may disable or enable IPMI, add or remove accounts, change the IP address, etc. – all without any authentication to the BMC.


Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Autotask Sold

Redmond Channel Partner is reporting that Vista Equity Partners is acquiring Autotask Corp. RCP says Autotask is one of the most significant vendors for managed services providers. The article reports the private equity firm is buying Autotask for an undisclosed sum. Vista’s $11.5-billion portfolio includes Aptean, Websense, and at least 20 vertically focused technology companies. The announcement came during Autotask’s 2014 Community Live! show in Miami.

Mark Cattini, president and CEO of Autotask, issued a statement to RCP which says all the proper things about aggressively improving Autotask’s solutions for customers.

We are devoted to our clients’ ongoing success and are confident that our partnership with Vista will drive innovation and growth and deliver dynamic solutions as the traditional IT landscape evolves.

Alan Cline, principal at Vista Equity Partners, indicated that Autotask’s focus on IT service providers as core customers would continue. He also claimed the firm would help improve the product, saying in a statement to RCP that Vista will “work with the Autotask team to expand and enhance the company’s solutions to help IT service providers more efficiently and effectively meet their client’s changing needs.”

The article claims this is just the latest step in the consolidation of the remote monitoring and management (RMM) market. RCP says this trend got rolling with a growth equity firm backing the 2011 spinoff of what eventually became Continuum from Zenith Infotech, followed by 2013’s private equity-funded acquisition and internal development spree at Kaseya, along with new owners for N-Able Technologies (SolarWinds) and Level Platforms Inc. (AVG Technologies).

rb-

I have used the Autotask project module, and IMHO it really needs help. My first beef is not fully with Autotask; rather, it is with all SaaS-based applications: every time a task is updated, Autotask immediately sends the change thru the Inter-tubes and slows any project planning to a crawl, especially when you are used to using Microsoft (MSFT) Project on a LAN.

Speaking of Project, Autotask has no way to directly import any of your existing .mpp files. The best that an Autotask “consultant” could do was have me export the .mpp to an .xls via Project and then import that into Autotask. Really?

Autotask also lacks real-time tools like Project’s Team Planner and Task Inspector.

All-in-all, the project piece of Autotask was a net loss. The new owners of Autotask have their work cut out for them if they are going to make their acquisition profitable.

Related articles
  • OpenDNS Integrates with Autotask to Centralize Security and Account Management for Partners (hispanicbusiness.com)

 

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Sears Converts Stores to Data Centers

Updated 07-12-16: Data Center Frontier reports that Sears ultimately decided to spin off its Sears and Kmart stores as a real estate investment trust (REIT) rather than converting them into data centers.

The blinking blue lights of servers may soon fill the aisles that previously offered the Blue Light Special, according to an article in Data Center Knowledge by Rich Miller. Sears Holdings (SHLD) has formed a new unit to market space from former Sears and Kmart retail stores as a home for data centers, disaster recovery space, and wireless towers.

With the creation of Ubiquity Critical Environments, Sears hopes to convert the retail icons of the 20th century into the Internet infrastructure that powers the 21st-century digital economy. The article says Sears Holdings has one of the largest real estate portfolios in the country, with 3,200 properties spanning 25 million square feet of space. That includes dozens of closed Sears and Kmart stores. Sean Farney, the COO of the newly formed Ubiquity, told DCK he believes the firm has a great asset on its hands.

It’s an amazing real estate portfolio … The goal is not to sell off properties. It’s to reposition the assets of this iconic brand. The big idea is that you have a technology platform laid atop a retail footprint, creating the possibility for a product with a very different look to it.

COO Farney is an industry veteran who previously managed Microsoft’s huge Chicago data center and then ran a network of low-latency services for the financial services firm Interactive Data. He told DCK he sees an opportunity to build three lines of business atop the Sears portfolio: data centers, disaster recovery sites, and “communications colocation,” in which Ubiquity leases rooftop space to wireless providers.

Ubiquity will be able to leverage real estate at both closed stores and some that are still operating, depending on the opportunity. The first step has been to evaluate the portfolio and identify properties that could work as data centers. The article reports that Chicago engineering firm ESD has conducted “data center fitness tests” on promising properties to size up their power, fiber, and risk profiles. Ubiquity is also working with Newmark Grubb Knight Frank to market the portfolio to the brokerage community.

The first Ubiquity project will be a Sears store on the south side of Chicago, nestled alongside the Chicago Skyway. The 127,000-square-foot store will be retrofitted as a multi-tenant data center. Ubiquity’s Farney says he already has a commitment for the first tenant at the site on East 79th Street, which has 5 megawatts of existing power capacity and the potential to expand. “It’s a building that’s lit very well, from both a fiber and power perspective,” Mr. Farney told the author. “It’s going to be a great data center building.”

Mr. Farney acknowledges that many of Sears’ mall-based retail locations aren’t viable for data center usage. “I don’t think the industry is yet ready for a mall-based data center,” he said. “That may take some time. The stand-alone location is optimal.”

Ubiquity has those stand-alone facilities, along with distribution centers and some parcels of vacant land. “There are closed Kmarts that are stand-alone, 200,000-square-foot properties with good fiber and power and 10 acres of parking,” said Mr. Farney. “These are owned assets.”

The article cites the COO, who says Ubiquity has flexibility in how it works with tenants. It could finance a buildout and then hand over a wholesale data center to an enterprise or managed hosting provider, or it could opt for a powered-shell solution for a tenant, depending on the customer’s needs.

After initially focusing solely on data centers, Ubiquity has expanded its strategy, Mr. Miller explains. Although mall-based stores may not be right for data centers, they could be ideal for disaster recovery facilities, Mr. Farney said. That includes mall stores that have closed, as well as those that have downsized to a smaller retail footprint. In either scenario, a separate workspace could be created with an exterior entrance to restrict access, while still allowing employees to take advantage of nearby stores and eateries. Mr. Farney believes this makes sense for the client.

There are compelling reasons why this is a great model … It used to be that business continuity centers were located in an industrial park. The customer has evolved to the point where they want a sexier location, where they can have access to a Starbucks and other retail, because it’s possible they may be there for weeks or months. Sears and Kmart stores are located in just such retail locations in major malls.

The COO also predicts that customers are ready for a more distributed approach to business continuity.

In the past, customers had a single monolithic recovery center … Now, after (Hurricane) Sandy, there’s a need for multiple locations, because you don’t want to be tied to one location in a regional disaster. There’s a desire to have multiple locations to spread costs across multiple areas. The Sears footprint really fits that.

Then there’s wireless, which the article says is the most exciting opportunity. Mr. Farney says that seventy percent of the U.S. population lives within 10 miles of a Sears or Kmart store.

When malls were being built, they gravitated to the intersection of freeways and highways, and Sears got entry to all of them … These rooftops have proximity to the greatest mass of consumers available. As wireless users grow, the size of the cell is shrinking, creating holes in coverage. Having rooftop access to the cars and pedestrians around the malls is important. The Sears portfolio can capture that … There’s tons of interest. I will put as many of the rooftops in play as I can.

rb-

This is rather innovative, out-of-the-big-box thinking and a smart use of space for a company with a huge real estate portfolio.

Sears’ solution to the problem of now-vacant retail buildings isn’t to sell them off for scrap and hope for the best but to hang on to its assets and find a way to make them more profitable. Every struggling company and town in this country could learn a lesson from Sears.

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

The Evolution of Backup

Have you ever stopped to think about how the technology for data protection has evolved? Backup has been around, in one form or another, since 3000 B.C. It has evolved and adapted to take advantage of improvements in technology platforms. Storage vendor Axcient traces the evolution of backup technology from clay tablets to the cloud in this infographic.

Axcient traces the evolution of backup and key events in backup methods.

[Axcient infographic: The Evolution of Backup]

According to CrunchBase, Axcient is an entirely new type of cloud platform. Their technology stack eliminates data loss, keeps applications up and running, and makes sure that IT infrastructures never go down.

Axcient is designed for today’s always-on business. The system replaces legacy backup, business continuity, and disaster recovery software and hardware. They claim it reduces the amount of expensive copy data in an organization by as much as 80%.

By mirroring an entire business in the cloud, Axcient makes it simple to access and restore data from any device. They claim that with a single click their app can configure failover systems, and virtualize your entire office – all from a single deduplicated copy.

rb-

The key to any successful Business Continuity Plan is a solid, verified backup plan. The impact of a major data loss on an SMB can be devastating. The actual numbers are debatable; however, it seems that a significant number of firms go out of business after a major data loss.

There are many new ways to back up your data, from Acronis, Axcient, Barracuda (CUDA), EMC (EMC), Exagrid, HP (HPQ), IBM (IBM), Symantec (SYMC), and Veeam; what is important is that you have a plan, execute it, and test it.


Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

70s Glitch Could Hit Every Computer On Earth

Rebecca Borison at BusinessInsider asks who remembers the 1999 panic over the Y2K crisis. In 1999, Y2K looked as if it might derail modern life: because computers only used two digits to represent a year in their internal clocks, the glitch would reset them to Jan. 1, 1900, rather than Jan. 1, 2000.

Now it’s déjà vu all over again: BI reports there’s a new, even bigger global software coding fiasco looming. A huge amount of computer software could fail around the year 2038 because of issues with the way the code that runs it measures time.

Once again, just like with Y2K, every single piece of software and computer code on the planet must now be checked and updated. That is not a trivial task, according to the author. In 2000, we bypassed the Y2K problem by recoding the software, explains Ms. Borison – all the software, a fantastically laborious retrospective global software patch.

Disruption to the tech industry

Although Y2K was not a disaster, it was a massive disruption to the tech industry at the time. Virtually every company on the planet running any type of software had to find its specific Y2K issues and hire someone to fix them. Ultimately, Y2K caused ordinary people very few problems, but only because there was a huge expenditure of time and resources within the tech business.

The 2038 problem will affect software that uses what’s called a signed 32-bit integer for storing time. The problem arises because such software can only measure a maximum value of 2,147,483,647 seconds, the biggest number you can represent with a signed 32-bit integer.

When a bunch of engineers developed the first UNIX computer operating system in the 1970s, they arbitrarily decided that time would be represented as a signed 32-bit integer (or number) and be measured as the number of seconds since 12:00:00 a.m. on January 1, 1970.

Glitch says it’s 1970 again

On January 19, 2038 – 2,147,483,647 seconds after January 1, 1970 – these computer programs will exceed the maximum value of time expressible by a signed 32-bit integer, and any software that hasn’t been fixed will then wrap back around to zero, thinking that it’s 1970 again.
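A few lines of Python (an illustrative sketch, not from the article) pin down the rollover instant and show where a signed 32-bit counter lands when it wraps:

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
INT32_MAX = 2**31 - 1  # 2,147,483,647 -- the last second a signed 32-bit value holds

# The final representable moment:
print(EPOCH + timedelta(seconds=INT32_MAX))  # 2038-01-19 03:14:07+00:00

def as_int32(n: int) -> int:
    """Emulate signed 32-bit integer overflow."""
    n &= 0xFFFFFFFF
    return n - 2**32 if n >= 2**31 else n

# One second later, a signed counter wraps negative and lands in December
# 1901; implementations that instead clamp or zero the counter read
# January 1, 1970 -- "1970 again," as the article puts it.
print(EPOCH + timedelta(seconds=as_int32(INT32_MAX + 1)))  # 1901-12-13 20:45:52+00:00
```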

UNIX time coding has since been incorporated widely into any software or hardware system that needs to measure time.

BI spoke with Jonathan Smith, a Computer and Information Science professor at the University of Pennsylvania, for confirmation. The professor confirmed that the Year 2038 problem is real and will affect a specific subset of software that counts on a clock progressing positively. He elaborated:

Most UNIX-based systems use a 32-bit clock that starts at the arbitrary date of 1/1/1970, so adding 68 years gives you a risk of overflow at 2038 … Timers could stop working, scheduled reminders might not occur (e.g., calendar appointments), scheduled updates or backups might not occur, billing intervals might not be calculated correctly.

The article concludes that we all just need to switch to higher bit values like 64 bits, which gives a higher maximum. In the last few years, more personal computers have made this shift, as have companies that already need to project time past 2038, like banks that deal with 30-year mortgages.

Apple (AAPL) claims that the iPhone 5S is the first 64-bit smartphone. But the 2038 problem applies to both hardware and software, so even if the 5S uses 64 bits, an alarm clock app on the phone needs to be updated as well. (If it’s using a 32-bit system in 2038, it will wake you up in 1970, so to speak.) So the issue is more of a logistical problem than a technical one.

HowStuffWorks reports that some platforms have different doomsdays; a short sketch after this list checks the arithmetic for the PC case.

  • IBM (IBM) PC hardware suffers from the Year 2116 problem. For a PC, the beginning of time starts at January 1, 1980, and increments by seconds in an unsigned 32-bit integer, much like UNIX time. By 2116, the integer overflows.
  • Microsoft (MSFT) Windows NT uses a 64-bit integer to track time. However, it uses 100 nanoseconds as its increment, and the beginning of time is January 1, 1601, so NT suffers from the Year 2184 problem.
  • On this page, Apple states that the Mac is okay out to the year 29,940!
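As a back-of-the-envelope check (a hypothetical Python sketch, not from the article), the same arithmetic that yields 2038 for UNIX’s signed counter and 1970 epoch gives 2116 for the PC’s unsigned counter and 1980 epoch:

```python
from datetime import datetime, timedelta, timezone

def rollover(epoch: datetime, max_ticks: int) -> datetime:
    """Date at which a counter of `max_ticks` seconds past `epoch` overflows."""
    return epoch + timedelta(seconds=max_ticks)

# UNIX: signed 32-bit seconds since 1970 -> the Year 2038 problem
print(rollover(datetime(1970, 1, 1, tzinfo=timezone.utc), 2**31 - 1))

# PC: unsigned 32-bit seconds since 1980 -> the Year 2116 problem
print(rollover(datetime(1980, 1, 1, tzinfo=timezone.utc), 2**32 - 1))
```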

rb-

The tech industry’s response to Y2K suggests that it will mostly ignore the 2038 issue until the very last minute, when it becomes impossible to ignore. Another example of the pace of global software updates: a majority of ATM cash machines were still running Windows XP, and thus vulnerable to hackers, even though Microsoft discontinued the product in 2007.

Fortunately, the 2038 problem is somewhat easier to fix than the Y2K problem. Well-written programs can simply be recompiled with a new version of the C library that uses 8-byte values for the storage format. This is possible because the C library encapsulates the whole time activity with its own time types and functions (unlike most mainframe programs, which did not standardize their date formats or calculations). So the Year 2038 problem should not be nearly as hard to fix as the Y2K problem was.
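To make the storage-format point concrete, here is a minimal Python illustration (standing in for the C types, not the C library itself): a timestamp one second past the 2038 rollover no longer fits in 4 bytes, but an 8-byte slot holds it with enormous headroom.

```python
import struct

T_PAST_ROLLOVER = 2**31  # one second past the signed 32-bit maximum

# Four-byte signed storage cannot hold a post-2038 timestamp:
try:
    struct.pack("<i", T_PAST_ROLLOVER)   # "<i" = little-endian 4-byte signed int
except struct.error as exc:
    print("4-byte time_t:", exc)         # reports the 32-bit range limit

# Eight-byte storage ("<q") holds it, with headroom for roughly
# 292 billion years:
print("8-byte time_t:", struct.pack("<q", T_PAST_ROLLOVER).hex())
```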


Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.