Tag Archive for Microsoft

What is Quantum Computing?

The world of theoretical physics has been the domain of geniuses like Stephen Hawking and fictional characters such as The Big Bang Theory’s Sheldon Cooper. But now companies like Google (GOOG), IBM (IBM), and Intel (INTC) are building quantum computing systems that may soon outperform even the fastest supercomputers in the world. So, it’s a good time to learn some basic quantum computing terms and concepts.

It’s physics

Quantum computing is based on quantum physics, the branch of modern physics that explains the nature and behavior of matter and energy at the atomic and subatomic levels. It is also called quantum theory or quantum mechanics. Quantum computers use quantum physics to compute.

Before quantum physics, “classical” physics explained the world around us (calculations of speeds, rotations, weights, forces …). Then came Einstein, who explained the “infinitely large”: the universe, time, the big bang, black holes. But classical mechanics did not explain everything, and this is where quantum physics steps in. The world of atoms, the infinitely small, does not work like the world that we humans see every day. The algebra story problems about a ball bouncing off a wall at 37 degrees and landing 43 feet away no longer apply in the world of quantum physics. Quantum computing devices use these newly discovered properties to perform computations using quantum bits, or qubits.

Classical computers

Pierre Pinna at IPFCOnline explains that the “classical” computer sitting on your desk manipulates information (software, texts, pictures, videos, etc.). Inside your laptop, this information is made up of “1”s and “0”s. Every computer has one or more microprocessors that manipulate those 0s and 1s, applying basic operations (addition, subtraction, multiplication) to “order” the 1s and 0s into software, texts, pictures, videos, etc.

The 1s and 0s are physically created by electric current inside transistors. Each transistor can be on or off, which indicates the 1 or 0 used to compute the next step in a program.

When the transistor is open, the electric current does not pass through it and we say we are in state “0”; conversely, when the transistor is closed, the current can pass through it and we are in state “1.” The transistors inside the CPU can be combined into logic gates to perform logic operations like “OR,” “XOR,” and “AND.” A classical computer’s 1s and 0s are called “bits.”
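
To make the gate idea concrete, here is a minimal Python sketch (my own illustration, not from the article) that treats 0s and 1s as bits and combines them the way logic gates do, including a half adder that builds addition out of XOR and AND:

```python
# Bits are just 0s and 1s; Python's bitwise operators mirror hardware gates.
def AND(a: int, b: int) -> int:
    return a & b

def OR(a: int, b: int) -> int:
    return a | b

def XOR(a: int, b: int) -> int:
    return a ^ b

def half_adder(a: int, b: int) -> tuple:
    """Addition from gates: XOR gives the sum bit, AND gives the carry bit."""
    return XOR(a, b), AND(a, b)

# Truth tables for every combination of two input bits.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"a={a} b={b}: AND={AND(a, b)} OR={OR(a, b)} XOR={XOR(a, b)} sum={s} carry={c}")
```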

Quantum computers

Quantum computers also handle “1”s and “0”s, just like your laptop. But the information is no longer manipulated by transistors; it is manipulated by atomic and subatomic particles (electrons, protons, ions, photons, neutrons, etc.). You know, the stuff they taught in Mr. Birchmeier’s high school science class. Quantum computers don’t use bits; they use quantum bits (qubits). And that’s where quantum computing gets interesting: the subatomic world does not work like the physical world we live in. Quantum physics explains how the subatomic world works.

Tristan Greene at TNW writes that qubits have capabilities that bits don’t. Instead of only being represented as a 1 or a 0, qubits can actually be both at the same time. Mr. Greene writes that qubits, when unobserved, are considered to be “spinning.” Instead of referring to these “spin qubits” using ones or zeros, they’re measured in states of “up,” “down,” and “both.”

This lab at IBM houses quantum machines connected to the cloud.

The IPFCOnline article explains that to better understand all of this, we must see each particle as a wave and not a single physical element. The particles are then characterized by their “spin” to create a state called superposition.

Mr. Greene at TNW writes that quantum superposition in qubits can be explained by flipping a coin. We know that the coin will land in one of two states: heads or tails. This is how classical computers think. A qubit is like the coin while it is still spinning in the air: it is actually in both states at the same time. Essentially, until the coin lands, it has to be considered both heads and tails simultaneously.
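
To make the spinning-coin picture concrete, here is a small Python sketch (my own illustration, using NumPy) of a single qubit: a Hadamard gate puts it into an equal superposition, and “observing” it collapses it to 0 or 1 with equal probability:

```python
import numpy as np

# A qubit's state is a 2-element vector of amplitudes for |0> and |1>.
ket0 = np.array([1.0, 0.0])            # definitely 0 -- the coin lying heads-up

# The Hadamard gate creates an equal superposition: the spinning coin.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
spinning = H @ ket0                     # amplitudes are (0.707, 0.707)

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(spinning) ** 2
print(probs)                            # [0.5 0.5] -- 50/50, like heads or tails

# Observation collapses the superposition, like the coin landing.
print("landed on:", np.random.choice([0, 1], p=probs))
```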

Quantum computing uses superposition

Superposition is based on observation theory, which basically says the universe acts one way when we’re looking and another way when we aren’t. Mr. Pinna at IPFCOnline writes that with superposition, while we do not know what the state of an object is, it is actually in all possible states simultaneously, as long as we don’t look to check. To illustrate this theory, we can use the famous and somewhat cruel thought experiment of Schrödinger’s cat, in which a cat in a sealed box must be considered both alive and dead at the same time.

All of these subatomic activities make the quantum computer very sensitive to disturbances from the outside world. When quantum computers are disturbed they become unstable, lose their quantum properties, and behave like classical computers. In order to keep the quantum properties of the system, it must be protected from the outside world. According to the article, this is typically done by cooling the quantum computer to temperatures very close to absolute zero (-273.15°C, colder than space). Another factor when working with qubits is noise: the more qubits a system has, the more errors you get.

All of these factors make working with qubits incredibly difficult. These challenges are made worse by the unsustainable amount of electricity currently needed to generate quantum computing results. Reports are that one quantum computer burns about 20 megawatts of electricity — enough to power 20,000 households.

Therefore, the theoretical speed gains of current state-of-the-art quantum computing are limited by the cost, size, and instability of the system. Right now, quantum computers aren’t worth the trouble and money they take to build and operate. A quantum computer is not going to run MS Word on your desktop.

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Happy Birthday to IPv6

You are forgiven if you missed IPv6’s birthday (I did). The next-generation network addressing scheme turned 6 years old back in June. June 6, 2012, was World IPv6 Launch Day, when everybody was supposed to permanently enable IPv6 on their networks. The results? Not so good. There are global highlights, but three-quarters of internet users still regularly connect to the Intertubes over legacy IPv4.

The Internet Society rightly points out that enterprise operations tend to be the “elephant in the room” when it comes to IPv6 deployment. If only 26% of networks advertise IPv6 autonomous system prefixes, then 74% do not, and most of that 74% are likely to be enterprise networks.
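
The scale of the problem IPv6 solves is easy to demonstrate. Here is a quick sketch using Python’s standard ipaddress module (the addresses are illustrative documentation examples, not real hosts):

```python
import ipaddress

# IPv4 has 32-bit addresses; IPv6 has 128-bit addresses.
print(f"IPv4 addresses: {2**32:,}")      # 4,294,967,296 -- and we ran out
print(f"IPv6 addresses: {2**128:,}")     # roughly 3.4 x 10^38

# The stdlib parses both families the same way.
print(ipaddress.ip_address("192.0.2.1").version)        # 4
print(ipaddress.ip_address("2001:db8::1").version)      # 6

# A single /64 -- the standard IPv6 LAN size -- holds 2^64 addresses,
# more than four billion times the entire IPv4 internet.
print(f"{ipaddress.ip_network('2001:db8::/64').num_addresses:,}")
```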

Enterprises have traditionally been reluctant to embrace IPv6. There has been no real need to implement it, with many seeing it as an additional cost and risk with no direct use for their daily business. Those costs include money, but also people and time.

Migrating to IPv6 will be hard. The migration will involve every department of the organization and every piece of equipment connected to the network. Then consider that the migration will be made over time, and that everyone needs to be on the same page, working together, for the best outcome and smoothest transition.

Legacy systems are, in essence, older systems. They likely lack some functionality that is common in current technology, but they persist because they still perform a key function for the organization just fine, so there has been no reason to replace them. However, this attitude is starting to change.

Larger and more tech-savvy enterprises are forging innovative paths forward. CircleID points to Microsoft (MSFT), which made one of the first publicly announced purchases of IPv4 address space, reportedly purchasing 666,000 addresses at $11.25 per address in 2011. In a recent blog post, Microsoft described the steps it is taking to turn off IPv4 and become an IPv6-only company. Its description of its heavily translated IPv4 network includes phrases like “potentially fragile” and “operationally challenging,” and it calls dual-stack operations “complex.”

Outside of the enterprise space, there’s still the rest of the Internet that needs to make the migration. According to the stats in the article, the top carriers in the U.S. still carry less than half of the IPv6 traffic that the Indian ISP Reliance Jio carries. The Internet Society takes the happy view that the excuse that “no one is doing IPv6” is gone. For many people and networks, IPv6 is the new normal and is the future of Internet connectivity.

Some of the highlights for IPv6 are:

  • 237 million people in India connect over IPv6.
  • Mobile operators are adopting IPv6; some have over 80% or even 90% of their devices connecting over IPv6.
  • 28% of the Alexa Top 1000 websites are IPv6-enabled (a quick way to check any one site is sketched below).
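
A rough approximation of how such surveys work: a site counts as IPv6-enabled if its hostname publishes an AAAA (IPv6) record. A sketch using only Python’s standard library (the hostnames are just examples):

```python
import socket

def has_ipv6(hostname: str) -> bool:
    """True if the hostname resolves to at least one IPv6 (AAAA) address."""
    try:
        return len(socket.getaddrinfo(hostname, None, socket.AF_INET6)) > 0
    except socket.gaierror:      # no AAAA record, or the name doesn't resolve
        return False

for site in ("www.google.com", "example.com"):
    print(site, "->", "IPv6-enabled" if has_ipv6(site) else "IPv4-only")
```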

Source: Internet Society, State of IPv6 Deployment 2018

National mobile networks are driving the global adoption of IPv6. Some mobile networks are taking the step to run IPv6-only to simplify network operations and cut costs. Japan and India are leaders in IPv6 adoption.

The Indian wireless carrier Reliance Jio has an 87% IPv6 deployment rate.

In Japan, the top three wireless carriers (NTT DOCOMO, KDDI, and SoftBank) are deploying IPv6, and the major U.S. wireless carriers are deploying IPv6 as well.

Many home and business users get Internet connectivity from broadband ISPs, and many broadband ISPs have deployed IPv6 on their networks. They send the majority of their traffic over IPv6 to major content providers. For example, Comcast (CMCSA), the largest broadband ISP in the U.S., is actively deploying IPv6. Per the World IPv6 Launch website, Comcast has an IPv6 deployment measurement of over 66%. Globally, broadband ISPs are also deploying IPv6.

The following table from the Internet Society lists the top IPv6 carriers based on the number of users.

| Rank | ISP                 | Country       | IPv6 Users (estimated) |
|------|---------------------|---------------|------------------------|
| 1    | Reliance Jio        | India         | 237,600,764            |
| 2    | Comcast             | United States | 36,114,435             |
| 3    | AT&T                | United States | 22,305,974             |
| 4    | Vodafone India      | India         | 18,368,165             |
| 5    | Verizon Wireless    | United States | 15,422,684             |
| 6    | Idea Cellular       | India         | 14,681,694             |
| 7    | Deutsche Telekom AG | Germany       | 14,261,836             |
| 8    | T-Mobile USA        | United States | 14,057,105             |
| 9    | KDDI Corporation    | Japan         | 11,871,952             |
| 10   | Sky Broadband       | Great Britain | 11,829,610             |
| 11   | Claro               | Brazil        | 10,235,805             |
| 12   | SoftBank            | Japan         | 8,613,145              |
| 13   | Orange              | France        | 7,924,119              |
| 14   | AT&T Wireless       | United States | 7,694,881              |
| 15   | Cox Communications  | United States | 6,316,462              |
| 16   | Kabel Deutschland   | Germany       | 5,835,590              |
| 17   | SK Telecom          | Korea         | 5,764,073              |
| 18   | NTT Communications  | Japan         | 5,596,206              |

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Undersea Data Center

Updated 08/09/2019 – Microsoft has installed two underwater cameras that offer live video feeds of the sunken data center. You can now watch all kinds of sea creatures swimming around a tank that holds 27.6 petabytes of data.

Followers of the Bach Seat know that Microsoft (MSFT) has experimented with undersea data centers to save on the costs associated with deploying data centers. Back in 2015, I wrote about MSFT’s initial experiment off the California coast, where MSFT first tried out the idea of an underwater data center. Redmond has announced phase 2 of Project Natick. Phase 2 is designed to test the practical aspects of deploying a full-scale, lights-out data center underwater, called “Northern Isles.”

Kurt Mackie wrote in an article at Redmond Magazine that Microsoft is testing this underwater data center off the coast of Scotland near the Orkney Islands in the North Sea. Microsoft wants to place data centers offshore because about half the world’s population lives within 125 miles of a coast. Locating data closer to its users reduces latency for bandwidth-intensive applications such as video streaming and gaming, as well as emerging artificial intelligence-powered apps. Latency is the time it takes data to travel from its source to customers; it is like the difference between running an application from your hard drive and running it over the network.
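
The physics behind that is simple: light in optical fiber travels at roughly two-thirds the speed of light, about 200 km per millisecond. A back-of-the-envelope sketch (my own numbers, ignoring routing and queuing delays):

```python
# Light in fiber covers roughly 200 km per millisecond (about 2/3 of c).
FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Best-case propagation delay there and back, ignoring routing overhead."""
    return 2 * distance_km / FIBER_KM_PER_MS

# A coastal data center ~200 km (~125 miles) away vs. one across a continent.
for km in (200, 4000):
    print(f"{km:>5} km -> ~{round_trip_ms(km):.0f} ms round trip")
```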

Mr. Mackie posts that while the original underwater data center had the computing power of 300 PCs, Phase 2’s computing power is equal to “several thousand high-end consumer PCs,” according to Microsoft’s FAQ page. This next-generation underwater data center requires 240 kW of power, is 40 feet in length, and holds 12 racks with 864 servers. The submarine container is mounted on a metal platform on the seafloor, 117 feet deep. The Phase 2 data center can house 27.6 petabytes of data, and a fiber-optic cable keeps it connected to the outside world. Naval Group, a 400-year-old French company, built the submersible part of the project.
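
A quick sanity check of those published specs (my own arithmetic, assuming the figures above):

```python
# Published Phase 2 specs, per the article and Microsoft's FAQ.
racks, servers, power_kw, capacity_pb = 12, 864, 240, 27.6

print(f"{servers / racks:.0f} servers per rack")             # 72
print(f"~{power_kw * 1000 / servers:.0f} W per server")      # ~278 W
print(f"~{capacity_pb * 1000 / servers:.0f} TB per server")  # ~32 TB
```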

The interesting part (U.S. Navy submarines have had computers onboard for years) is the lights-out part. Lights-out operation allows Microsoft to change how data centers are deployed, starting with cooling. Northern Isles’ cold-aisle temperature is kept at a chilly 54°F (12°C) to remove the stress that temperature variations place on components. This temperature is maintained by using a heat-exchange process developed for cooling submarines. Ben Cutler, Microsoft Research’s Project Natick lead, told Data Center Knowledge, “... by deploying in the water we benefit from ready access to cooling – reducing the requirement for energy for cooling by up to 95%.”

With Phase 2, Mr. Cutler explained to DCK, there is no external heat exchanger: “We’re pulling raw seawater in through the heat exchangers in the back of the rack and back out again.” This cooling system could cope with very high power densities, such as the ones required by GPU-packed servers used for heavy-duty high-performance computing and AI workloads.

According to DCK, the first iteration of Project Natick had a Power Usage Effectiveness (PUE) rating of 1.07 (compared to 1.125 for Microsoft’s latest-generation data centers). The lower the PUE metric, the more efficiently the data center uses electricity. Microsoft hopes to improve the PUE for the Phase 2 data center.
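
PUE is simply total facility power divided by the power the IT equipment itself draws, so 1.0 is the unreachable ideal. A small sketch of what those ratings imply (my own illustration, assuming for the sake of the math that the 240 kW figure above is the IT load):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total power / IT power; 1.0 is the ideal."""
    return total_facility_kw / it_equipment_kw

IT_LOAD_KW = 240  # assumption: treat the published 240 kW figure as IT load

# Overhead (cooling, power conversion) implied by each PUE rating.
for rating in (1.07, 1.125):
    total = IT_LOAD_KW * rating
    print(f"PUE {pue(total, IT_LOAD_KW):.3f}: ~{total - IT_LOAD_KW:.0f} kW of overhead")
```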

Data centers are believed to consume up to 3% of the world’s electricity. The new cooling options change the Northern Isles data center’s power requirements. It can run off the Orkney Islands’ local electrical grid, which is powered by renewable wind, solar, and tidal sources. One of the goals of the project is to test powering the data center with an off-the-grid source, such as nearby tidal power.

Future versions of the underwater data center could also have their own power generation. Mr. Cutler told DCK, “Tide is a reliable, predictable sort of a thing; we know when it’s going to happen … Imagine we have tidal energy, we have battery storage, so you can get a smooth roll across the full 24-hour cycle and the whole lunar cycle.”

This would allow Microsoft to do away with backup generators and rooms full of batteries. They could over-provision the tidal generation capacity to ensure reliability (13 tidal turbines instead of 10, for example). Mr. Cutler says, “You end up with a simpler system that’s purely renewable and has the smallest footprint possible.”

The Northern Isles underwater data center is designed to run without being staffed, which cuts down on human error. It is designed with a “fail-in-place” approach: failed components are not serviced, they are just left in place. Operations are monitored by artificial intelligence. Mr. Cutler said, “There’s a lot of data showing that when people fix things they’re also likely to cause some other problem.”

Operating in “lights-out” mode with no human presence allows most of the oxygen and water vapor to be removed from Northern Isles’ atmosphere; MSFT replaced the oxygen with 100% dry nitrogen. This environment should greatly cut the amount of corrosion in the equipment, a major problem in data centers on land. Mr. Cutler told DCK, “With the nitrogen atmosphere, the lack of oxygen, and the removal of some of the moisture is to get us to a better place with corrosion, so the problems with connectors and the like we think should be less.”

The Redmond Magazine article says Project Natick’s Phase 2 has already proved that it’s possible to deploy an underwater data center in less than 90 days “from the factory to operation.” The logistics of building underwater data centers are very different from building data centers on land. Northern Isles was manufactured via a standardized supply chain, not as a construction process. Mr. Cutler said, “Instead of a construction project, it’s a manufactured item; it’s manufactured in a factory just like the computers we put inside it, and now we use the standard logistical supply chain to ship those anywhere.”

The data center itself is more standardized: it was purposely built to the size of a standard ISO shipping container, so it can be shipped by truck, train, or ship. Naval Group shipped Northern Isles to Scotland on a flatbed truck. Mr. Cutler told DCK, “We think the structure is potentially simpler and more uniform than we have for data centers today … the expectation is there actually may be a cost advantage to this.”

The rapid deployment of these data centers doesn’t just mean expanding faster; it also means spending less capital. Mr. Cutler explained, “It takes us in some cases 18 months or two years to build new data centers … Imagine if instead … where I can rapidly get them anywhere in 90 days. Well, now my cost of capital is very different … As long as we’re in this mode where we have exponential growth of web services and consequently data centers, that’s enormous leverage.”

rb-

If Project Natick stays on the same trajectory, MSFT could bring data centers to any place in the developed or developing world without adding more stress on local infrastructure. MSFT’s Cutler told DCK, “There’s no pressure on the electric grid, no pressure on the water supply, but we bring the cloud.”

As more of the world’s population comes online, the need for data centers is going to skyrocket, and having a fast, green solution like this would prove remarkably useful.

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

ATM Jackpotting

The U.S. Secret Service has warned (PDF) financial institutions of logical (jackpotting) attacks on Automated Teller Machines (ATMs). These ATM attacks originated in Mexico and have spread to the US. These jackpotting attacks are an industry-wide issue and, as one vendor stated, “a call to action to take appropriate steps to protect their ATMs against these forms of attack and mitigate any consequences.”

The attack involves a series of steps to defeat the ATM’s existing security mechanisms and the authorization process for setting up communication within the ATM. Those internal communications are normally used when computer components like the mainboard or the hard disk have to be exchanged for legitimate reasons.

Description of an ATM attack

In a jackpotting attack, the criminal gains access to the internal infrastructure of the terminal, either to infect the ATM PC or to completely exchange the hard disk (HDD). There are a number of steps the attacker has to take for this type of attack:

  1. The top of the ATM must be opened.
  2. The original hard disk of the ATM is removed and replaced by another hard disk, which the attackers have loaded with an unauthorized and/or stolen image of ATM platform software.
  3. In order to pair this new hard drive with the dispenser, the dispenser communication needs to be reset, which is only allowed when the safe door is open. A cable in the ATM is unplugged to fool the machine into allowing the crooks to add their bogus hard drive to the ATM.
  4. A dedicated button inside the safe needs to be pressed and held to start the dispenser communication. The crooks insert an extension into existing gaps next to the presenter to depress the button. CCTV footage has shown that criminals use an industrial endoscope to complete the task.

Other jackpotting attacks use portions of a third-party, multi-vendor application software stack to drive ATM components. Brian Krebs at Krebs on Security reports that the Secret Service issued a warning that organized criminal gangs have been attacking stand-alone ATMs in the United States using “Ploutus.D,” an advanced strain of jackpotting malware first spotted in 2013.

Mr. Krebs also reports that “during previous attacks, fraudsters dressed as ATM technicians and attached a laptop computer with a mirror image of the ATM’s operating system, along with a mobile device, to the targeted ATM,” according to the confidential Secret Service alert. Once this is complete, the fraudsters own the ATM, and it will appear Out of Service to potential customers. At this point, the crook(s) installing the malware will contact co-conspirators who can remotely control the ATMs and force the machines to dispense cash.

“In previous Ploutus.D attacks, the ATM continuously dispensed at a rate of 40 bills every 23 seconds,” the alert continues. Once the dispense cycle starts, the only way to stop it is to press cancel on the keypad; otherwise, the machine is completely emptied of cash, according to the alert. “While there are some risks of the money mule being caught by cameras, the speed in which the operation is carried out minimizes the mule’s risk.”

Specific Guidance and Recommendations

The most common forms of logical attack against ATMs are “Black Box” and “Offline Malware.” The steps to minimize the risks to ATMs are the same as for any other enterprise device.

  1. Make sure firmware and software are current with the latest updates; this is an important protection against Black Box attacks. Four out of five cash machines still run Windows XP or Windows XP Embedded. The Secret Service alert says ATMs still running Windows XP are particularly vulnerable, and it urged ATM operators to update to at least Windows 7 to defeat this specific type of attack.
  2. Use secure hard drive encryption to protect against Offline Malware.
  3. Use a secure BIOS remote control app to lock the ATM BIOS configuration and protect the configuration with a password.
  4. Deploy an application whitelisting solution (a toy sketch of the idea follows this list).
  5. Limit Physical Access to the ATM:
    • Use appropriate locking mechanisms to secure the head compartment of the ATM.
    • Control access to areas used by staff to service the ATM.
    • Implement two-factor authentication (2FA) controls for service technicians.
  6. Set up secure monitoring.
  7. Use the most secure configuration of encrypted communications. In cases where the complete hard disk is being exchanged, encrypted communications between ATM PC and dispenser protect against the attack.
    • Ensure proper hardening and real-time monitoring of security-relevant hardware and software events.
    • Investigate suspicious activities, such as deviating or inconsistent transaction or event patterns, which can be caused by an interrupted connection to the dispenser, and monitor for unexpected opening of the top-hat compartment of the ATM.
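
To illustrate the idea behind application whitelisting (item 4 above): only binaries whose cryptographic hashes were pinned at install time are allowed to run, so malware arriving on a swapped hard disk is refused. A toy Python sketch with a hypothetical install path; real whitelisting products enforce this at the kernel level:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Fingerprint a binary so it can be pinned in the allowlist."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Hypothetical: hash the known-good vendor binaries once, at install time.
VENDOR_BIN = Path("C:/ATM/bin")
allowlist = {sha256_of(p) for p in VENDOR_BIN.glob("*.exe")}

def may_execute(path: Path) -> bool:
    """Anything not pinned -- e.g. malware from a swapped disk -- is refused."""
    return sha256_of(path) in allowlist
```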

rb-

Followers of the Bach Seat know how to secure their PCs; I have written about securing PCs many times here. So the question is, why not ATMs? Research says that consumers go into the branch less every year; the experts say that by 2022 customers will visit a branch only four times a year. In many cases, ATMs are the bank’s surrogates for most cash transactions. It makes sense to get it right.

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Barracuda Networks Has Been Bought

While the massive Equifax data breach is still fresh in everyone’s minds, the cybersecurity workforce is expected to be short nearly 2 million people, and IT security expenditures are expected to top $1 trillion by 2022. Private equity giant Thoma Bravo, LLC has jumped back into the IT security market with both feet: Barracuda Networks has been bought by the private equity firm in a deal valued at $1.6 billion.

Barracuda (CUDA) sells appliance- and cloud-based cybersecurity and data protection services. Clients include Boeing, Microsoft, and the U.S. Department of Defense. Barracuda says it has over 150,000 customers. Upon the close of the transaction, Barracuda will operate as a privately held company.

Barracuda Networks has been bought

Barracuda Networks was founded in Ann Arbor, Michigan in 2003. From Ann Arbor, it raised at least $46 million in venture funding prior to its IPO. CUDA went public on the New York Stock Exchange in November 2013, pricing its IPO at $18 per share. Barracuda acquired Yosemite Technologies in 2009 to expand its offerings into the storage market.

Barracuda continued to innovate in the run-up to its acquisition. eWeek reports that in March 2017, Barracuda debuted new data backup and recovery capabilities for VMware and Microsoft virtual machines. In June 2017, Barracuda announced its new Sentinel service, which uses artificial intelligence (AI) and container-based technologies to improve email security.

Barracuda also enhanced its network security products and services in 2017. eWeek reported in November that the company expanded the cloud capabilities of its Web Application Firewall (WAF) and NextGen Firewall products. The new capabilities include usage-based billing for the NextGen Firewall running in the Amazon Web Services (AWS) cloud, and the WAF gained automated configuration capabilities thanks to an integration with the Puppet DevOps tool.

CEO BJ Jenkins commented on the transaction, “We will continue Barracuda’s tradition of delivering easy-to-use, full-featured solutions that can be deployed in the way that makes sense for our customers.”

Thoma Bravo

Thoma Bravo is a Chicago-based private equity firm with $17 billion under management, and its appetite for IT firms is rather broad. Some of its most notable purchases have been:

  • September 2014 – $2.4 billion purchase of Detroit-based Compuware.
  • December 2014 – $3.6 billion acquisition of Riverbed.
  • October 2015 – teamed up with Silver Lake to buy IT infrastructure management vendor SolarWinds for $4.5 billion.
  • April 2017 – purchased a minority stake in the freshly re-spun McAfee.
  • June 2017 – purchased remote monitoring and management (RMM) and IT security management vendor Continuum.

Their portfolio has included brands such as Bomgar, DigiCert, Digital Insight, Dynatrace, Hyland Software, Imprivata, iPipeline, Nintex, Planview, Qlik, SailPoint, and SonicWall.

Thoma Bravo has resold many of its holdings in recent years.

TechCrunch notes that private equity firms began more aggressively buying up software companies last year. The thinking seems to be they can generate reliable returns from such investments. The biggest take-private deals lately include:

  • Marketo, a marketing software maker that went public in 2013, was taken private again by Vista Equity Partners in 2017 for $1.79 billion in cash.
  • Event-management company Cvent was sold last year to Vista Equity Partners in a $1.65 billion deal.

Venture money is flowing into IT security as well: cybersecurity risk-monitoring platform SecurityScorecard raised $27.5 million from the VC arms of Google, Nokia, and Intel. Other notable IT security equity funding recipients include Attivo Networks, Darktrace, and SentinelOne.

Investopedia speculates that Thoma Bravo is paying a pretty high premium for Barracuda. CUDA now trades at 139 times earnings and 4 times sales. But under private management, its products will likely be integrated with the firm’s other software products to generate synergies.

CRN notes that being a privately owned company will give Barracuda a stronger ability to chart its own destiny. They will not have to “tap-dance to the Wall Street music,” said Michael Knight, president and chief technology officer at solution provider Encore Technology Group, Greenville, S.C. He hopes Thoma Bravo’s infusion of capital will enable Barracuda to continue driving its public cloud business, a more solidified SD-WAN toolset, and more integrated endpoint security protection.

rb-

I have used Barracuda products at past jobs, including their spam/email firewall appliances and their cloud-based backup system. The pricing was adequate, renewals were easy, and the email firewalls were really robust and almost set-and-forget.

The few times I needed tech support, it was available in Ann Arbor, Michigan. Barracuda, founded in Ann Arbor, was one of the early believers in the area as a high-tech hub. Barracuda has plans to spend $2.3 million on the expansion of its operations center in the former Borders Books offices at 317 Maynard Street. The expansion will add 115 new jobs in downtown Ann Arbor over the next four years. I hope that after Barracuda Networks has been bought by Thoma Bravo, the deal does not bring in a “Chainsaw Al” who will kill that growth.

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.