Archive for RB

Black Hole Data

The first image of a black hole was published on April 10, 2019. The black hole, M87*, at the center of the galaxy Messier 87, is located 53 million light-years from Earth. NASA describes a black hole as an extremely dense object from which no light can escape. Anything that comes within a black hole’s “event horizon” will be consumed because of the black hole’s unimaginably strong gravity.

The first image of a black hole.

By its very nature, a black hole cannot be seen. The bright ring in the picture traces the event horizon, the point beyond which an object approaching the black hole can no longer escape its gravitational pull. Objects that pass the event horizon undergo spaghettification, a process, described by Stephen Hawking, in which gravitational forces stretch the object out like a piece of pasta. The M87* image shows the silhouette of the black hole against the glow around the event horizon, captured by researchers at the Event Horizon Telescope (EHT).

The EHT is the brainchild of Shep Doeleman, director of the EHT and an astronomer at the Harvard-Smithsonian Center for Astrophysics. It is a virtual global array of eight ground-based radio telescopes. The EHT captured around 3.5 PB of data for the black hole image in April 2017; it then took two years to correlate that data into the image. The EHT team not only had to figure out intergalactic science but also had to solve massive information technology problems, ones pretty typical for enterprise IT professionals, only bigger.

According to an article at SearchDataBackup, each EHT telescope can record data at 64 Gbps, and each observation period can last more than 10 hours. The author calculated that each site generated around half a petabyte of data per run. The distributed locations included volcanoes in Hawaii and Mexico, mountains in Arizona and the Spanish Sierra Nevada, the Chilean Atacama Desert, and Antarctica. The sites were kept in sync using precise atomic clocks and GPS timing to carefully coordinate the observations.

Each recording unit wrote data from the telescope at 16 Gbps; by running 4 units in tandem, each site recorded at a total of 64 Gbps across 32 hard disk drives, grouped into 4 modules of 8 disks each.
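Those figures can be sanity-checked with a few lines of arithmetic (a back-of-the-envelope sketch using the rates quoted above; the 10-hour run length is the article's minimum, not an exact figure):

```python
# Per-site recording rate: 4 units in tandem, each writing at 16 Gbps.
RECORDER_RATE_GBPS = 16
UNITS_PER_SITE = 4
site_rate_gbps = RECORDER_RATE_GBPS * UNITS_PER_SITE   # 64 Gbps

# Data volume for a 10-hour observation at that rate.
hours = 10
terabytes = site_rate_gbps / 8 * hours * 3600 / 1000   # GB/s * seconds -> TB
print(f"{site_rate_gbps} Gbps -> {terabytes:.0f} TB per 10-hour run")
# -> 64 Gbps -> 288 TB per 10-hour run
```

At roughly 0.3 PB for exactly 10 hours, runs longer than 10 hours push each site toward the half-petabyte-per-run figure the author calculated.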

Sites making up the virtual Event Horizon Telescope.

 

One problem EHT ran into was the failure rate of traditional hard drives at the extreme telescope locations. ComputerWorld reports that 28 of 32 conventional hard drives failed at the Sierra Negra telescope, atop an extinct volcano in Mexico.

SearchDataBackup says the solution was helium hard drives. The hermetically sealed helium drives are self-contained environments, so they could survive the extreme conditions in which EHT’s telescopes operate. EHT first deployed helium hard drives in 2015. EHT data scientist Lindy Blackburn told SearchDataBackup that EHT now uses about 1,000 helium drives, with up to 10 TB of capacity each, from Western Digital, Seagate, and Toshiba. He told SearchDataBackup,

The move to helium-sealed drives was a major advancement for the EHT … Not only do they perform well at altitude and run cooler, but there have been very few failures over the years. For example, no drives failed during the EHT’s 2017 observing campaign.

The amount of data collected by EHT was too much to send over the Internet, so the researchers went old-school and shipped the drives via FedEx, sneakernet style, to be processed. Geoffrey Bower, an astronomer in Hawaii, told ScienceNews that mailing the disks is always a little nerve-wracking. So far, there have been no major shipping mishaps. But the cost and logistics involved in tracking and maintaining a multi-petabyte disk inventory are also challenging, so EHT is always on the lookout for another method of moving petabyte-scale data.

Cloud computing


SearchDataBackup points out that normally the cloud would be a good option for long-term storage of data consolidated from multiple, globally distributed endpoints. However, Mr. Blackburn told them the cloud was not even a cold storage option for the project: the high recording speed and the sheer volume of data captured made it impractical to upload to a cloud. He explained, “At the moment, parallel recording to massive banks of hard drives, then physically shipping those drives somewhere is still the most practical solution.”
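A rough transfer-time estimate makes the point. The link speeds below are illustrative assumptions, not figures from the EHT; the 3.5 PB is the capture size quoted earlier:

```python
# Time to upload a ~3.5 PB capture at various sustained link speeds.
DATASET_PB = 3.5
dataset_bits = DATASET_PB * 1e15 * 8   # petabytes -> bits

for gbps in (1, 10, 100):
    days = dataset_bits / (gbps * 1e9) / 86400
    print(f"{gbps:>3} Gbps sustained: {days:7.1f} days")
```

Even a dedicated 10 Gbps pipe running flat out from every site would take about a month; a crate of drives on a FedEx truck wins easily.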

The data collected on the helium hard disk drive packs was processed by grid computers of about 800 CPUs, connected through a 40 Gbps network, at the MIT Haystack Observatory in Massachusetts and at the Max Planck Institute for Radio Astronomy in Germany.

Katie Bouman, the MIT student who developed an algorithm that pieced together the data from the EHT, with the disk drives.

Geoff Crew, co-leader of the EHT correlation working group at Haystack Observatory, told SearchDataBackup it is also impractical to use the cloud for computing. Mr. Crew said:

Cloud computing does not make sense today, as the volume of data would be prohibitively expensive to load into the cloud and, once there, might not be physically placed to be efficiently computed.

The EHT scientists built algorithms that converted sparse data into images. They developed a way to cut down the number of possible images by sorting out which results were physically plausible and which were wildly unlikely, making it easier to create the images.
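The idea can be sketched in a toy example (this is not the EHT's actual pipeline, just a minimal illustration of regularized reconstruction): with far fewer measurements than unknowns, infinitely many "images" fit the data exactly, and a penalty term stands in for the plausibility constraints that rule out the wildly unlikely ones.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 24                      # 64 unknown "pixels", only 24 measurements

true = np.zeros(n)
true[20:28] = 1.0                  # a simple bright feature to recover

A = rng.standard_normal((m, n))    # stand-in for sparse telescope sampling
y = A @ true                       # the observed data

# Underdetermined: many solutions fit y. A regularization term (here a
# simple ridge penalty) selects a plausible one among them.
lam = 0.1
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

print("data misfit:", np.linalg.norm(A @ x_hat - y))
```

The EHT's real algorithms used far more sophisticated, physics-based priors than a ridge penalty, but the structure of the problem, picking one plausible image out of the many consistent with sparse data, is the same.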

The Haystack VLBI Correlator grid computer at the MIT Haystack Observatory.

Converting sparse data into images matters beyond astronomy. Mr. Blackburn told FiveThirtyEight the problem comes up in other areas as well: it occurs in medical imaging, when doctors use MRIs to convert radio waves into pictures of your body, and it’s a key part of self-driving cars, which rely on computer vision to “see” everything from potholes to people.

Just like any enterprise, EHT had to find a workable method of data protection. That includes deciding what won’t be protected. EHT has not found a cost-effective way to replicate or protect the raw radio-signal data from the telescope sites. However, once the data has been processed and reduced, it is backed up on-site on several different RAID systems and on Google Cloud Storage. Mr. Crew told SearchDataBackup:

The reduced data is archived and replicated to a number of internal EHT sites for the use of the team, and eventually, it will all be publicly archived. The raw data isn’t saved; we presently do not have any efficient and cost-effective means to back it up.

Mr. Blackburn said the raw data isn’t worth backing up: given the complexity of protecting such a large amount of data, it would be simpler to run another observation and gather a new data set. Mr. Blackburn said, “Backing up original raw data to preserve every bit is not so important.”

Mr. Blackburn said he can’t seriously consider implementing a backup process unless it is “sufficiently straightforward and economical.”

Instead, he said he’s looking at where technology might be in the next five or ten years to find the best method of handling petabyte-scale raw data from the telescopes. Mr. Blackburn told SearchDataBackup:

Right now, it is not clear if that will be continuing to record to hard drives and using special-purpose correlation clusters, recording to hard drives and getting the data as quickly as possible to the cloud, or if SSD or even tape technology will progress to a point where they are competitive in both cost and speed to hard disks.

rb-

The image of the black hole validated Einstein’s general theory of relativity and proved that enterprise-class IT can solve intergalactic problems.

The EHT team had to figure out how to save, move, and back up massive quantities of data and, of course, do more with less. EHT’s Geoff Crew summed up the problem most IT pros have: “Most of our challenges are related to insufficient money, rather than technical hurdles.”

Related articles
  • Trolls hijacked a scientist’s image to attack Katie Bouman. They picked the wrong astrophysicist. (MSN)

 

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Will 5G Save Broadband Over Power Line?

5G could resurrect broadband over power line (BPL). At least that is the hope of AT&T (T). For those who don’t remember the heady rise of BPL, it grew out of an attempt to use the existing electrical grid to deliver broadband everywhere without having to build infrastructure anywhere. A few broadband over power line systems with paying customers got off the ground, but they were all gone by the end of 2010.

AT&T started testing BPL, renamed Project AirGig, in 2016. Now, in 2019, FierceWireless is reporting that AT&T is planning more trials of AirGig that will involve 5G. The telecom behemoth is also working with vendors and technology partners to build commercial-grade 5G equipment for those trials.

Hank Kafka, vice president of access architecture and standards at AT&T, told FierceWireless the company isn’t ready to offer details but said Project AirGig is making progress and will be a very complementary technology to 5G. He told the author that “5G is very high on that list.”

Specifically, AirGig could be used to extend 5G millimeter wave (mmWave) signals beyond their current range. The article says AT&T has launched mobile 5G service in 19 markets so far using mmWave spectrum, but that spectrum has drawbacks because of its limited range compared to lower spectrum bands.

AirGig technology includes a radio distributed antenna system (RDAS) and a mmWave surface wave launcher. The RDAS reconstructs signals for multigigabit mobile and fixed deployments. The mmWave surface wave launchers can power themselves using inductive power devices, without an electrical connection. These devices then create a high-speed signal that travels along or near the power line, providing a broadband connection.

I covered AT&T’s 2018 AirGig trial here. In that trial with Georgia Power Company, the telco used LTE as the transport technology. Mr. Kafka explained that suitable 5G equipment was not available in 2018. “At the time of the trial 5G equipment was large and bulky,” he said. Now that 5G is commercially deployed in some markets, AT&T is working with vendors to get the right type of gear for the new trials.

Not only could AirGig potentially extend the reach of 5G, but it could also be used as a backhaul technology. “If you set up an architecture where AirGig is connecting to 5G radios, it is acting like backhaul … And you can get gigabit speeds and beyond.”

He also said that a commercialized AirGig would be a good fit for small cells because of the way it is architected. In other words, a wireless signal could travel down the power line and hand off to small cells, or be used to backhaul wireless traffic from small cells. This could be profitable for carriers that are getting resistance from municipalities over the siting of their 5G small cells. AirGig might allow small cells to be co-located with utility infrastructure.

rb-

Apparently, AT&T doesn’t have plans to commercially deploy AirGig in the near term, but it has rolled out 5G service in 19 U.S. cities that could benefit from the goals of the BPL AirGig experiment, including:

  • CA: Los Angeles, San Diego, San Francisco, San Jose
  • FL: Jacksonville, Orlando
  • GA: Atlanta
  • IN: Indianapolis
  • KY: Louisville
  • LA: New Orleans
  • NC: Charlotte, Raleigh
  • OK: Oklahoma City
  • TN: Nashville
  • TX: Austin, Dallas, Houston, San Antonio, Waco

Maybe AirGig is on the slow track because there aren’t any smartphones that can use it yet. The Verge points out that AT&T’s only available true 5G device is a mobile hotspot that can’t be purchased in stores.


 


Tax Day 2019

Just in time for Tax Day 2019, the gooberment takes another step backward. ProPublica reports that the so-called Taxpayer First Act is making its way through Congress. Included in the Taxpayer First Act is a provision that would prevent the IRS from creating its own online system of tax filing. A companion Senate bill with the same provision was introduced by Sens. Chuck Grassley, R-IA, and Ron Wyden, D-OR.

If the tax agency created its own program, it would threaten the tax preparation industry’s profits. Companies like Intuit, the maker of TurboTax, and H&R Block have lobbied for years to block the IRS from creating such a system. Hefty lobbying spending and campaign contributions by the tax preparation industry have fueled the efforts to block modernization of the way Americans file their taxes.

Intuit and H&R Block are blocking change

Intuit and H&R Block have poured a combined $6.6 million into lobbying related to the IRS filing deal and other issues. Rep. Richard Neal, D-MA, who led the effort to pass the bill, received $16,000 in contributions from Intuit and H&R Block in the last two election cycles.

Gizmodo describes how the free, zero-effort tax system works in Japan, which employs a withholding tax system. If you’re gainfully employed, your employer just deducts however much you’re supposed to pay and files for you. Most people get a postcard from the Japanese equivalent of the IRS in spring that shows them how much they earned, how much they owe, and how much was withheld. Any adjustments automatically show up in your paystub at the end of the fiscal year. The Gizmodo author says it took a minute and a calculator to check the government’s math.

This could work in America, too. Those annoying W-2 forms your company mails you are also sent to the IRS. The same goes for investment tax forms, 1099s, and all the other official paperwork. The IRS could use these new-fangled computers and the Intertubes to pre-fill your taxes and send them to you online. You could go with the gooberment’s version or file your adjustments.

rb-

So it’s 2019. Amazon, Google, Facebook, and who knows who else know everything about me. I can use my smartphone to socialize, buy a car, order a pizza, talk to my plants, or check my umbrella, but I can’t file my taxes online because of lobbyists. Merica!


 


Chapter 11 Reboot for Sungard AS

Updated 11/18/2022 – 11:11 Systems has completed the acquisition of Sungard Availability Services’ Recovery Services business and Sungard AS’ Cloud and Managed Services business.

Updated 05/08/2019 – Sungard AS emerged from bankruptcy on 05/07/2019. The firm’s turnaround is described as the fastest pre-negotiated restructuring in US corporate history. The result: Sungard AS creditors took an $800 million haircut, and the recovery service received $100 million of new liquidity from its creditors and a new CEO.

The firm’s new ownership and largest shareholders now include Angelo, Gordon & Co., LP; The Carlyle Group Global Credit; FS Investments and GSO Capital Partners LP.

However, the quick fix did not solve the problems that forced the firm into bankruptcy, as described below.

Data infrastructure and disaster recovery company Sungard Availability Services (AS) announced it was filing for bankruptcy on April 01, 2019. Sungard AS, which helped keep Wall Street running through 9/11, says its customers include 70 percent of Fortune 100 companies. The firm boasts 90 hardened IT facilities connected by a redundant, dedicated network backbone, along with 18 mobile facilities staged in strategic locations, but it is saddled with hefty debt from its private equity backers.

In addition to a huge debt load, the once high-flying Pennsylvania-based firm faces falling margins as it struggles with growing competition from cloud rivals amid a shift away from on-premises/co-location backup. These factors forced the firm to seek relief from the courts.

The Sungard AS Chapter 11 plan is expected to be filed in New York during May 2019. The bankruptcy plan reportedly includes a write-off of $800 million of the company’s $1.25 billion debt. Chapter 11 is the part of the US Bankruptcy Code that allows a company to reorganize its debts while continuing to operate the business.

Sungard AS locations

Under the Chapter 11 proposal, hedge fund creditors that specialize in turnarounds and liquidations, sometimes dubbed “vulture capitalists” — including Blackstone Group LP’s GSO debt investment unit, Angelo Gordon & Co., Carlyle Group, and Contrarian Capital Management — will take control of Sungard Availability.

The hedge funds will replace the buyout investors who bought the formerly publicly traded company for $11.4 billion in 2005. The original private equity sponsors included Bain Capital, Blackstone Group, Providence Equity Partners, KKR & Co., Silver Lake Management, and Texas Pacific Group (TPG) Capital.

Despite claims that most creditors back the bankruptcy plan and that Sungard AS would emerge from the wreckage a stronger, more competitive business, the move rocked the industry. Hedge funds are not typically long-term investors, which has alarmed Sungard AS employees about the company’s future. Employees fear the company will be asset-stripped and not survive as the hedge funds seek to recoup money lost on the debt haircut. Sungard AS insists that won’t happen. Sungard employs over 3,000 people, according to its website.

Sungard AS’s “shared infrastructure” data center model, built on physical locations for backup IT systems, has become outdated as cloud-based infrastructure, led by Amazon Web Services and Microsoft Azure, has grown to dominate firms’ IT backup operations.

Andrew A. Stern, Chief Executive Officer of Sungard Availability Services, said:

There’s no question the shift to cloud is part of what’s challenged us. But even before the cloud, by the late 2000s, the approach the company had taken to disaster recovery really hadn’t changed in 20 years — and the world had moved on. … We had been slow in recognizing the business had to change.

Sungard initially tried to meet rival remote-server “cloud”-based systems with its own “private cloud” solutions. But by 2016 its large corporate clients were migrating to the large, secure cloud systems maintained by Amazon, Microsoft, and other giant companies. CEO Stern added, “We suddenly found ourselves competing with much bigger environments at much greater scale.”

Sungard couldn’t beat them, so it signed up as one of 130 Amazon-audited managed service partners, recruiting and customizing Amazon Web Services for corporate disaster-recovery customers, including, most recently, government agencies in England. Mr. Stern added, “But that change has taken time.”

Philly.com summarizes Sungard’s history. Sungard’s lineage starts in the mainframe days. It began as Sun Information Systems, founded in the 1970s as a backup for early data systems at oil and chemical plants run by the former Sun Oil Co. In the 1980s, founder John Ryan diversified the company, offering backup services to banks as they computerized deposit, loan, and investment records. In the tech boom of the late 1990s, publicly traded SunGard Data Systems was worth more than Sun Oil’s parent company, Sunoco.

During this time, SunGard Data acquired competing systems in the same market sector and let them continue competing for a time. In the late 1990s, then-chief executive Cristobal Conde began combining SunGard products into large groups focused on recovery (Availability) and used the profits to buy dozens of financial, government, and college software services across Europe, Asia, and North America.

The 2005 acquisition of SunGard Data by the buyout firms was one of the biggest deals of its kind before the 2008 financial crisis. In 2011 sales peaked at over $5 billion and employment topped 20,000.

But with its owners mostly concerned with pulling cash out of the company, it lost what its leaders admitted was a “tsunami” of corporate customer cancellations as the disaster-recovery market changed and the company didn’t keep up. In 2011, SunGard Data sold its main college business to Virginia-based Ellucian for $1.75 billion.

In 2014, SunGard Data split in two. In 2015, the larger SunGard Data Systems Inc., with sales of $2.8 billion, was sold for $5.1 billion to Florida-based Fidelity Information Services. As a standalone unit, Sungard AS struggled to reach profitability, leading to the bankruptcy announcement.

rb-

Indeed, the cloud has significantly changed disaster recovery in multiple ways.

The hyperscale cloud providers like AWS and Microsoft Azure have entered the market as both competitors and partners.

Cloud disaster recovery has changed the way disaster recovery services are delivered, adding flexibility and enabling remote work.

We have seen the same thing with the demise of Kmart and Sears: Sungard was still reliant on brick-and-mortar DR services.

Let’s see how many Sungard AS customers will continue to invest their DR dollars in a company whose CEO admits it “hadn’t changed in 20 years” and that is willing to write off almost a billion dollars.


 


Is Smilodon Holding Back Your Career?

In case you have not noticed, the world is changing. People who study this stuff say our brains have not changed as much as our surroundings. Our brains are hardwired to keep us safe. It is called “negativity bias,” which means we focus on potential pain more than potential good. This is why change is scary.

These legacy fears are the result of millions of years of our ancestors being prey, not predators. Giant hyenas, cave bears, cave lions, eagles, snakes, other primates, wolves, saber-toothed cats, false saber-toothed cats, and maybe even giant, predatory kangaroos ate early humans. But in our knowledge-based world, the potential pain we expect (a tough meeting with the boss) won’t kill us, yet we feel the same fear and pain as our primal ancestors did when they heard a saber-toothed cat roar.

And so we run the other way. At work, we miss more potential good because we’re hard-wired to avoid potential bad. Taking specific, intentional career risks helps us overcome our ancient hard wiring.

In fact, avoiding risk is one of the most dangerous things you can do for your career long term. After all, if you’re not being proactive about creating success on your terms, whatever and however that looks to you, no one is going to do it for you.

This infographic from Go Jump Las Vegas lists some of the positive influences that risk-taking has on our overall well-being. Step out of your comfort zone and start seizing amazing opportunities.

 

Risk taking

rb-

Think of your fears as the right response at the wrong time. The fear worked 50,000 years ago; it’s simply out of date. You’re running outdated software in your brain.


 
