Tag Archive for Cloud storage

It Is World Data Backup Day 2021

The tenth annual World Backup Day is March 31, 2021. World Backup Day is a reminder to back up the data on your computer, your phone, and your other mobile devices. Backing up is a not-so-hard way to avoid disaster, because your chances of losing your data are pretty good.

Consider the following:

  • 30% of people have never backed up
  • 113 phones lost or stolen every minute
  • 1 in 10 computers is infected with a virus every month
  • 31% of PC users have lost all of their files due to events beyond their control
  • 140,000 hard drives crash in the US every week
  • 60% of companies that lose their data will shut down within 6 months of the disaster



Your data is worth more than your devices

Hardware is cheap and getting cheaper. But what is the value of the new business plan you spent three months writing? The music and movies on your devices? The cute video of your kid’s trip to the beach, or your puppy being a goof? You can get a new computer or phone, but you can’t replace those important files without a backup.

Why you should have a data backup plan

There are several scenarios in which a backup of your data would save the day:

  • Your phone gets stolen, and you lose all your pictures and videos.
  • An external hard drive crashes, deleting your home videos.
  • You forget your laptop in a cafe and you’ve lost all your homework.
  • A ransomware virus holds your data hostage until you pay to unlock it.
  • You accidentally delete something important.

What to do?

The advantage of having your important data backed up off-site, away from your home or office, is that it’s safe from theft, fire, and other local disasters. When you back up your data, you make a second copy of files you don’t want to lose. Should something happen to the originals, you can restore the copies to your computer or mobile device.

Technically, a backup is just any piece of data that exists in two places. The primary purpose of a data backup is to have a recovery plan should the primary data become inaccessible. It is common to keep backups off-site, such as online, or at the very least on a second hard drive, even another internal one.

Your data backup options

There are two types of cloud services that can hold your data backups. The first is a cloud storage service, which keeps your data safely backed up online. A cloud storage service is a place to selectively upload important files that you need to keep off of your physical device.

If you are a Microsoft 365 customer, OneDrive cloud backup is included in most plans.

If you prefer Google, Google Drive is a cloud backup option to investigate.

iCloud is cloud storage for Apple devices.

There are lots of other cloud storage services to pick from.

Some argue that using these services gives the tech titans more access to your data. If that concerns you, there is a second option: cloud backup services, which back up your data automatically and on a schedule. There are many cloud backup services to choose from as well.

When backing up to the cloud, be sure you understand the level of encryption the provider offers. When you encrypt data, you encode it so that only authorized people can read it. It is up to you to keep your backup secure. Use a strong password and choose the 448-bit option, the maximum encryption offered by many providers. It would take a computer millions of years to crack that encryption and gain access to your data.
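To make the idea concrete, here is a minimal Python sketch of encrypting a backup file locally before it ever leaves your machine. It uses AES-256-GCM from the cryptography package as a stand-in for whatever cipher your provider actually offers (Blowfish-448, for example); the file names are hypothetical.

```python
# Minimal sketch: encrypt a backup file locally before uploading it.
# AES-256-GCM here is a stand-in for your provider's cipher of choice.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_backup(in_path: str, out_path: str, key: bytes) -> None:
    nonce = os.urandom(12)  # GCM nonce; must be unique per encryption
    with open(in_path, "rb") as f:
        plaintext = f.read()
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    with open(out_path, "wb") as f:
        f.write(nonce + ciphertext)  # keep the nonce with the ciphertext

key = AESGCM.generate_key(bit_length=256)
encrypt_backup("backup.tar", "backup.tar.enc", key)
# Store the key somewhere safe; losing it means losing the backup.
```

Whatever tool you use, the point is the same: the data is encrypted before the upload, and only you hold the key.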

Don’t forget to test your data backup

Remember that you haven’t really backed anything up unless you can restore it.

Many people are unable to restore their backups because they forgot or lost their decryption password (keep it somewhere secure, but not in your backup). Others never did a practice restore, so they weren’t familiar enough with their tool to use it reliably when the pressure was on.
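A practice restore does not need to be elaborate. One simple approach is to restore a backup into a scratch folder and compare file checksums against the originals; the folder names in this Python sketch are hypothetical.

```python
# Practice-restore check: restore a backup to a scratch folder, then
# verify every file's checksum against the original copy.
import hashlib
from pathlib import Path

def checksums(folder: str) -> dict:
    """Map each file's relative path to its SHA-256 digest."""
    root = Path(folder)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*")
        if p.is_file()
    }

original = checksums("Documents")     # hypothetical source folder
restored = checksums("restore-test")  # hypothetical restore target

if original == restored:
    print("Restore verified: every file matches.")
else:
    bad = [p for p in original if restored.get(p) != original[p]]
    print(f"Restore FAILED for {len(bad)} file(s): {bad[:5]}")
```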

rb-

Whether to a USB drive, an external drive, the cloud, or a private server, back up all that important data somewhere safe. Do this often.

Treat restoring data backups like a fire drill: practice before the real thing happens, so you aren’t fighting both fear and unfamiliarity at the same time.

Stay safe out there!

Related article

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Artificial Intelligence on the Throne

The Internet of Things (IoT) is covering the world with all kinds of devices for the home and industry. Tech prognosticator IDC estimates that by 2025 there will be 41.6 billion IoT devices, and the market research firm predicts those devices will dump 79.4 zettabytes (ZB) of data. One class of home IoT device has gotten a major upgrade from Stanford University, where medical researchers have created a smart toilet by adding artificial intelligence to the throne. Before Stanford, the smart toilet was often the butt of jokes: "smart toilets" offered ambient colored lighting, wireless Bluetooth music sync, heated seats, foot warmers, and automatic opening and closing lids. All nice, but not really smart. The Stanford Precision Health Toilet is really smart: it can diagnose diseases.

Artificial intelligence on the toilet

The Stanford Precision Health Toilet project, led by lead author Seung-min Park, Ph.D., published "A mountable toilet system for personalized health monitoring via the analysis of excreta" in the journal Nature Biomedical Engineering. In it, the team describes a toilet designed to detect early warning signs of cancer and other diseases, which they believe will be useful for people at increased risk of developing certain health issues. Dr. Gambhir, a Stanford professor, chair of radiology, and the senior author of the research paper, says the toilet can currently measure 10 different biomarkers. The device is fitted inside a regular toilet bowl and is connected to an app for evaluation. Dr. Gambhir envisions it as part of an average home bathroom; the sensors would be an add-on that's easily integrated into "any old porcelain bowl."

The extra-smart toilet uses cameras and test strips to collect number one and number two samples, then analyzes both your pee and poo with artificial intelligence to generate diagnoses, a growing trend in the medical industry. Stanford News says the smart toilet's algorithms can distinguish normal "urodynamics" (flow rate, stream time, and total volume, among other parameters of urine) and healthy stool consistencies from those that are unhealthy. Changes in urine can reveal multiple disorders. The dipsticks can be used to analyze white blood cell count, consistent blood contamination, and certain levels of proteins that can signal a spectrum of diseases, including infection, irritable bowel syndrome, kidney failure, bladder cancer, and prostate cancer.

A unique biometric factor

The toilet's built-in identification system uses fingerprints and analprints to identify users and match them to their data. Apparently, analprints turn out to be a unique biometric factor, like fingerprints or iris prints. Professor Gambhir said, "We know it seems weird, but as it turns out, your anal print is unique." Stanford says no human will see your analprint biometric data. If the artificial intelligence detects something questionable, the smart toilet's app would alert the user's healthcare team to conduct a full diagnosis and further tests.

The researchers are planning upgrades to the Precision Health Toilet. Mr. Park told The Verge the upcoming number two version of the toilet will help detect tumor DNA and viral RNA to help track the spread of diseases like COVID-19. Dr. Gambhir told NakedSecurity his team is working to customize the toilet's tests to fit a user's individual needs. For example, a diabetic's smart toilet could monitor glucose in the urine, or a person with a family history of bladder or kidney cancer could benefit from a smart toilet that monitors for blood. The Stanford researchers tested the toilet, and more than half of their pilot test subjects were comfortable using it: 37% were "somewhat comfortable" and 15% were "very comfortable" with the idea of "baring it all in the name of precision health."

rb-

Using analprints to match your poo with you is based on "work" by 20th-century surrealist painter Salvador Dali. Stanford's Gambhir pointed out in an interview with Bioengineering that Dali studied anal creases for his unconventional erotic art (NSFW). Dr. Gambhir assures us that the health data would be stored with "privacy protections" in "secure, cloud-based systems." Followers of the Bach Seat know that a cloud-based system is also known as "somebody else's computer." That sounds like a bad idea. We know cloud-based storage can be very leaky, and healthcare systems have come under increased attack during the COVID pandemic.

Another problem with the ultra-smart toilet: if the FBI gets hold of this data, they could literally be up in everybody's business. The Feds could track people around the world, coming and going, by adding analprints to their massive facial recognition surveillance database. Dr. Gambhir is quoted by NakedSecurity,

We have taken rigorous steps to ensure that all the information is de-identified when it’s sent to the cloud and that the information – when sent to health care providers – is protected under [HIPAA],… 

NakedSecurity points out that time and time again, Big Data can be dissected, compared, and contrasted to draw inferences about individuals. In other words, it is not hard to re-identify people from anonymized records, be they records of location tracking, faceprints, or, now, anuses. Dr. Gambhir reminds us that while the Stanford Precision Health ultra-smart Toilet has clear benefits as a diagnostic tool, it should not be a replacement for a doctor.

Stay safe out there!

Related article

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Veeam Backup Bought

In a move to improve its U.S. market share, Veeam Software has agreed to be bought by private equity firm Insight Partners. The deal, valued at $5 billion, is Insight’s second major acquisition of 2020. Veeam is a cloud-focused data protection, backup, and disaster recovery software company.

Backup and disaster recovery company

Veeam was founded in 2006 and owned by Russians Andrei Baronov and Ratmir Timashev. The firm has grown to 365,000 customers worldwide and annual sales of more than $1 billion by capitalizing on the VMware-led server virtualization boom. As part of the takeover, the founders will leave the firm, and Veeam will become a U.S. company based in New York. The company had been based in Baar, Switzerland.

Veeam’s products include backup solutions, cloud security offerings, and cloud data management. Veeam’s cloud data management portfolio consists of Veeam Backup for Amazon Web Services (AWS), Veeam Backup for Microsoft Office 365, Veeam Universal License (VUL), and Veeam Backup for Microsoft Azure.

Private equity plans

The private equity company has a three-stage program to help the companies in which it invests grow, Mike Triplett, a managing director of Insight Partners and new Veeam board member, told CRN: the Startup stage, focused on companies looking for early growth in their markets; the ScaleUp stage, for companies with strong businesses; and the Corporate stage, for companies ready for IPOs or other exits.

ZDNet says Veeam is in the second, "ScaleUp," stage. With customers now utilizing hybrid cloud setups across AWS, Azure, IBM, and Google, the firm’s "Act II" is to capitalize on a growing need for cloud data management across these environments. Mr. Triplett claims Insight Partners can bring the right resources to bear to move Veeam from the "ScaleUp" stage to the "Corporate" stage.

Other Insight Partners investments

Insight Partners also owns other data protection companies, including Unitrends and Spanning. In addition to data protection, the VC has invested heavily in cybersecurity and MSP-friendly technology markets. Other key Insight Partners investments include:

rb-

Expect to see lots of PE activity this year (decade?). Channele2e reports that private equity investors are sitting on a record $1.5 trillion in cash. With a war chest like that, it is no wonder private equity firms and hedge funds have a bad reputation. VC firms have a history of acquiring businesses, loading them up with debt, and cutting staff to boost profits; the most recent examples are Sears and Toys R Us. Channele2e points out that U.S. presidential candidate Elizabeth Warren is calling for new private equity restraints to combat “legalized looting.”

I have seen firsthand that Veeam has a Russian problem. Back in the day, when I was in shared technical services, I tried to replace an HP LTO2 tape library (PDF) with a Veeam solution, and the powers-that-were did not want Veeam. We spent a lot more money to maintain the old HP LTO2 technology instead.

Related article

 

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Black Hole Data

The first image of a black hole was published on April 10, 2019. The black hole, M87*, at the center of the Messier 87 galaxy, is located 53 million light-years from Earth. NASA says a black hole is an extremely dense object from which no light can escape. Anything that comes within a black hole’s “event horizon” will be consumed because of the black hole’s unimaginably strong gravity.

the first image of a black hole

By its very nature, a black hole cannot be seen. The bright ring in the picture is the event horizon, the point where an object approaching a black hole is unable to escape its gravitational pull. Objects that pass into the event horizon undergo spaghettification, a process popularized by Stephen Hawking in which gravitational forces stretch the object out like a piece of pasta. The M87* image shows a silhouette of the black hole against the glow of the event horizon, captured by researchers at the Event Horizon Telescope (EHT).

APEX Atacama Pathfinder Experiment antenna.

The EHT is the brainchild of Shep Doeleman, the director of the EHT and an astronomer at the Harvard-Smithsonian Center for Astrophysics. It is a virtual global array of eight ground-based radio telescopes. The EHT captured around 3.5 PB of data for the black hole image in April 2017; it then took two years to correlate the data into the image. The EHT team had to figure out not only intergalactic science but also massive information technology problems, problems pretty typical for enterprise IT professionals, only bigger.

According to an article at SearchDataBackup, each EHT telescope can record data at a rate of 64 Gbps, and each observation period can last more than 10 hours. The author calculated that each site generated around half a petabyte of data per run. The distributed locations included volcanoes in Hawaii and Mexico, mountains in Arizona and the Spanish Sierra Nevada, the Chilean Atacama Desert, and Antarctica. The sites were kept in sync using precise atomic clocks and GPS to carefully time the observations.

The data from each telescope was recorded at 16 Gbps per recorder unit and distributed among a total of 32 hard disk drives, grouped into 4 modules of 8 disks each. By running 4 recorder units in tandem, each site reached a total recording rate of 64 Gbps.
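A quick back-of-the-envelope check of those figures (my arithmetic, not the article's):

```python
# Sanity check of the EHT recording figures quoted above.
GBPS = 1e9  # bits per second

recorders = 4
rate_each_gbps = 16
site_rate_bps = recorders * rate_each_gbps * GBPS  # 64 Gbps per site

hours = 10  # observation periods can run longer than this
volume_pb = site_rate_bps * hours * 3600 / 8 / 1e15

print(f"Site rate: {site_rate_bps / GBPS:.0f} Gbps")
print(f"Data per {hours}-hour run: {volume_pb:.2f} PB")
# ~0.29 PB in 10 hours; a run of 17+ hours reaches the "around half
# a petabyte" per site that the article cites.
```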

Sites making up the virtual Event Horizon Telescope.

 

One problem the EHT ran into was the failure rate of traditional hard drives in the extreme telescope locations. ComputerWorld reports that 28 of 32 conventional hard drives failed at the Sierra Negra telescope, atop an extinct volcano in Mexico.

SearchDataBackup says the solution was helium hard drives. Hermetically sealed helium drives are self-contained environments, so they could survive the extremes in which the EHT’s telescopes operated. The EHT first deployed helium hard drives in 2015. EHT data scientist Lindy Blackburn said the EHT now uses about 1,000 helium drives of up to 10 TB each from Western Digital, Seagate, and Toshiba. He told SearchDataBackup:

The move to helium-sealed drives was a major advancement for the EHT … Not only do they perform well at altitude and run cooler, but there have been very few failures over the years. For example, no drives failed during the EHT’s 2017 observing campaign.

The amount of data collected by the EHT was too much to send over the Internet, so the researchers went old-school and used FedEx, sneakernet-style, to ship the data for processing. Geoffrey Bower, an astronomer in Hawaii, told ScienceNews that mailing the disks is always a little nerve-wracking; so far, there have been no major shipping mishaps. But the cost and logistics of tracking and maintaining a multi-petabyte disk inventory are also challenging, so the EHT is always on the lookout for another way to move petabyte-scale data.
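Sneakernet sounds quaint, but the arithmetic favors it. A rough Python sketch (the two-day transit time and the 1 Gbps uplink are my illustrative assumptions, not figures from the article):

```python
# Effective bandwidth of shipping ~3.5 PB of disks vs. uploading it.
PB = 1e15

data_bits = 3.5 * PB * 8
transit_seconds = 2 * 24 * 3600  # assumed two-day courier shipment

print(f"Shipping: ~{data_bits / transit_seconds / 1e9:.0f} Gbps effective")
# ~162 Gbps, far beyond any practical uplink.

uplink_bps = 1e9  # assume a generous dedicated 1 Gbps connection
print(f"Uploading at 1 Gbps: ~{data_bits / uplink_bps / 86400:.0f} days")
# ~324 days of continuous, flawless transfer.
```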

Cloud computing


SearchDataBackup points out that normally the cloud would be a good option for long-term storage and for unifying data sourced from multiple, globally distributed endpoints. However, Mr. Blackburn told them the cloud was not a cold storage option for the project: the high recording speed and the sheer volume of data captured made it impractical to upload to a cloud. He explained, “At the moment, parallel recording to massive banks of hard drives, then physically shipping those drives somewhere is still the most practical solution.”

The data collected on the helium hard disk drive packs was processed by a grid computer made of about 800 CPUs, all connected through a 40 Gbps network, at the MIT Haystack Observatory in Massachusetts and the Max Planck Institute for Radio Astronomy in Germany.

Katie Bouman, the MIT student who developed the algorithm that pieced together the data from the EHT’s disk drives.

Geoff Crew, co-leader of the EHT correlation working group at Haystack Observatory, told SearchDataBackup it is also impractical to use the cloud for computing. Mr. Crew said:

Cloud computing does not make sense today, as the volume of data would be prohibitively expensive to load into the cloud and, once there, might not be physically placed to be efficiently computed.

The EHT scientists built algorithms that converted sparse data into images. They developed a way to cut the number of possible images by sorting out which results were physically plausible and which were wildly unlikely, making it easier to create the images.

The Haystack VLBI Correlator grid computer at the MIT Haystack Observatory.

Converting sparse data into images matters beyond astronomy. Mr. Blackburn told 538 that the problem comes up in other areas as well: it occurs in medical imaging, when doctors use MRIs to convert radio waves into pictures of your body, and it is a key part of self-driving cars, which rely on computer vision to “see” everything from potholes to people.

Just like any enterprise, the EHT had to find a workable method of data protection, and that includes deciding what won’t be protected. The EHT has not found a cost-effective way to replicate or protect the raw radio signal data from the telescope sites. However, once the data has been processed and reduced to tens of terabytes, it is backed up on-site on several different RAID systems and on Google Cloud Storage. Mr. Crew told SearchDataBackup:

The reduced data is archived and replicated to a number of internal EHT sites for the use of the team, and eventually, it will all be publicly archived. The raw data isn’t saved; we presently do not have any efficient and cost-effective means to back it up.

Mr. Blackburn said the raw data isn’t worth backing up: given the complexity of protecting such a large amount of data, it would be simpler to run another observation and gather a new set. Mr. Blackburn said, “Backing up original raw data to preserve every bit is not so important.”

Mr. Blackburn said he can’t seriously consider implementing a backup process unless it is “sufficiently straightforward and economical.”

Instead, he said he is looking at where technology might be in the next five or ten years to find the best method of handling petabyte-scale raw data from the telescopes. Mr. Blackburn told SearchDataBackup:

Right now, it is not clear if that will be continuing to record to hard drives and using special-purpose correlation clusters, recording to hard drives and getting the data as quickly as possible to the cloud, or if SSD or even tape technology will progress to a point where they are competitive in both cost and speed to hard disks.

rb-

The image of the black hole validated Einstein’s general theory of relativity and proved that enterprise-class IT can solve intergalactic problems.

The EHT team had to figure out how to save, move, and back up massive quantities of data and, of course, do more with less. EHT’s Geoff Crew summed up the problem most IT pros have: “Most of our challenges are related to insufficient money, rather than technical hurdles.”

Related articles
  • Trolls hijacked a scientist’s image to attack Katie Bouman. They picked the wrong astrophysicist. (MSN)

 

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

Cloud Storage, Back-Up Bust

These are heady times for fans of cloud storage. In case you haven’t been paying attention, the cloud file storage corner of the IT universe has heated up over the past couple of months. Dropbox, Microsoft (MSFT), Google (GOOG), and Apple (AAPL) have been leapfrogging each other in an apparent effort to buy mind-share.

Dropbox recently announced that its Dropbox Pro plan will now offer 1TB of capacity for $9.99 a month, or $99 for a full-year subscription. Paul Mah at FierceCIO says this is a significant reduction (a quick price-per-gigabyte check follows the list), especially given that recent monthly Dropbox Pro storage prices were:

  • $9.99 for 100GB,
  • $19.99 for 200GB, and
  • $49.99 for 500GB of storage.
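To see how big the cut is, compare price per gigabyte across the old tiers and the new plan (a quick Python sketch of the arithmetic):

```python
# Price-per-GB comparison of old Dropbox Pro tiers vs. the new plan.
old_tiers = {100: 9.99, 200: 19.99, 500: 49.99}  # GB: $/month
new_gb, new_price = 1000, 9.99

for gb, price in sorted(old_tiers.items()):
    print(f"Old: {gb:4d} GB at ${price / gb:.3f}/GB/month")
print(f"New: {new_gb:4d} GB at ${new_price / new_gb:.3f}/GB/month")
# The new plan works out to about $0.01/GB, roughly one-tenth the
# old 100GB tier's $0.10/GB.
```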

Mr. Mah says the latest move by Dropbox allows it to stay on par with the latest price cuts from Apple iCloud in September, Google Drive in June, and Microsoft OneDrive in May.

In September, Apple updated its porous iCloud storage plans. CNET says the basic 5 gigabytes of storage remains free, though prices for paid tiers were significantly reduced and larger storage options were added. The new monthly iCloud storage costs are:

  • Free for 5GB,
  • $0.99 for 20GB,
  • $3.99 for 200GB,
  • $9.99 for 500GB and
  • $19.99 for 1TB.

Previously, 10GB of storage cost $20 per year, 20GB cost $40 per year, and 50GB cost $100 per year.

At Microsoft, the cloud-based file storage game also changed. According to Redmond Magazine, the improvements include a new file upload limit (10GB max), an easier way to share links to OneDrive folders, and support for folder drag-and-drop using the Google Chrome browser. Microsoft is also working on speeding up file synchronization with OneDrive. The updated per-month prices for OneDrive are:

  • Free for 15GB,
  • $1.99 for 100GB,
  • $3.99 for 200GB,
  • $5.99 for 1TB.

In an attempt to trump MSFT, Google released Google Drive for Work, a paid service targeted at business users and priced at $10 per user per month. FierceCIO noted that the new service offers unlimited storage, the ability to upload files of up to 5TB in size, and access to productivity apps such as Docs, Sheets, Slides, Hangouts, and Sites. Importantly, Google also announced that files uploaded to Google Drive can be encrypted and will stay that way in transit and at rest on its servers. Here are the current prices per month for Google Drive space, according to CNET:

  • Free for 15GB,
  • $1.99 for 100GB,
  • $9.99 for 1TB,
  • $99.99 for 10TB,
  • $199.99 for 20TB and
  • $299.99 for 30TB.

Mr. Mah argues that the price drops are good news for consumers. The extra space is certainly useful for users who rely on it for long-term file archives or for backing up large local files. But the author correctly argues that 1TB of online storage does not deliver the same value to business users. The reason is simple: cloud storage is a terrible backup solution for large volumes of data, especially if you need to get the data back quickly.

Mr. Mah observed that cloud storage vendors do not publish any guaranteed upload or download speeds. This is noteworthy considering that 1TB of files can take a very long time to transfer over the Internet.

He explains that downloading 1TB worth of files with zero data overhead (which is impossible) across a reasonable 10 Mbps broadband connection would take over 222 hours, or close to 10 days of continuous downloading. You can be assured that real-life conditions on your broadband connection would likely at least double or triple that.
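The arithmetic behind that figure is easy to verify (a minimal Python check):

```python
# Verify the 1TB-over-10Mbps download estimate quoted above.
tb_bytes = 1e12   # 1 TB
link_bps = 10e6   # 10 Mbps broadband connection

hours = tb_bytes * 8 / link_bps / 3600
print(f"{hours:.0f} hours, or about {hours / 24:.1f} days")
# ~222 hours, roughly 9.3 days of continuous downloading.
```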

And that assumes the cloud service provider isn’t experiencing any congestion on its end, which is not something cloud vendors offer any guarantees about. To run your own numbers, check out this nifty online calculator.

So while there is no question about the value of cloud storage for data synchronization across multiple devices, businesses need to understand that the cloud just isn’t ideal for data backup. Mr. Mah concludes that users should use their 1TB of cloud space for all it’s worth, but users and firms need proper local backups for important files, as well as for those that need to be restored quickly.

Related articles

 

Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.