Redmond’s Terrible, Horrible, No Good, Very Bad month continues. The WannaCry ransomware hit mostly Windows 7 machines, and now researchers from the Russian information security company Aladdin RD have discovered a new bug that can slow down and crash Microsoft (MSFT) Windows Vista, Windows 7, and Windows 8 PCs, but so far does not appear to affect Windows 10.
In a throwback to the Windows 95 and 98 era, Ars Technica reports that certain specially crafted filenames can make the operating system lock up or occasionally crash with a blue screen of death. Ars reports that the bug allows a malicious website to try to load an image file with “$MFT” in the directory path. Windows uses “$MFT” for the special metadata files used by the NTFS file system, and the affected systems do not handle this directory name correctly.
The file exists in the root directory of each NTFS volume, but the NTFS driver handles it in special ways. Ars explains that it’s hidden from view and inaccessible to most software. Attempts to open the file are normally blocked, but if the filename is used as if it were a directory name—for example, trying to open the file c:\$MFT\123—then the NTFS driver takes out a lock on the file and never releases it. Every subsequent operation sits around waiting for the lock to be released. Forever. This blocks all other attempts to get access to the file system, and so every program will start to hang, rendering the machine unusable until it is rebooted.
Ars says that web pages that use the bad filename, in an image source for example, will trigger the bug and make the machine stop responding. Depending on what the machine is doing concurrently, it will sometimes blue screen. Either way, you are going to need to reboot it to recover. Some browsers will block attempts to access these local resources, but Internet Explorer will try to open the bad file.
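The bad path is easy to recognize programmatically. As a minimal sketch (a hypothetical defensive filter, not any actual Microsoft or browser mitigation), a proxy or upload handler could reject any path that names an NTFS metadata file before it ever reaches the filesystem driver:

```python
# Reserved NTFS metadata filenames that should never appear as a
# directory component in a user-supplied Windows path.
RESERVED_NTFS_NAMES = {"$mft", "$mftmirr", "$logfile", "$volume", "$bitmap"}

def references_ntfs_metadata(path: str) -> bool:
    """Return True if any path component names an NTFS metadata
    file such as $MFT (case-insensitive)."""
    # Normalize both separator styles used on Windows.
    components = path.replace("\\", "/").split("/")
    return any(part.lower() in RESERVED_NTFS_NAMES for part in components)

print(references_ntfs_metadata(r"c:\$MFT\123"))       # True
print(references_ntfs_metadata(r"c:\images\cat.png"))  # False
```

The check treats “$MFT” as dangerous only when it appears as a directory or file component, which matches the trigger condition Ars describes (the name being used as if it were a directory).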
Ars couldn’t immediately reproduce the problem remotely (by sending IIS a request for a bad filename), but wrote that it wouldn’t be surprising if certain configurations or trickery were enough to cause the same problem.
The Verge successfully tested the bug on a Windows 7 PC with the default Internet Explorer browser. Using “c:\$MFT\123” as a filename in a website image, their test slowed the machine down to the point that they had to reboot to get the PC working again.
A Microsoft spokesperson told Engadget that the company is looking into the matter and will give an update as soon as it can.
“Our engineers are currently reviewing the information. Microsoft has a customer commitment to investigate reported security issues and provide updates as soon as possible.”
The Redmond boys also had to release an emergency out-of-band update for the Malware Protection Engine, aka Windows Defender. Two Google security researchers discovered the “crazy bad” flaw, which they claimed was “the worst Windows remote code exec in recent memory.” The TechNet article says the vulnerability they patched would allow remote code execution if the Microsoft Malware Protection Engine scans a specially crafted file (CVE-2017-0290). To MSFT’s credit, they fixed the bug and released the patch within a week of being notified.
Early reports called this bug an attack vector. On its own, however, it is a denial-of-service flaw that forces a reboot. The danger is that it could be bundled with more dangerous malware, forcing the user to reboot and allowing the attack payload to load.
Computer Economics says that too few organizations adequately staff the project manager function and, as a result, too many projects fall short of objectives, miss deadlines, or overrun budgets. In their report, IT Project Management Staffing Ratios (Reg. Req.), the research firm found that project managers as a percentage of the IT staff dropped slightly at the median from 4.8% in 2015 to 4.5% in 2016.
The Irvine, CA-based firm speculates that there are a variety of reasons for the recent decline in the percentage of project managers. Like other IT functions, the staffing ratio for project managers is in flux. The percentages of staff in certain other IT job categories are growing, with higher percentages going to application development, business analytics, and security; this, by definition, pushes down the percentage in project management.
Other reasons Computer Economics cites include improved project management tools, which may allow project managers to handle more projects. It also appears a small number of companies may be abandoning the dedicated role of project manager, combining it with the role of lead developer, for example. The growing popularity of agile development may also be contributing to the decline of project management as a discrete function. However, this decline is recent and may not yet reflect a trend. Tom Dunlap, research director for Computer Economics, said,
Despite the slight drop in the percentage of PMs, I’d be surprised if that turned into a long-term trend. With the rapidly changing nature of technology in the enterprise and the generally bad track record of IT departments getting projects in on time and on budget, I expect the percentage of PMs to go up.
Compare this data to what PMI reported in their Project Management Job Growth and Talent Gap 2017–2027 (PDF) report, where they make the case for a growing job market for PMs. The report claims that through 2027, the global project management-oriented labor force in seven project-oriented sectors is expected to grow by 33 percent, or nearly 22 million new jobs.
Back in April, the tech sector was leaping for joy when Tesla’s stock market valuation passed Ford’s and GM’s. Rumor has it that Tesla is the future of transportation and Elon Musk is the king of cars because they took more orders for cars that did not burn up or crash out of control. Yet in 2016 Tesla delivered only 76,000 vehicles, while Ford sold nearly 1 million F-Series trucks.
Despite the happy dances in Silicon Valley, which fancies itself the logical successor to Detroit as the capital of American innovation, new research says not so fast. The west coast upstarts — Uber, Google (GOOG), and Tesla (TSLA) — still have a lot of catching up to do when it comes to outpacing Michigan manufacturers. The Verge points us to Navigant Research, whose newly released “leaderboard” report ranks autonomous vehicle players not just on their ability to make a car drive itself, but on their ability to bring that car to the mass market.
Navigant Research scored 18 companies working on self-driving technology on 10 different criteria related to strategy, manufacturing, and execution. The report combined all that into an overall score to get a sense of who’s ahead and who’s not. General Motors (GM) and Ford (F) are currently leading the pack, with Daimler and Renault-Nissan close behind. Those four companies make up Navigant’s “leader” category. In other words, when you climb into your first self-driving car in 2021, it will almost certainly be built by one of those four companies.
Most everyone else is in the “contender” category. This includes car companies like BMW, PSA, Hyundai, Toyota, Tesla, and Volkswagen; suppliers like Delphi and ZF; and tech firms like Alphabet’s Waymo. Further down the list, in the “challengers” category, are companies like Honda, nuTonomy, Baidu, and Uber.
Sam Abuelsamid, a senior research analyst at Navigant and one of the authors of the report, told the Verge the reason Detroit is beating Silicon Valley so badly in this all-too-crucial race to get autonomous vehicles on the road is experience. Silicon Valley, he says, “…. will have to do deals with someone to get actual vehicles.”
Alphabet’s Waymo scores top marks for technology but lags in the production strategy and sales, marketing, and distribution buckets. The company plans to work with legacy automakers to put its tech in cars, but has not yet struck any major deals. Mr. Abuelsamid explained in an email to the Verge that Waymo is in the best position of the contenders.
They have almost every piece of this—except the product strategy … Waymo has what is arguably the best technology right now, although they probably aren’t that far ahead of the leading [original equipment manufacturers], but they will have to do deals with someone to get actual vehicles.
Despite Uber’s high profile, a recent study showed that only 15% of U.S. consumers have tried a ride-hailing app like Uber. Uber also has a safety problem – Uber drivers have been charged with murder and violent crimes against their customers. In the Navigant research, Uber wallows near last place thanks to low grades for distribution, product portfolio, and staying power—and because Uber makes neither cars nor money. In fact, its key strength—that it already operates a global fleet of shared vehicles—may not be enough here. “It’s a lot easier for the company that actually has the infrastructure to create vehicles to recreate what Uber’s done, than the other way around,” Mr. Abuelsamid says.
The Navigant analyst explained scale matters in the auto industry.
All the little [Silicon Valley] startups may have some interesting ideas, but they don’t have the resources to produce something sufficiently robust to be commercially viable. If they have something good to offer, their best bet is an acquisition
The “legacy automakers” have engaged in mergers and acquisitions and early maneuvering in the autonomous vehicle arena as Mr. Abuelsamid stated. The report predicts that big companies will buy little startups to leverage their technology and expertise to round out the much larger-scale enterprise of developing, testing, validating, producing, and distributing self-driving cars.
Wired says Ford and GM both score in the low to mid 80s on the technology front; it’s their old-school skills that float them to first and second place. They’ve each spent more than a century developing, testing, producing, marketing, distributing, and selling cars. Plus, each has made strategic moves to bolster weak points.
GM recently acquired Cruise Automation, a San Francisco-based autonomous vehicle technology maker, in a deal valued at more than $1 billion. GM said the acquisition will allow it to “accelerate” its autonomous vehicle development efforts.
Fiat Chrysler has partnered with Alphabet to jointly test autonomous technology in Pacifica minivans, and Alphabet is opening a 53,000-square-foot self-driving car development center near Detroit in Novi, MI.
GM has invested $500 million in ride-sharing provider Lyft to beef up its ridesharing service. In the “long-term strategic alliance” the companies will work on what they call “on-demand autonomous vehicles.” For now, the deal means GM cars will be the “preferred” vehicle used by Lyft drivers who rent their cars in various U.S. cities. Those vehicles will tap into GM’s OnStar service, while GM and Lyft promised “personalized mobility services and experiences,” but did not elaborate.
Ford, meanwhile, recently announced a $75 million investment in LiDAR maker Velodyne to “quickly mass-produce a more affordable automotive LiDAR sensor” so the company can launch a fleet of self-driving ride-sharing cars by 2021.
Ford has also acquired SAIPS, an Israeli machine learning firm, to further strengthen its expertise in artificial intelligence and computer vision. SAIPS has developed algorithmic solutions in image and video processing, deep learning, signal processing, and classification. This expertise will help Ford autonomous vehicles learn and adapt to their surroundings.
Ford announced that it would take part in a $6.6 million seed funding round for Civil Maps to further develop high-resolution 3D mapping capabilities. This provides Ford another way to develop high-resolution 3D maps of autonomous vehicle environments. Ford has also agreed to acquire Chariot, an on-demand shuttle service based in San Francisco.
Mr. Abuelsamid predicts that early on, you probably won’t be buying a self-driving car at a dealership, but rather riding in one that you hail through an app-based service like Uber or Lyft. These vehicles will be part of a fleet owned by a manufacturer, like Ford or GM. Fleet ownership will help manufacturers manage the issues self-driving vehicles are likely to encounter early on, like insurance for the inevitable accidents. Navigant’s Abuelsamid says
With all of that in mind, it’s far easier for a manufacturer to replicate the sort of logistics platform that Uber or Lyft have than it is for those companies to invest in and create the development, manufacturing, and service infrastructure that [original equipment manufacturers] have
Mr. Abuelsamid noted that Tesla ranked pretty far down the “contender” list because Elon Musk’s company is “lacking in quality, distribution, financial stability and their [Autopilot] 2.0 hardware will never be more than limited Level 4-capable (PDF) at best.” In other words, Musk would be well advised not to start gloating about his company being valued higher than the OGs Ford and GM quite yet.
- Wall Street has lost its mind when it comes to Ford (F) (businessinsider.com)
What time is it? If you looked at the lower right corner of your Windows PC screen, you know what time it is. That is good enough for most people, but followers of the Bach Seat want to know more. How does Microsoft know what time it is? Microsoft and everybody else uses the Internet Engineering Task Force (IETF) standard protocol called Network Time Protocol (NTP), currently specified in RFC 5905.
NTP is one of the oldest Internet protocols still in use. NTP was designed by UMich alum David Mills at the University of Delaware. NTP can maintain time to within tens of milliseconds over the public Internet, and better than one millisecond accuracy on a LAN. Like many other things in the network world, NTP is set up as a hierarchy. At the top of the tree are “Atomic Clocks” (Stratum 0). Corporations, governments and the military run atomic clocks.
Atomic clocks are high-precision timekeeping devices based on the element cesium; the cesium-133 atom resonates at exactly 9,192,631,770 Hertz, a little over nine billion oscillations a second. Knowing that oscillation frequency and measuring it in a device creates an incredibly accurate timekeeping mechanism. Atomic clocks generate a very accurate interrupt and timestamp on a connected Stratum 1 computer. Stratum 0 devices are also known as reference clocks.
Stratum 1 – These are computers attached to stratum 0 devices. Stratum 1 servers are also called “primary time servers”.
Stratum 2 – These are computers that synchronize over a network with stratum 1 servers. Stratum 2 computers may also peer with other stratum 2 computers to offer more stable and robust time for all devices in the peer group.
Stratum 3 – These are computers that synchronize with stratum 2 servers. They use the same rules as stratum 2 and can themselves act as servers for stratum 4 computers, and so on.
Once synchronized with a stratum 1, 2, or 3 server, the client updates its clock about once every 10 minutes, usually requiring only a single message exchange. NTP uses User Datagram Protocol (UDP) port 123. The NTP timestamp is 64 bits: a 32-bit field for seconds and a 32-bit field for the fractional second. That gives NTP a time scale of 2^32 seconds (136 years) and a theoretical resolution of 2^-32 seconds (about 233 picoseconds). NTP uses an epoch of January 1, 1900, so the first rollover will be on February 7, 2036.
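The arithmetic behind those numbers can be sketched in a few lines of Python (an illustration of the timestamp layout only, not a full NTP implementation). The 2,208,988,800-second offset is the gap between the 1900 NTP epoch and the 1970 Unix epoch:

```python
# Seconds from 1900-01-01 (NTP epoch) to 1970-01-01 (Unix epoch):
# 70 years plus 17 leap days = 25,567 days * 86,400 s.
NTP_UNIX_DELTA = 2_208_988_800

def ntp_to_unix(ntp_timestamp: int) -> float:
    """Convert a 64-bit NTP timestamp (32-bit seconds, 32-bit
    fraction-of-a-second) to Unix time in seconds."""
    seconds = ntp_timestamp >> 32           # upper 32 bits: whole seconds
    fraction = ntp_timestamp & 0xFFFFFFFF   # lower 32 bits: fractional second
    return (seconds - NTP_UNIX_DELTA) + fraction / 2**32

# The Unix epoch expressed as an NTP timestamp converts back to 0.0:
print(ntp_to_unix(NTP_UNIX_DELTA << 32))  # 0.0
```

Because the seconds field is only 32 bits, it wraps 2^32 seconds (about 136 years) after January 1, 1900, which is where the February 7, 2036 rollover date comes from.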
Microsoft (MSFT) has a mixed history of complying with NTP. All Windows versions since Windows 2000 include the Windows Time service (“W32Time”), which was originally implemented to support the Kerberos version 5 authentication protocol; Kerberos requires time to be within 5 minutes of the correct value to prevent replay attacks. The NTP version in Windows 2000 and XP violates several aspects of the NTP standard. Beginning with Windows Server 2003 and Vista, MSFT’s NTP implementation was reliable to within 2 seconds, and Windows Server 2016 can now support 1 ms time accuracy.
In 2014 work began on a new NTP client, ntimed. As of May 2017 there had been no official release, but ntimed can synchronize clocks reliably under Debian and FreeBSD. It has not been ported to Windows or Apple (AAPL) macOS.
Accurate time across a network is important for many reasons; discrepancies of even fractions of a second can cause problems. For example:
- Distributed procedures depend on coordinated times to make sure proper sequences are followed.
- Authentication protocols and other security mechanisms depend on consistent timekeeping across the network.
- File-system updates carried out by a number of computers depend on synchronized clock times.
- Network acceleration and network management systems also rely on the accuracy of timestamps to measure performance and troubleshoot problems.
- Each block in a blockchain includes a timestamp representing the approximate time the block was created.
NTP has known vulnerabilities. The protocol can be exploited and used in distributed denial of service (DDoS) attacks for two reasons: First, it will reply to a packet with a spoofed source IP address; second, at least one of its built-in commands will send a long reply to a short request.
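Part of what makes NTP attractive for amplification is how small a request is. As a sketch for illustration only (this builds an ordinary client query, not attack traffic), a minimal SNTP mode-3 client request is just 48 bytes, with only the first header byte set:

```python
import struct

def build_sntp_request() -> bytes:
    """Build a minimal 48-byte SNTP client request. The header byte
    packs Leap Indicator = 0, Version = 4, Mode = 3 (client); all
    remaining fields (timestamps, etc.) are left zero."""
    first_byte = (0 << 6) | (4 << 3) | 3  # 0b00_100_011 = 0x23
    return struct.pack("!B", first_byte) + b"\x00" * 47

packet = build_sntp_request()
print(len(packet))  # 48
```

A server replies to this tiny UDP datagram with a full response, and legacy diagnostic commands such as monlist return far more data than that, so a spoofed source address lets an attacker direct a much larger reply at a victim.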
More vulnerabilities were recently discovered in NTP. SearchSecurity.com reports that security researcher Magnus Stubman discovered a vulnerability and, instead of going public, took the mature route and privately informed the community of his findings. Mr. Stubman wrote that the vulnerability could allow unauthenticated users to crash ntpd with a single malformed UDP packet, which causes a null pointer dereference. The article explains that an attacker could craft a special UDP packet targeting NTP, resulting in an exception bypass that crashes the process. A patch to remediate the vulnerability (NTP 4.2.8p9) was released by the Network Time Foundation Project.
This is a Windows-only vulnerability at this time. The author urges anyone running the NTP daemon on Windows systems to patch it as soon as possible. This particular DoS attack against NTP could incapacitate a time server and cause havoc in the network. The easiest fix is to apply the NTP patch, the article states.
NTP is important to your network and patching and protecting it should be a priority. The threat to your environment is real. If NTP is not patched, an attacker could take advantage of the chaos created by this vulnerability to hide their tracks since timestamps on files and in logs won’t match.
Way back in the day, when I was a network administrator, I inherited a network where a directory services container was frozen. It seems that time had never been properly set up on the server holding the replica, and as time passed the server’s time drifted away from network time until at some point we could not make changes or force a replica update. That meant a late-night call to professional services to kill the locked objects, then apply DSRepair –xkz (I think) and re-install an R/O replica.
- A ‘leap second’ will make 2016 a little longer (businessinsider.com)
Ralph Bach has been in IT for a while and has blogged from his Bach Seat about IT, careers and anything else that catches his attention since 2005. You can follow me at Facebook and Twitter. Email the Bach Seat here.