Turkey Revenge
The turkeys are pissed this Thanksgiving, and they are seeking revenge.
Germs Infest 60% of Americans' Phones
60% of Americans sleep with their phones, which harbor germs. Regular cleaning with a UV sanitizer or alcohol wipes can help keep your phone and bed germ-free.
Smartphone Sanitizing: A Practical Guide
Securely erase personal data from your old smartphone before recycling. Protect your identity from hackers—easy steps to follow.
Why Soft Skills Matter in Today’s Job Market
Boost your career with essential soft skills like communication, teamwork, and emotional intelligence. Learn why they’re crucial for workplace success.
Protect Yourself: Avoiding Election Season Scams
As we approach election day, we have all received more requests to sign petitions, fill out polls and surveys, and donate to causes and campaigns. Scammers know that political campaigns often ask for your information and money. Fraudsters are taking advantage of this avalanche of election messaging to pose as campaign workers. Be on guard; participating in the democratic process shouldn’t compromise your identity. Try these tips for performing your civic duty this November without getting duped by a scam.
Do your election research.
Doing your election research protects you from election-related scams. Scammers treat elections as opportunities to take advantage of people, and fraudsters may call or email you pretending to raise funds for a specific group or candidate. Therefore, before you donate, make sure you're contributing to a legitimate organization.
Also, take your time. Be wary of any caller or message that uses pressure tactics to raise funds.
Before you give:
- First, check the Federal Election Commission's official list of political action committees to confirm the PAC's legitimacy (a small lookup sketch follows this list).
- Second, if you're giving to a third-party non-profit, consult charity-verification resources to confirm the cause's non-profit status.
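For readers comfortable with a little scripting, here is a minimal sketch of that first check. It assumes the openFEC API's public committee-search endpoint and the shared data.gov DEMO_KEY; the parameters and committee name are illustrative assumptions you should verify against the FEC's API documentation before relying on them.

```python
# Hypothetical sketch: search the FEC's committee registry by name.
# Assumes the openFEC /committees/ endpoint and the data.gov DEMO_KEY.
import requests

def search_fec_committees(name: str) -> list[dict]:
    """Return basic details for registered committees whose names match `name`."""
    resp = requests.get(
        "https://api.open.fec.gov/v1/committees/",
        params={"q": name, "api_key": "DEMO_KEY", "per_page": 5},
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {
            "name": c.get("name"),
            "id": c.get("committee_id"),
            "type": c.get("committee_type_full"),
        }
        for c in resp.json().get("results", [])
    ]

if __name__ == "__main__":
    # "Example Victory Fund" is a made-up placeholder name.
    for committee in search_fec_committees("Example Victory Fund"):
        print(committee)
```

If a caller's PAC does not show up in the FEC registry at all, treat that as a red flag.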
After researching and deciding to donate online, make sure the website is legitimate and the URL starts with "https://". Scammers can create copycat sites that look like the real thing. Alternatively, the safest way to donate is at a local campaign office.
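As a rough illustration of that URL check, here is a minimal Python sketch. The list of "verified" donation domains is a made-up placeholder, not a vetted list; the point is simply that both the scheme and the exact hostname matter.

```python
# Minimal sketch: reject donation links that are not HTTPS or whose host
# is not one you have independently verified. The allowed list below is
# a placeholder for illustration only.
from urllib.parse import urlparse

VERIFIED_DONATION_HOSTS = {"donate.example-campaign.org"}  # placeholder

def looks_safe_to_donate(url: str) -> bool:
    parts = urlparse(url)
    if parts.scheme != "https":          # insist on an encrypted connection
        return False
    host = (parts.hostname or "").lower()
    # Copycat sites often add extra words or swap letters in the domain.
    return host in VERIFIED_DONATION_HOSTS

print(looks_safe_to_donate("http://donate.example-campaign.org/give"))   # False: no HTTPS
print(looks_safe_to_donate("https://donate.example-campaiqn.org/give"))  # False: look-alike domain
print(looks_safe_to_donate("https://donate.example-campaign.org/give"))  # True
```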
Be stingy with your personal information.
Security experts say that identity thieves have used election or voter registration scams to steal personal details. So, you should think twice about signing a petition at the farmer’s market or clicking on that link in the text urging you to register to vote. Suzanne Sando, Senior Analyst at Javelin Strategy & Research, warns, “Scam election-themed texts sneak in between legitimate communications. They take advantage of your sense of urgency and passion about the election, especially since the last few elections have been so emotionally charged.”
If you fill out a voter petition or survey, be picky about what you share.
- Never give out your Social Security or driver’s license number.
- Don’t be afraid to ask if specific fields are required.
- Be wary when a campaign worker or pollster offers you a gift card for filling out a political survey. Political campaigns don't offer prizes or rewards.
- Never give out your financial information, such as credit card numbers or bank account details, when participating in a poll or survey. Pollsters may ask for demographic or political affiliation information but should never need more.
AI's impact on the election
Since 2022, there has been an explosion in the use of artificial intelligence (AI) to generate robocalls. Every person in the U.S. is estimated to get 161 robocalls per year. A robocall is an automated phone call that delivers a pre-recorded message. They typically rely on a computerized autodialer, a system that can place multiple calls delivering the same message simultaneously. It’s a robot making a phone call, hence the name “robocall.”
AI can be exploited to create sophisticated robocalls that impersonate credible sources, manipulate voter sentiment, or spread misinformation. In response to the spread of this type of fraud, the Federal Communications Commission (FCC) recently made AI-generated calls illegal. It’s essential to be skeptical of any unexpected calls you receive from someone claiming to be a particular political candidate or celebrity, such as Tom Hanks, Taylor Swift, President Biden, or Elon Musk.
Election call spoofing
Another way scammers try to get your information is through spoofed calls. The caller ID on your mobile may say the call is from a campaign or organization’s office, but this can be faked. Spoofing occurs when a person hides behind a phone number that’s not assigned to the phone they’re calling from.
Social Media
Always perform these steps when interacting with a candidate or cause on social media. First, before clicking a link in an election-themed social post, give it a once-over for phishing hallmarks such as blurry images and typos, and hover your mouse over any links before clicking them. Next, be wary before sharing or re-posting election-related content you find online; AI is increasingly being used to spread election disinformation and trick voters, so do your research before sharing anything. Finally, trust your gut: if a post, poll, or petition seems to ask for too much information, don't share yours.
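To make the "hover over the link" advice concrete, here is a small sketch that flags posts where the visible link text shows one domain but the actual href points somewhere else, a classic phishing hallmark. The HTML snippet is invented for illustration.

```python
# Sketch: flag anchor tags whose visible text shows one domain but whose
# href points somewhere else -- a common phishing hallmark.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.mismatches = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            shown = "".join(self._text).strip()
            target = (urlparse(self._href).hostname or "").lower()
            # If the visible text looks like a URL, its domain should match the real target.
            shown_host = (urlparse(shown).hostname or "").lower() if shown.startswith("http") else ""
            if shown_host and shown_host != target:
                self.mismatches.append((shown, self._href))
            self._href = None

# Invented example post: the text claims one site, the link goes to another.
auditor = LinkAuditor()
auditor.feed('<a href="https://register-now.example-scam.net">https://vote.gov</a>')
print(auditor.mismatches)  # [('https://vote.gov', 'https://register-now.example-scam.net')]
```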
rb-
If you practice good cyber hygiene, election-season scams are manageable. It is important to remember several facts about voting:
- Your vote matters; this is your chance to voice your opinion and support the causes you care about.
- You must be registered to vote. If you register at a public event, opt to hand-deliver or mail in the required form rather than leave it behind. Better yet, visit Vote.gov or your local election office to register.
- You can only submit your vote at the ballot box or via an absentee ballot.
- Ignore claims that you can register to vote or cast your ballot by phone, text, or email in exchange for sharing your personal information.
Related article
Ralph Bach has been in IT for a while and has blogged from the Bach Seat about IT, careers, and anything else that has caught my attention since 2005. You can follow me on Facebook or Mastodon. Email the Bach Seat here.
Securing Your Data from LinkedIn AI Models
In September 2024, LinkedIn started using your data to train its artificial intelligence (AI). LinkedIn's AI is a large language model (LLM) designed to recognize patterns and connections within data. LinkedIn's generative AI is trained on huge data sets, often scraped from publicly available Internet resources: think news articles, academic research, government reports, and other public information. It needs the data to learn grammar, vocabulary, and context. The more diverse and higher-quality data it collects, the better its predictions and accuracy.
To improve its bot, LinkedIn collects data when you interact with its generative AI, whether composing a post, changing your preferences, or providing feedback. LinkedIn also gathers data when you engage with other people’s posts on LinkedIn. The artificial intelligence training option is turned ‘on’ by default.
Fortunately, LinkedIn added an opt-out option for the LLM training. Unless you opt out, LinkedIn will begin using your data for AI training on November 20. According to LinkedIn's FAQ page,
“opting out means that LinkedIn and its affiliates will not use your personal data or content on LinkedIn to train models going forward, but it does not affect training that has already taken place.”
Microsoft owns LinkedIn, so LinkedIn's "affiliates" are companies owned by Microsoft. Microsoft has a stake in 289 companies, including five artificial intelligence firms. Therefore, based on LinkedIn's FAQ statement, LinkedIn's LLM and those 289 affiliates can use your data.
One of the primary concerns about LinkedIn using your data for AI training is the potential invasion of your privacy. These models often produce outputs based on the data provided during training; a generative AI model will show you rehashed or repurposed versions of its training data as output. Rachel Tobac, CEO of SocialProof Security, told Techopedia,
“It’s likely that elements of your writing, photos, or videos will be merged with other people’s content to build AI outputs.”
Stop LinkedIn from using your data to train its AI
To stop LinkedIn from using your data to train its AI models, follow these steps:
1. Log in to your LinkedIn account.
2. Click your profile picture (the "Me" option) in the top bar.
3. Choose "Settings & Privacy."
4. Select "Data Privacy" from the left sidebar.
5. Under the "How LinkedIn uses your data" section, click "Data for Generative AI Improvement."
6. Toggle the setting to "Off."
These steps prevent LinkedIn from using your data for future artificial intelligence training.
Related article
Ralph Bach has been in IT for a while and has blogged from the Bach Seat about IT, careers, and anything else that has caught my attention since 2005. You can follow me on Facebook or Mastodon. Email the Bach Seat here.
The Ghosts of Mackinac Island
Mackinac Island sits in the Straits of Mackinac, which separate Michigan's Upper and Lower Peninsulas. It has a timeless atmosphere and outstanding natural beauty, and with attractions like Arch Rock, it is one of the Great Lakes region's most scenic and charming destinations. The island has received many awards. In 2024, it was voted the "No. 1 Best Summer Travel Destination" in USA Today's "10Best" Readers' Choice awards, and it was ranked the fifth-best place in America to see fall foliage. Mackinac Island also has a darker side: it is home to many ghosts. In 2021, The Shadowlands Haunted Places Index named Mackinac Island the most haunted place in America.
Mackinac Island is a top-rated destination for tourists and ghosts. More than 100 individual ghosts have been reported on the island, making it one of the most haunted places in Michigan! The island's original inhabitants were the Anishinaabek people (Odawa, Ojibway, and Potawatomi), who held the island as a sacred burial ground. In the late 1600s, however, European expansion drove the native people out. In the early 1780s, during the American Revolutionary War, the British established a base on the island, and it was later the site of two significant battles in the War of 1812. There was even a witch hunt on the island in the 1700s. All that history has made for some pretty diverse ghost reports. Here are a few of the most well-known Mackinac Island ghosts.
Grand Hotel Ghosts
The stately Grand Hotel, with its record-breaking porch, is a serene place to sit and chill. However, the hotel is also well known for its paranormal activity. In 1887, the Grand was built over an old cemetery with so many dug-up skeletons that the excavators lost count. Legend says the construction crew gave up on removing the bodies and built the Grand over the whole site, leaving unsettled spirits to walk the grounds and halls of the Grand Hotel.
One of the more well-known spirits is the "woman in black," who walks her big white dog up and down the hotel's massive front porch after dark. Another ghost is "Little Rebecca." The little girl passed away on the grounds and haunts the fourth floor. She is often spotted floating or walking through the halls before vanishing.
The local favorite is a story about an “evil entity” that appears as a black mass with glowing red eyes. A maintenance man working on the hotel’s theater stage reported that the black mass rushed after him and knocked him off his feet. He awoke two days later and never returned.
Mission Point Ghosts
What is now known as Mission Point Resort began in 1825, when Amanda and William Ferry built a home to "educate" native children. The home evolved into the Moral Re-Armament (MRA) building, another haunted island site.
In 1942, wealthy people on Mackinac Island led the MRA in Michigan, and the group rented the Island Hotel on Mackinac Island. In 1946, supporters bought the Mission Hotel, making Mackinac Island the MRA's world headquarters. The MRA then established the short-lived (1966-1970) Mackinac College.

One of the island's most famous ghosts is Harvey, a student at Mackinac College. Tradition says he was so in love with his girlfriend that he wanted to marry her, but she turned his proposal down.
According to legend, he went into the woods and took his own life. He went missing in February, and it took until July to find his body. Although suicide was the official cause of death, many believe someone else was involved in his death.
Harvey the ghost is said to flirt with women and play practical jokes on men. Others have reported hearing disembodied voices whispering in their ears and feeling watched.
Lucy
The MRA buildings eventually became Mission Point Resort, a destination-style vacation complex, which "Lucy" now haunts. Tradition says that Lucy was suddenly taken ill on the island while her parents were away tending to business in Detroit, and she died before they returned. Locals and tourists report seeing the apparition of a little girl on a Mission Point balcony and hearing a young girl's voice. The SyFy Channel's TV show Ghost Hunters featured Mission Point Resort.
Drowning Pool Ghosts
In the early 1700s, when the fort was in its heyday, many brothels popped up. The good people of Mackinac accused seven women of being witches who enticed unsuspecting soldiers, fur traders, and husbands into their houses. The women were subjected to a trial by water, also known as the "dunking" method: they were tied to rocks and thrown into a lagoon between Mission Point and downtown Mackinac. If they sank, they were deemed innocent; if they floated, they were considered guilty. All seven women sank and drowned, proving their innocence too late and giving the lagoon its name: the Drowning Pool.
The fear of witchcraft in colonial America was deeply rooted in the belief that women who did not conform to the expected roles of purity and chastity were more susceptible to the devil’s influence.
Visitors and residents report splashing, shadows, and dark figures floating above the surface of the Drowning Pool. Many believe the figures are the ghosts of seven drowned women.
Related article
Ralph Bach has been in IT for a while and has blogged from the Bach Seat about IT, careers, and anything else that has caught my attention since 2005. You can follow me on Facebook or Mastodon. Email the Bach Seat here.
Deepfakes Threaten Democracy: Trump Most Faked
With the 2024 U.S. presidential election looming, a new report from Kapwing examines deepfakes of public figures. Deepfakes are artificial intelligence (AI)-generated media that use machine learning to create or manipulate video or audio so it looks or sounds like someone said or did something they never actually did. What Kapwing found is alarming for democracy: Donald Trump and his fellow traveler Elon Musk are the most frequently deepfaked public figures. The Kapwing study tracked deepfake video requests using text-to-video AI tools and found that 64% of the deepfaked videos were of politicians and business leaders.
The most deepfakes
The Kapwing video content platform analyzed deepfaked politicians, and Donald Trump topped the list with 12,384 deepfake videos. The Republican candidate was followed closely by Elon Musk, the CEO of Tesla and X (formerly Twitter), with over 9,500 deepfakes. Current U.S. President Joe Biden ranked third with 7,596 deepfakes.
The prominence of Trump as a deepfake target underscores the growing risk this technology poses to democracy. Attackers can weaponize deepfaked politicians to spread misinformation and to influence or deceive voters. Eric Lu, the co-founder of Kapwing, says this weaponization is already occurring: "The findings of our study clearly show that video deepfakes have already gone mainstream…"
Social media’s role
Social media platforms are often the primary channels for deepfakes, boosting their popularity. Kapwing’s study urges platforms to take responsibility for disseminating deepfaked media. Lu, who conducted the study, blames the social media companies, saying, “Social media platforms like YouTube, Instagram, Facebook, and X have an important responsibility to prevent fake news or financial scams early on before the posts go viral.”
When deepfakes attack
Deepfake attacks have already occurred. Here are some prominent examples. First, in September 2024, Senator Ben Cardin, chair of the United States Senate Foreign Relations Committee, was the victim of a sophisticated deepfake impersonation during a Zoom call. The impersonator posed as Dmytro Kuleba, Ukraine's former Foreign Affairs Minister, and attempted to elicit politically charged responses regarding the upcoming U.S. presidential election.
Then, in January 2024, voters in New Hampshire received a deepfake robocall purporting to be from President Joe Biden. The New Hampshire attorney general’s office released a statement debunking the hoax. The Feds later traced the calls to a political consultant.
Another incident took place in November 2023. A deepfake audio of London Mayor Sadiq Khan’s voice making remarks critical of Armistice Day, which marks the end of World War One, was leaked. Finally, a video emerged in April 2018 of former U.S. President Barack Obama where the so-called ‘Obama’ utters uncharacteristic profanities.
The deepfakes regulatory challenge
Efforts to regulate deepfakes face hurdles. For instance, in October 2024, a federal judge blocked AB 2839, a California law allowing individuals to sue over election-related deepfakes, on First Amendment grounds.
Another attempt at regulating deepfakes came in April 2024. The Federal Communications Commission outlawed robocalls that contained voices generated by artificial intelligence. This decision conveys that exploiting technology to scam people and mislead voters will not be tolerated.
The California case highlights the difficulty of crafting effective regulations that address the threats posed by deepfake technology without infringing on free speech. Given the increasing sophistication of generative AI, tech platforms and regulators must balance innovation and security.
How to stop deepfakes
Lu proposes several steps to combat deepfakes. First, he calls for watermarked AI-generated content. This would involve integrating built-in encrypted timestamps on all recording devices to create a watermark at the moment of capture. The encrypted watermarks can be based on the highly secure Public Key Infrastructure (PKI) to distinguish authentic content from deepfakes. Next, the CEO suggests that social media platforms add clear labels on deepfake videos. He also laments that a comprehensive solution remains elusive.
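Lu's watermark idea is easiest to picture as a digital signature made at capture time. The sketch below uses the Python cryptography package to sign a hash of the content plus a timestamp with a device's private key, so anyone holding the matching public key can later confirm the clip is unaltered. The key handling, file contents, and function names are illustrative assumptions, not Kapwing's actual design.

```python
# Sketch of a capture-time "watermark": sign hash(content + timestamp) with a
# device private key so the clip can later be verified against the public key.
# This illustrates the PKI idea only; it is not Kapwing's actual scheme.
import hashlib, time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

def sign_capture(content: bytes, private_key) -> tuple[bytes, int]:
    timestamp = int(time.time())
    digest = hashlib.sha256(content + str(timestamp).encode()).digest()
    signature = private_key.sign(digest, PSS, hashes.SHA256())
    return signature, timestamp

def verify_capture(content: bytes, signature: bytes, timestamp: int, public_key) -> bool:
    digest = hashlib.sha256(content + str(timestamp).encode()).digest()
    try:
        public_key.verify(signature, digest, PSS, hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Demo with a throwaway key pair standing in for a device certificate.
device_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
clip = b"...raw video bytes..."
sig, ts = sign_capture(clip, device_key)
print(verify_capture(clip, sig, ts, device_key.public_key()))            # True
print(verify_capture(clip + b"edit", sig, ts, device_key.public_key()))  # False: content altered
```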
To spot deepfakes, Lu says: “My top three tips are looking for a blurry mouth area or inconsistent movement of the teeth, watching out for unnatural blinking or lack of blinking, and listening for monotone voices and unnatural breathing patterns.”
rb-
The biggest problem with deepfakes is the software. The perverse thing is that candidates can now deceive voters by claiming that actual events are AI-manufactured deepfakes, discrediting real facts.
The Kapwing report paints a concerning picture of deepfakes targeting politicians, particularly Donald Trump. These manipulated videos and audio pose a significant risk to democracy by spreading misinformation and swaying voters. While legal efforts to curb deepfakes face free-speech challenges, there's still hope.
The fight against deepfakes requires a multi-pronged approach. It's a race against continuously evolving AI, but by combining technological solutions, responsible social media practices, and public awareness, we can safeguard democracy from the manipulative power of deepfakes. After all, a well-informed public is the first line of defense against misinformation.
Related article
Ralph Bach has been in IT for a while and has blogged from the Bach Seat about IT, careers, and anything else that has caught my attention since 2005. You can follow me on Facebook or Mastodon. Email the Bach Seat here.
Data Breach Hits Internet Archive Users
Updated 10/21/2024: The Verge reports that the Internet Archive remains under the attackers' control. Despite being back online in read-only mode, the attackers appear to control the IA help desk; according to reports, they hold a Zendesk token and can intercept support tickets.
—
Updated 10/16/2024: TechRadar reports that the attack used two vectors: TCP reset floods and HTTPS application-layer attacks. A TCP reset flood bombards a victim with vast numbers of Transmission Control Protocol (TCP) reset packets, which trick a computer into terminating its connections with other machines on its network. An HTTPS application-layer attack typically aims to overwhelm servers by targeting the application layer, disrupting the normal traffic flow and rendering regular services unavailable.
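For a rough sense of what a reset flood looks like on the wire, here is a minimal monitoring sketch using Scapy that counts inbound TCP RST packets per source address. The interface name and sampling window are arbitrary assumptions, and real DDoS detection and mitigation happen in dedicated network gear, not in a script like this.

```python
# Sketch: count inbound TCP RST packets per source to spot an unusual spike.
# Interface name and window are arbitrary; real DDoS mitigation happens
# upstream in dedicated network equipment.
from collections import Counter
from scapy.all import sniff, IP, TCP  # requires scapy and root privileges

RST_FLAG = 0x04
rst_counts: Counter = Counter()

def tally_resets(pkt) -> None:
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[TCP].flags & RST_FLAG:
        rst_counts[pkt[IP].src] += 1

# Sample traffic for 10 seconds, then report the noisiest senders of RSTs.
sniff(filter="tcp", prn=tally_resets, store=False, timeout=10, iface="eth0")
for src, count in rst_counts.most_common(5):
    print(f"{src}: {count} RST packets in 10s")
```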
—
The non-profit Internet Archive has been offline since Wednesday (10/09/2024). Founded in 1996, the Internet Archive digital library provides "universal access to all knowledge." Through the Wayback Machine, it preserves billions of webpages, texts, audio recordings, videos, and software applications.
Internet Archive founder Brewster Kahle posted on X (formerly Twitter) that the site was under a DDoS attack.
Later that day, the attack evolved, and the site started displaying a pop-up notification from the hackers. After closing the message, the site loaded normally, though very slowly. The pop-up said:
“Have you ever felt like the Internet Archive runs on sticks and is constantly on the verge of suffering a catastrophic security breach? It just happened. See 31 million of you on HIBP!”
HIBP refers to Have I Been Pwned?, a website where people can check whether their information has been exposed in cyberattacks.
Finally, the pop-up was gone, along with the rest of the site, leaving only a placeholder message saying:
“Internet Archive services are temporarily offline.”
Stolen Internet Archive data
On September 28, 2024, attackers stole the site’s user authentication database with 31 million unique records. Bleeping Computer confirmed that Have I Been Pwned had received an “ia_users.sql” database file containing authentication information for registered members, including their email addresses, screen names, password change timestamps, Bcrypt-hashed passwords, and other internal data.
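For readers curious what "Bcrypt-hashed" means in practice, here is a minimal sketch using the Python bcrypt package; the password is a placeholder. Each bcrypt hash embeds its own random salt and work factor, which is why identical passwords produce different stored values and why cracking stolen hashes is slow and expensive.

```python
# Sketch: bcrypt stores a random salt and a work factor inside each hash,
# so identical passwords yield different hashes and cracking stays slow.
import bcrypt

password = b"hunter2"  # placeholder password for illustration

hash_one = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))
hash_two = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

print(hash_one != hash_two)                      # True: different salts, different hashes
print(bcrypt.checkpw(password, hash_one))        # True: verification still works
print(bcrypt.checkpw(b"wrong-guess", hash_one))  # False
```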
Who is responsible
The hacktivist group SN_BlackMeta, which emerged in November 2023, claimed responsibility for the DDoS attack. Cybersecurity firm Radware connected SN_BlackMeta to a pro-Palestinian hacktivist movement that utilizes DDoS-for-hire services like InfraShutdown. SN_BlackMeta has launched other cyberattacks, including a record-breaking DDoS attack against a Middle Eastern financial institution.
It’s unclear if they are involved in the Internet Archive data breach. The group said that it carried out the DDoS attack because the United States supports Israel and that the Internet Archive “belongs to the USA.”
Many social media users quickly pointed out that the Internet Archive is an independent non-profit organization not affiliated with the U.S. government.
Internet Archive Back online – sorta
As of 10/14/2024, the Internet Archive is back online in a limited, read-only mode.
rb-
Finally, what do you need to do if you have an account at the Internet Archive?
A compromised password is always a concern in any breach. In this case, however, the passwords were salted and hashed with bcrypt, making them difficult to crack through reverse engineering or brute force. Still, once the Internet Archive fully returns, you should change your password to be safe.
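If you want to check whether a password you use has already turned up in breach data, here is a small sketch against Have I Been Pwned's Pwned Passwords range API. It relies on the public k-anonymity endpoint, so only the first five characters of the password's SHA-1 hash ever leave your machine; confirm the endpoint details against the HIBP documentation before relying on it. (Checking whether an email address appeared in the Internet Archive breach is a separate HIBP lookup that requires an API key.)

```python
# Sketch: check a password against Have I Been Pwned's Pwned Passwords API
# using k-anonymity -- only the first 5 hex chars of the SHA-1 hash are sent.
import hashlib
import requests

def pwned_count(password: str) -> int:
    """Return how many times this password appears in known breach corpora."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    hits = pwned_count("correct horse battery staple")
    print("Change this password." if hits else "Not found in known breaches.")
```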
Related article
Ralph Bach has been in IT for a while and has blogged from the Bach Seat about IT, careers, and anything else that has caught my attention since 2005. You can follow me on Facebook or Mastodon. Email the Bach Seat here.




