Tag Archive for AI

Can You Identify AI Images? Find Out Now!

Artificial Intelligence (AI) is the tech du jour. A significant 77% of consumers have already interacted with AI platforms in their daily lives. Yet when it comes to distinguishing machine-generated images from real ones, only 40% of people succeed. Some images are straightforward, but prepare to be amazed by how realistic AI-generated images can be, or by how peculiar real-life photos can appear. To test your AI acumen, Microsoft has designed a quiz. Can you accurately tell AI-generated images from real ones?

Real or Not quiz

The Real or Not quiz features fifteen distinct photos curated by Brad Smith, Microsoft’s Vice Chair and President. Your task is to determine whether each image is machine-generated or a genuine photograph. The quiz dynamically selects different images for each attempt, so you can keep testing your AI senses.

My repeated attempts at the quiz yielded no straightforward answers. The bot’s ability to produce convincing images is impressive. My highest score was 11 out of 15, a commendable 73%. The results also provide context by comparing your score to those of other Real or Not participants. You can also record your score at the bottom of this post to see how you compare to other Bach Seat readers.

Clues to AI

When examining images, watch for subtle clues that reveal whether they are AI-generated or authentic. While AI excels at creating convincing overall scenes, scrutinizing the finer details often reveals peculiarities.

Pay attention to:

  • Doors merging.
  • Ladders that lead to nowhere.
  • Heavy machinery that is oddly placed and appears pristine.

Consider the eyes: Are they natural or flat? As for hands, AI still struggles. While improvements have been made, odd-looking fingers persist in AI images. Conversely, complex hand gestures or positions often indicate real photos.

rb-

These AI-generated images pose a significant threat in political and cultural contexts. Any user can fabricate compromising images of public figures, which underscores the importance of vigilant scrutiny. Those who mindlessly doomscroll may miss the subtle clues that reveal the true nature of the images they encounter.

Remember, as AI technology continues to evolve, our ability to discern between real and AI-generated content becomes even more critical. Stay curious, stay informed, and always question what you see online. 

Take the Microsoft Artificial Intelligence quiz and post your results here.


 


Ralph Bach has been in IT for a while and has blogged from the Bach Seat about IT, careers, and anything else that has caught his attention since 2005. You can follow him on Facebook or Mastodon. Email the Bach Seat here.

What Could You Do with the NVIDIA Record Loss This Week?

Artificial intelligence bellwether NVIDIA (NVDA) announced its fiscal 2025 second-quarter results on Tuesday. America’s second-largest public company ended the quarter with $30.04 billion in revenue. However, shares dropped 9.5%, wiping out $278.9 billion of the company’s market value.


Analysts attribute NVIDIA’s stock decline to its Q3 revenue guidance of $32.5 billion, below the Wall Street ‘whisper number’ of $33 billion to $34 billion.

NVIDIA’s $278.9 billion loss is the largest single-day loss by a U.S. company, surpassing Meta’s $237 billion loss in February 2022.

Unexpected NVIDIA Q3 guidance

The unexpected Q3 guidance miss triggered a sell-off, likely accelerated by automated trading systems (many of which run on NVIDIA’s own AI chips), producing the $278.9 billion decline.

rb-

Sure, I could write about the AI hype cycle, a rickety economy, a DOJ investigation, or the 50/50 chance of a convicted felon becoming President. But it seemed more fun to put this loss into perspective:

  • A stack of $100 bills totaling $1 million would be about 43 inches tall (just over 3.5 feet); the arithmetic is sketched in the code after this list.
  • Stacking $278.9 billion in $100 bills would reach approximately 189 miles, roughly the distance from New York City to Washington, D.C.
  • It would also fill the Empire State Building 25 times over.
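For anyone who wants to check the arithmetic, here is a minimal Python sketch. It assumes the commonly cited thickness of about 0.0043 inches for a U.S. bill; the constant names and the helper function are mine, purely for illustration.

```python
# Back-of-the-envelope check of the stack-of-bills figures above.
# Assumes a U.S. $100 bill is about 0.0043 inches thick; the constant
# names and stack_height_miles() are illustrative, not from the post.

BILL_THICKNESS_IN = 0.0043   # inches per bill (approximate)
BILL_VALUE = 100             # dollars per bill
INCHES_PER_MILE = 63_360

def stack_height_miles(dollars: float) -> float:
    """Height in miles of a stack of $100 bills worth `dollars`."""
    bills = dollars / BILL_VALUE
    return bills * BILL_THICKNESS_IN / INCHES_PER_MILE

one_million = 1_000_000
nvda_loss = 278.9e9          # NVIDIA's one-day market-cap drop

print(f"$1M stack: {one_million / BILL_VALUE * BILL_THICKNESS_IN:.0f} inches")
print(f"$278.9B stack: {stack_height_miles(nvda_loss):.0f} miles")
# Prints roughly 43 inches and 189 miles, matching the list above.
```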

Or I could:

  • Buy the Pebble Beach golf course for $3.2 billion,
  • Buy all 32 teams in the National Hockey League; the entire NHL can be had for $41.9 billion,
  • Buy all 32 National Football League franchises; the entire NFL is worth $162 billion,
  • End homelessness in the U.S.,
  • And still have cash on hand.

 


Ralph Bach has been in IT for a while and has blogged from the Bach Seat about IT, careers, and anything else that has caught his attention since 2005. You can follow him on Facebook or Mastodon. Email the Bach Seat here.

VR You Can Taste

During the COVID-19 lockdowns and social distancing, every generation has turned to its devices to inform and distract itself more than ever before. Wouldn’t it be great if our devices could engage all of our senses? Well, that time is coming. Homei Miyashita, a researcher at Meiji University in Japan, has developed the Norimaki Synthesizer, which can make the tongue sense taste without eating anything.

It was once thought that the tongue had different regions with concentrations of specific taste buds for each taste. Now we know that there are five basic tastes: sweet, sour (or acidic), salty, bitter, and umami. Bitter flavors are sharp, like coffee, unsweetened chocolate, or the peel of an orange or lemon. Umami, derived from the Japanese word for a pleasant savory taste, was added to the group of basic tastes in 1990.

Taste buds have a chemical reaction to food

Taste buds have tiny openings that take in very small amounts of whatever we’re eating. Special “receptor cells” in the taste buds can then have a chemical reaction to the food, creating one of five basic tastes. The way these basic tastes combine creates the overall flavor of the food we’re eating.

SVCOnline explains that a better understanding of how the tongue works is crucial to the new device. To trick your tongue, the device uses electrolytes embedded in five gels that trigger the five different tastes when they make contact with the tongue. Gizmodo reports that the color-coded gels are made from agar and formed into long tubes, one for each taste.

The taste device

When the device is pressed against the tongue, the user experiences all five tastes at the same time. But by using a small box with sliding controls, the intensity of each taste can be lowered, creating different flavors. Sadly, it can’t produce the effect of spicy foods.

To create the different flavors, the device is wrapped in copper foil so that, when it is held in the hand and touched to the surface of the tongue, it forms an electrical circuit through the human body, enabling a technique known as electrophoresis.

Electrophoresis is a process that moves molecules in a gel when an electrical current is applied. In this case, the current causes the ingredients in the agar tubes to move away from the tongue end of the tube, reducing the ability to taste them. It’s a subtractive process that selectively removes tastes to create a specific flavor profile, from gummy bears to sushi.
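To picture how this subtractive approach differs from an additive display like an RGB screen, here is a toy Python model. It is my own illustration, not Miyashita’s actual control code: every channel starts at full strength, and each slider’s current can only suppress its taste.

```python
# Toy model of a subtractive taste display. The gels present all five
# basic tastes at full strength; applying current to a channel only
# *suppresses* that taste, since electrophoresis pulls its electrolyte
# away from the tongue. Names and values here are illustrative only.

BASIC_TASTES = ("sweet", "sour", "salty", "bitter", "umami")

def mix_flavor(suppression: dict) -> dict:
    """Return perceived intensities (0..1) for each basic taste, given
    per-channel suppression levels (0 = no current, 1 = fully removed)."""
    return {t: round(1.0 - suppression.get(t, 0.0), 2) for t in BASIC_TASTES}

# Example: suppress everything except sweetness and a hint of sourness,
# a crude approximation of a "gummy bear" profile.
gummy_bear = mix_flavor({"sour": 0.6, "salty": 1.0, "bitter": 1.0, "umami": 1.0})
print(gummy_bear)
# {'sweet': 1.0, 'sour': 0.4, 'salty': 0.0, 'bitter': 0.0, 'umami': 0.0}
```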

The device’s creator, Homei Miyashita, was inspired to create his “taste display” by experiments showing that our eyes can be tricked into seeing something that technically doesn’t exist. He wondered: if the red, green, and blue pixels that make up the screens on your smartphone, PC, and TV can fool the eye, could he create something that fools the tongue? Mr. Miyashita used a similar “pixel” approach to trick the tongue.

In his abstract, Professor Miyashita acknowledged the 2011 research of Hiromi Nakamura, who achieved “augmented gustation” by sending electrical charges through chopsticks, forks, and straws to create tastes humans could not perceive solely with their tongues.

Smell-O-Vision

Other inventors have tried to bring more of the senses into media. In 1959, Charles Weiss, a public relations executive, created AromaRama, which distributed scents of horses, grass, exploding firecrackers, incense, and burning torches through the theater’s air-conditioning system during the first showing of “Behind the Great Wall.” But the NYT panned the movie: “Check off the novel experience as… a stunt. The artistic benefit of it is here demonstrated to be nil.”

The next year, inventor Hans Laube introduced an improved Smell-O-Vision with the movie “Scent of Mystery,” which was augmented by smells such as freshly baked bread, wine, an ocean breeze, or a skunk, delivered through beneath-the-seat tubes. Certain smells offered clues to imminent activity on the screen. But viewers complained of uneven or delayed distribution of the smells and the distracting noise of fellow viewers struggling to sniff each scent. For fans and critics, the movie was a stinker. Famed comedian Henny Youngman quipped, “I didn’t understand the picture. I had a cold.”

rb-

It’s called a taste display because it was inspired by the way RGB pixels on a screen combine to form an image of something that isn’t there. These electronic “taste pixels” can be manipulated to simulate any taste. Why? No idea. But there will be an app for that too!

Stay safe out there!


Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.

No More Facial Recognition From IBM

Updated 06/19/2020 – Redmond is reporting that the ACLU has uncovered evidence (PDF) that Microsoft was pursuing sales of its facial recognition technology after its vow to stop selling the software. The ACLU says Microsoft continued to pursue sales to the U.S. Drug Enforcement Administration (DEA) six days after the announcement. Microsoft president Brad Smith claimed the firm would stop selling facial recognition tech to U.S. police agencies until there is a national law in place that’s “grounded in human rights.”

The article describes MSFT Smith’s “stand” last week as “a bit hollow or misleadingly narrow” and an example of “opaque transparency.”

Updated 06/12/2020 – CNN is reporting that Microsoft has fallen in line with IBM and Amazon. It has announced it will not sell facial recognition technology to police departments in the United States, at least until there is a federal law to regulate the technology.

Following IBM’s stand, Amazon has announced it will stop providing its facial recognition technology to police forces for one year.  TechCrunch makes the point that the Amazon announcement did not say if the moratorium would apply to the federal government. Amazon also did not say in the statement what action it would take after the yearlong moratorium expires.

Both firms are calling for national regulation of the tech, as I predicted below.

IBM has taken a step in the right direction in the fight against structural racism. IBM CEO Arvind Krishna sent a letter to the U.S. Congress citing concerns that artificial intelligence (AI) facial recognition software could be used for mass surveillance and racial profiling. As a result, IBM will no longer sell general-purpose facial recognition or analysis software.

IBM facial recognition changes

The company is not abandoning facial recognition entirely. Reuters cites an IBM source who says IBM will “no longer market, sell or update the products but will support clients as needed.” As Engadget points out, the move comes in the midst of protests over police brutality and discrimination capped by the apparent murder of George Floyd by Minneapolis police officers.

The use of AI and facial recognition has a history of privacy and bias problems. In 2019, Pew Research reported that 50% of U.S. adults said they did not trust tech companies to use facial recognition responsibly, and 27% of the same group did not trust law enforcement agencies to use it responsibly. There are good reasons for the distrust. Many reports have found that facial recognition systems can be biased, with systemic biases against non-whites and women. This is particularly true when the training data includes relatively few people from those groups.

The Verge documents some of the de facto biases in facial recognition. In 2018, AI researchers Joy Buolamwini and Timnit Gebru’s Gender Shades project was the first to reveal the extent to which many commercial facial recognition systems (including IBM’s) were biased. This work led to mainstream criticism of these algorithms and ongoing attempts to address the bias.

Clearview AI Inc.’s facial recognition software identifies people by comparing their faces with 3 billion images, many scraped from social media sites. Clearview took the images from Facebook, YouTube, and Venmo without notifying the people pictured. The facial recognition tool is widely used by private sector companies and law enforcement agencies. Clearview has since been issued numerous cease and desist orders and is at the center of a number of privacy lawsuits. Facebook was also ordered in January 2020 to pay $550 million to settle a 2015 class-action lawsuit over its unlawful use of facial recognition technology.

The Verge points out that IBM is not without a share of the blame. In January 2019, IBM was found to be sharing a training data set of nearly one million photos taken from Flickr without the consent of the subjects. IBM told The Verge in a statement at the time that the data set would only be accessed by verified researchers and only included images that were publicly available. The company also said that individuals could opt out of the data set.

A December 2019 NIST study found:

empirical evidence for the existence of a wide range of accuracy across demographic differences in the majority of the current face recognition algorithms that were evaluated.

 

Amazon’s facial recognition software 

Notably, NIST’s study did not include Amazon’s facial recognition software, Rekognition. Rekognition has also been criticized for its accuracy. In 2018, the ACLU found that Rekognition incorrectly matched 28 members of Congress to faces picked from 25,000 mugshots.

Despite Amazon’s system producing what the ACLU called a disproportionate number of false matches for members of Congress of color, Amazon posted a statement expressing concern over the “inequitable and brutal treatment of Black people in our country.” But the richest man in the world, Jeff Bezos, and his company are part of the problem. Amazon is profiting from the racial profiling of Black people by police.

Amazon has built a nationwide surveillance network of our homes and communities using Amazon Ring cameras and its Neighbors app. The company collects the images and then hands the data over to police.

What Amazon does with the data:

rb-

Mr. Krishna should be applauded for his public stand. But call me cynical: this is also about business. Morgan Stanley predicts that AI and automation will be a one trillion dollar industry by 2050. Change is coming, and big tech (IBM, MSFT, GOOG, FB) is trying to get in front of it. The titans are pushing for reform, not abolition, for two reasons.

First, they want to use new regulations as a barrier to entry into this market. They want to keep upstarts like Clearview AI, along with the 45+ other small to multinational firms that may have new ideas, out of the $1T market.

Second, big tech knows it can buy the politicians in DC more cheaply than it can fight off regulations in 50 different states. Big business has done this time and again: they sit in front of a congressional hearing, say mea culpa, and maybe Congress passes some lame regulation that the lobbyists wrote. Nothing will change, because there is too much money on the table to do the right thing and stop the structural racism that led to George Floyd’s death.

Stay safe out there!


Ralph Bach has been in IT long enough to know better and has blogged from his Bach Seat about IT, careers, and anything else that catches his attention since 2005. You can follow him on LinkedIn, Facebook, and Twitter. Email the Bach Seat here.