YouTube captions explicit kids' videos


ALMOST 400,000 PEOPLE are currently subscribed to the YouTube channel Rob the Robot – Learning Videos For Children. In one 2020 video, an animated humanoid and his companions travel to a planet with a stadium theme and attempt feats that would make Hercules proud. Their exploits are appropriate for primary-school children, but young viewers who turn on YouTube's closed captioning may find their vocabularies expanded in unintended ways.


At one point, the computers that power YouTube mishear the word “brave” and caption a character as wanting to be “powerful and rape like Heracles.”


A recent analysis of the algorithmic captions on videos aimed at children on YouTube shows how the wording can occasionally veer into distinctly adult territory. In a sample of more than 7,000 videos from 24 of the most popular kids’ channels, 40 percent had captions containing words from a list of 1,300 “taboo” terms, drawn in part from a study on children’s use of curse words.

In almost one percent of the videos, the captions contained words from a list of 16 terms deemed “highly inappropriate.” The words “bitch,” “bastard,” and “penis” were the ones YouTube’s algorithms were most likely to insert.
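The kind of check the researchers describe, scanning каption text against a taboo-word list, can be approximated in a few lines. This is a hypothetical sketch: the word lists and captions below are illustrative stand-ins, not the study's actual 1,300- and 16-word lists or its data.

```python
# Sketch of a taboo-word check on caption text. The lists here are
# small illustrative placeholders for the study's actual word lists.

TABOO_WORDS = {"bitch", "bastard", "crap", "porn"}      # stand-in for the 1,300-term list
HIGHLY_INAPPROPRIATE = {"bitch", "bastard", "penis"}    # stand-in for the 16-term list

def flag_caption(caption: str, wordlist: set) -> list:
    """Return the flagged words found in a caption, if any."""
    tokens = [t.strip(".,!?").lower() for t in caption.split()]
    return [t for t in tokens if t in wordlist]

captions = [
    "you should also buy porn",   # mistranscription of "corn"
    "grab your beach towel",
    "grab your bitch towel",      # mistranscription of "beach towel"
]

for c in captions:
    hits = flag_caption(c, TABOO_WORDS)
    if hits:
        print(c, "->", hits)
```

A real pipeline would also need to handle tokenization edge cases (contractions, punctuation inside words), which this sketch ignores.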

The issue appears in several videos on Ryan’s World, a popular kids’ channel with more than 30 million subscribers. In one video, the line “You should also buy corn” is captioned as “you should also buy porn.” In other videos, a “beach towel” becomes a “bitch towel,” “buster” becomes “bastard,” and “crab” becomes “crap.” The captions of a DIY video demonstrating how to make a monster-themed dollhouse mention a “bed for penis.”

“It’s astonishing and alarming,” says Ashique KhudaBukhsh, an assistant professor at Rochester Institute of Technology who studied the issue with collaborators Krithika Ramesh and Sumeet Kumar of the Indian School of Business in Hyderabad.

Automated captions are not available on YouTube Kids, the version of the service designed specifically for children. But many households use the standard version of YouTube, where children can encounter the errant captions.

In a 2020 Pew Research Center survey, 80 percent of parents of children aged 11 or younger said their child watched content on YouTube; more than 50 percent of those children did so daily.

KhudaBukhsh hopes the study will draw wide attention to a phenomenon that, he says, has received little notice from tech companies and researchers.

KhudaBukhsh calls this phenomenon “inappropriate content hallucination”: algorithms inserting unsuitable material that was not present in the original audio. Think of it as the inverse of the familiar complaint that smartphone autocomplete filters adult language to an irritating degree; here, the system adds it.

YouTube spokesperson Jessica Gibby says users younger than 13 should use YouTube Kids, where automated captions are not shown. On the standard version of the service, she says, the feature improves accessibility, and YouTube is continually working to improve automatic captions and reduce errors.

Pocket.watch, a children’s entertainment studio that publishes Ryan’s World content, said in a statement from spokesperson Alafair Hall that it is “in close and immediate contact with our platform partners such as YouTube who work on updating any incorrect video captions.” The operator of the Rob the Robot channel could not be reached for comment.

Inappropriate hallucinations are not unique to YouTube or to video captions. A WIRED reporter found that a transcript of a phone call processed by the company Trint rendered Negar, a woman’s name of Persian origin, as a variant of the N-word, even though the name sounds distinctly different to the human ear.

Trint CEO Jeffrey Kofman said the service has a profanity filter that automatically redacts “a very narrow list of words.” The precise spelling that appeared in WIRED’s transcript was not on that list, he said, but it will be added.

According to KhudaBukhsh, “the benefits of speech-to-text are clear; nonetheless, there are blind spots in these systems that potentially require checks and balances.”


These blind spots can seem startling to humans because we make sense of speech in part by understanding the broader context and meaning of a person’s words. Algorithms have gotten better at processing language, but they still lack the capacity for full comprehension, and that limitation has caused problems for other companies that rely on machines to process text. One startup had to rework its adventure game after discovering that it sometimes described sexual scenarios involving minors.

Machine-learning transcription systems like YouTube’s “learn” a task by processing large amounts of training data, in this case audio files paired with matching transcripts.

KhudaBukhsh says YouTube’s system likely inserts profanities in part because its training data contains a disproportionately high number of adult voices and relatively few children’s voices. When the researchers manually checked samples of inappropriate terms in captions, the words often appeared alongside the speech of children or of people who did not seem to be native English speakers.

Previous research has shown that transcription services from Google and other major technology companies make more errors for speakers who are not white, and for speakers of certain regional dialects of American English, than for white speakers of standard American English.

A basic blocklist of words that should never appear in captions on children’s videos would address a significant number of the worst examples uncovered in the new research, says linguist Rachael Tatman, a coauthor of one of those earlier studies. That apparently none is in place, she says, is an engineering oversight.

Yet a blocklist would itself be an imperfect solution, Tatman notes: words that are innocuous on their own can combine into inappropriate phrases.
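Tatman's point can be illustrated with a small sketch: a word-level filter passes a caption whose individual words are all harmless, while a phrase-level check comparing n-grams against blocked multi-word phrases catches the combination. The word and phrase lists here are neutral placeholders, not any real blocklist.

```python
# Word-level vs. phrase-level blocklist check. Lists are placeholders.

WORD_BLOCKLIST = {"badword"}
PHRASE_BLOCKLIST = {("harmless", "alone", "together")}  # placeholder phrase

def ngrams(tokens, n):
    """All contiguous n-word windows of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def is_blocked(caption: str) -> bool:
    tokens = caption.lower().split()
    # Word-level pass: catches single blocked words only.
    if any(t in WORD_BLOCKLIST for t in tokens):
        return True
    # Phrase-level pass: catches blocked combinations of harmless words.
    lengths = {len(p) for p in PHRASE_BLOCKLIST}
    return any(g in PHRASE_BLOCKLIST for n in lengths for g in ngrams(tokens, n))

# Each word passes the word-level check; the phrase as a whole is caught:
print(is_blocked("these words are harmless alone together"))  # True
print(is_blocked("a perfectly normal caption"))               # False
```

Even this remains brittle: reorderings, intervening words, and context-dependent meanings all slip past exact n-gram matching, which is part of why Tatman calls any blocklist imperfect.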

A more sophisticated approach would be to tune the captioning system to avoid adult language when working on children’s content, but Tatman warns that this, too, would not be flawless. Machine-learning software that works with language can be statistically steered in particular directions, but building it to respect context that appears obvious to humans is difficult. Language models, Tatman says, are not precision tools.

KhudaBukhsh and his colleagues developed and tested systems for correcting inappropriate words in transcripts, but even the best of them substituted the correct word in YouTube transcripts less than a third of the time. They will present their findings this month at the annual conference of the Association for the Advancement of Artificial Intelligence.

In addition, they have made the data from their study publicly available to assist others in investigating the issue.

The researchers also ran the audio from children’s YouTube videos through Amazon’s automated transcription service. It, too, made occasional errors, strengthening the case that the problem is not confined to YouTube.

Amazon spokesperson Nina Lindsey declined to comment but provided links to documentation advising developers on how to correct or filter unwanted words. The study’s results suggest such options might be wise for anyone transcribing children’s content: in a transcript of a video about a toy, “fluffy” became the F-word, and a host’s invitation to send in “craft ideas” became “crap ideas.”
