Microsoft launches a new AI tool to detect text and images deemed "harmful content"
Microsoft is launching a new AI-powered moderation service that it says is designed to foster safer online environments and communities. Called Azure AI Content Safety, the new offering, available through the Azure AI product platform, offers a range of AI models trained to detect “inappropriate” content across images and text.
The models, which can understand text in English, Spanish, German, French, Japanese, Portuguese, Italian, and Chinese, assign a severity score to flagged content, indicating to moderators which content requires action.
"Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren't effectively taking into account context or able to work in multiple languages," a Microsoft spokesperson said via email. "New [AI] models are able to understand content and cultural context so much better. They are multilingual from the start … and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed."
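To make the severity-score workflow concrete, here is a minimal sketch of how a client application might turn per-category scores from a moderation service into actions. The category names, the 0–7 severity scale, and the thresholds below are illustrative assumptions, not the documented Azure AI Content Safety API.

```python
def triage(scores: dict[str, int], block_at: int = 4, review_at: int = 2) -> str:
    """Map per-category severity scores to a moderation action.

    Assumes a hypothetical 0-7 severity scale; thresholds are
    arbitrary examples, not values from Microsoft's service.
    """
    worst = max(scores.values(), default=0)
    if worst >= block_at:
        return "block"          # severe enough to remove automatically
    if worst >= review_at:
        return "human_review"   # borderline: route to a moderator
    return "allow"

# Example: one category scored high, so the content is blocked.
print(triage({"hate": 0, "violence": 5}))  # block
print(triage({"hate": 2, "violence": 1}))  # human_review
print(triage({"hate": 0, "violence": 0}))  # allow
```

The point of the two-threshold design is the one the spokesperson hints at: automated scores are best used to prioritize moderator attention, with only the clearest cases handled without a human.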
During a demo at Microsoft’s annual Build conference, Sarah Bird, Microsoft’s responsible AI lead, explained that Azure AI Content Safety is a productized version of the safety system powering Microsoft’s chatbot in Bing and Copilot, GitHub’s AI-powered code-generating service.
“We’re now launching it as a product that third-party customers can use,” Bird said in a statement. Presumably, the tech behind Azure AI Content Safety has improved since it first launched for Bing Chat in early February. Bing Chat went off the rails when it first rolled out in preview; our coverage found the chatbot spouting vaccine misinformation and writing a hateful screed from the perspective of Adolf Hitler. Other reporters got it to make threats and even shame them for admonishing it.
In another knock against Microsoft, the company just a few months ago laid off the ethics and society team within its larger AI organization. The move left Microsoft without a dedicated team to ensure its AI principles are closely tied to product design.