
Measuring AI’s Transparency

Researchers at Stanford University have conducted a comprehensive assessment of the transparency of leading AI foundation models, including those developed by OpenAI and Google. Their findings are concerning: the models they evaluated fall short when it comes to transparency.

In their report, the researchers stress the urgent need for these companies to disclose critical information about how their models are trained, specifically the data sources used and the extent of human labor involved in the training phase.

What makes this call for transparency all the more pressing is a trend the researchers observed over the past three years: while AI capability has soared to new heights, transparency has steadily declined. That decline is cause for alarm because it mirrors what happened in other domains, such as social media, where reduced transparency led to undesirable consequences. The researchers see the same pattern setting the stage for problematic outcomes in AI.

About the Podcast

SHIFT is a podcast examining our ever-changing trust in automation and emerging technologies, hosted by Jennifer Strong. Launched in 2023, the show takes a closer look at the far-reaching impact of automation on our daily lives, seeking to understand, from policy to process, how lives are changing alongside rapid breakthroughs in frontier technologies and artificial intelligence.

SHIFT’s award-winning team of journalists brings more than two decades of experience in technology and media. The most pressing issues in tech, such as deepfakes, bias, explainability, and privacy, all revolve around trust, the common thread in their investigative work. Their reporting untangles how trust is built in an era dominated by automation.

Delivered weekly, the show blends “show me” journalism with a diverse repertoire of formats, including fireside chats, oral histories, narrative storytelling, and documentary-style investigations. It offers intimate conversations with innovators, scientists, policymakers, and entrepreneurs, often recorded in the very laboratories and research hubs where these technologies are being developed.
