Discover and read the best of Twitter Threads about #CSCW2022

Most recent (5)

Are women of color political candidates more likely to be subject to disinformation & online abuse?🧑‍⚖️❌🏛️

🚨What are its impacts?

My lab teamed up with the Center for Democracy & Technology to conduct a large-scale data analysis of this topic 👩‍🔬💻

👉See: osf.io/bwta3/
My research lab has also been working on designing intelligent tools that can combat disinformation within the communities of people of color.

📚🧪Read our latest CSCW paper with @saviaga + @BunsenFeng on this topic:

👉dl.acm.org/doi/pdf/10.114…
#CSCW2022 #CSCW
We conducted a similar study on disinformation and women political candidates in Mexico.

We found cases of offline disinformation and strategic silences, and that men overall received more attention than women.

See the work we did with UNAM+ @PitPolicy @pintomar43 ⬇️
policylab.tech/_files/ugd/0e0…
Read 3 tweets
Do you think that the ethical reputation of a company impacts whether graduating computer science students are willing to take a job there?

This was the topic of @CUBoulder CS alum Ella Sarder's honors thesis, now published as a poster at #CSCW2022. Here's what we found... 🧵
This was an exploratory study (and we're working on a larger-scale survey as a follow-up!); she interviewed 12 graduating students about the factors they consider in the job search, how they define a "good" or "bad" company to work for, and how ethics education impacts their choices.
Some participants expressed a sense of powerlessness regarding their ability to change unethical practices, which might lead them to decline a job at such a company, or to conclude that it's the same everywhere, so they might as well act in their own self-interest.
Read 8 tweets
Today, technical experts hold the tools to conduct system-scale algorithm audits, so they largely decide what algorithmic harms are surfaced. Our #cscw2022 paper asks: how could *everyday users* explore where a system disagrees with their perspectives? hci.st/end-user-audit 🧵
(2/6) User-led audits at this scale are challenging: just to get started, they require substantial user effort to label and make sense of thousands of system outputs. Could users label just 20 examples and jump to the valuable part of providing their unique perspectives?
(3/6) Our IndieLabel auditing tool allows users to do exactly this. Leveraging varied annotator perspectives *already present* in ML datasets, we use collaborative filtering to help users go from 20 labels to 100k so they can explore where they diverge from the system's outputs.
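
To make the collaborative-filtering step concrete, here is a minimal, hypothetical sketch of label extrapolation via matrix factorization: factor an existing annotator-by-item label matrix, fit a new user's latent vector from their ~20 labels, and predict the rest. The SVD-based factorization, toy data, and all names are illustrative assumptions, not IndieLabel's actual implementation.

```python
# Hypothetical sketch (not IndieLabel's code): extrapolate ~20 user labels
# to the full item set with a matrix-factorization collaborative filter.
import numpy as np

rng = np.random.default_rng(0)
n_annotators, n_items, k = 200, 1000, 8

# Toy stand-in for a multi-annotator dataset: annotators x items scores.
# (Real annotation matrices are sparse; missing entries would need an
# algorithm like alternating least squares rather than a plain SVD.)
A = rng.normal(size=(n_annotators, k))
B = rng.normal(size=(n_items, k))
R = A @ B.T + 0.1 * rng.normal(size=(n_annotators, n_items))

# Factor the label matrix; keep k latent dimensions per item.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
item_factors = Vt[:k].T * s[:k]               # shape: (n_items, k)

# A new auditor labels only 20 items.
labeled = rng.choice(n_items, size=20, replace=False)
taste = rng.normal(size=k)                    # the user's "true" perspective
user_labels = B[labeled] @ taste              # stand-in for their 20 labels

# Fit the user's latent vector from those 20 labels, then predict all items.
user_vec, *_ = np.linalg.lstsq(item_factors[labeled], user_labels, rcond=None)
predicted = item_factors @ user_vec           # ~100k predictions in practice

# Surface items where the user's predicted labels diverge most from the
# system's outputs -- the starting point for an end-user audit.
system_scores = R.mean(axis=0)                # placeholder for model outputs
audit_queue = np.argsort(-np.abs(predicted - system_scores))[:50]
```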
Read 6 tweets
Our new research estimates that *one in twenty* comments on Reddit is a violation of its norms: anti-social behavior that most subreddits try to moderate. But almost none of these comments are actually moderated.

🧵 on my upcoming #cscw2022 paper w/ @josephseering and @msbernst: arxiv.org/abs/2208.13094
First, what does this mean? It means that if you are scrolling through a post on Reddit, a single screenful of comments will likely include at least one that exemplifies bad behavior, such as a personal attack or bigotry, that most communities would choose not to see (at a 5% rate, a scroll past ~20 comments yields one violation in expectation). (2/13)
So let’s get into the details. What did we measure exactly? We measured the proportion of unmoderated comments in the 97 most popular subreddits that violate one of the platform norms that most subreddits try to moderate (e.g., personal attacks, bigotry). (3/13)
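
As a rough illustration of this kind of measurement (a hypothetical sketch, not the paper's pipeline), one could score unmoderated comments with per-norm classifiers and report the flagged proportion per subreddit. The keyword matcher below is a toy stand-in for real trained classifiers.

```python
# Hypothetical sketch (not the paper's pipeline): estimate the share of
# unmoderated comments that violate a platform norm, per subreddit.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Comment:
    subreddit: str
    body: str
    removed: bool  # True if moderators removed the comment

# Toy stand-in for trained per-norm classifiers (e.g., personal attacks,
# bigotry); a real measurement would use ML models, not keyword lists.
NORM_KEYWORDS = {
    "personal_attack": ("you idiot", "you moron"),
    "bigotry": ("<slur>",),  # placeholder
}

def violates_norm(body: str) -> bool:
    text = body.lower()
    return any(kw in text for kws in NORM_KEYWORDS.values() for kw in kws)

def violation_rates(comments: list[Comment]) -> dict[str, float]:
    seen, flagged = defaultdict(int), defaultdict(int)
    for c in comments:
        if c.removed:
            continue  # only unmoderated comments count toward the estimate
        seen[c.subreddit] += 1
        flagged[c.subreddit] += violates_norm(c.body)
    return {sub: flagged[sub] / seen[sub] for sub in seen}

sample = [
    Comment("r/news", "Interesting article, thanks!", removed=False),
    Comment("r/news", "you idiot, read the source", removed=False),
]
print(violation_rates(sample))  # {'r/news': 0.5}
```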
Read 13 tweets
So excited I can FINALLY share our new work on machine learning practitioners' data documentation perceptions, needs, challenges, and desiderata, which will appear in #CSCW2022!

arxiv.org/abs/2206.02923

Joint work w/ @AmyHeger, @lizbmarquis, @mihaela_v, and @hannawallach 1/n
Data is central to the development & evaluation of ML models. Using problematic or inappropriate datasets can lead to harms.

Data documentation frameworks like datasheets & data nutrition labels were proposed to encourage transparency and deliberate reflection on datasets. 2/n
But do these frameworks meet the needs of ML practitioners who create and consume datasets?

We conducted a series of semi-structured interviews with 14 ML practitioners and had them answer a list of questions borrowed from datasheets for datasets. 3/n
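
For readers unfamiliar with the format, here is an illustrative, machine-readable sketch of a datasheet-style record. The field names loosely paraphrase a few questions from Gebru et al.'s "Datasheets for Datasets"; this is neither an official schema nor the interview protocol used in the paper, and the example values are invented.

```python
# Illustrative sketch: a machine-readable datasheet-style record.
# Field names paraphrase a few "Datasheets for Datasets" questions;
# this is not an official schema.
from dataclasses import dataclass, field

@dataclass
class Datasheet:
    name: str
    motivation: str                    # For what purpose was it created?
    composition: str                   # What do the instances represent?
    collection_process: str            # How was the data acquired?
    preprocessing: str = "none"        # Cleaning/labeling done, if any
    recommended_uses: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)

sheet = Datasheet(
    name="toxicity-comments-v1",
    motivation="Train and evaluate comment-moderation models.",
    composition="English forum comments with crowdworker toxicity labels.",
    collection_process="Sampled from public forums; labeled by 3 annotators.",
    recommended_uses=["moderation research"],
    known_limitations=["annotator disagreement is unresolved"],
)
print(sheet.name, "->", sheet.known_limitations)
```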
Read 7 tweets
