- What: A study on CoCs in OSS, with the expectation of developing resources afterwards from what we learn
- The ask: >30 minutes of your time in an interview. Sign up here.
A team of researchers, led by Dr. Shurui Zhou from the University of Toronto (UofT), is partnering with NumFOCUS on a project centered on understanding how Codes of Conduct (CoCs) are implemented in open-source communities.
If you’re interested in @shuiblue’s work, take a look at her Sustain Podcast episode on forking in open source here: Sustain Episode 53: What the Fork? Shurui Zhou on Forking in Open Source. And Leah Silen’s on NumFOCUS here: Sustain Episode 79: Leah Silen on how NumFocus helps make scientific code more sustainable.
They seek to interview open-source practitioners familiar with adapting and using CoCs to foster a welcoming environment, as well as those experienced in addressing CoC violations. The objective is to gain insight into current CoC usage, identify challenges, and suggest best practices.
The study has received approval from the Research Ethics Board (REB) at UofT and the detailed study description is provided below for your reference. If you are interested and available for an interview, we would really appreciate it if you could fill out this [sign-up form] before our interview to make the discussion more efficient.
Any suggestions on where we can share this would also be appreciated. I’m also seeking to help with this, particularly with the resource side.
One quick suggestion around your question: “To which gender identity do you most identify?”
You have a list of single-select options that includes transgender, male, female, etc., but these are not options from the same category. Since this is about Codes of Conduct, it’s really important to get identity right, so I suggest looking at these standards and best practices:
inclusion/data-metrics/surveys/en/gender-identity.md at master · mozilla/inclusion (github.com)
Thank you so much for the correction. This is really helpful!
I have updated the question and also referred to the best practices:
I already did a similar interview with Molly de Blanc who is currently at Northwestern doing research on CoC in open source. If they aren’t connected, these researchers might want to engage one another.
Anyway, I filled out the form. My perspectives on CoC are unlike everyone else’s (I hesitate to always claim that, but Molly said it herself without my prompting). More on that some day, not going to get into it here. But suffice to say: I gave a talk in 2019 on CoC and restorative-justice/transformative-justice which was oblique to the views of most others then, and my views have evolved a lot farther since. I was thinking about CoC and prioritizing it when pretty close to zero projects had them (back in 2012), and I think the common ones do not apply as much as they should but are then too punitive and top-down rather than restorative. Better to intervene sooner but to have the consequences be only learning and appreciation and not some black mark on community standing. I’m at least glad that there are mechanisms now for blocking the more egregious harms though.
This reference from the sign-up form:
Study shows that the usage of Code of Conduct (CoC) has positive effects on productivity, bringing together different perspectives; improving outcomes, innovation, and problem-solving capacity; and leading to a healthier work environment.
For this to be ethical in the scientific community, it should be accompanied by a reference to the aforementioned study.
Thank you, Aaron, for the pointer! I will contact Molly soon.
And looking forward to talking to you!
Thanks for the suggestion! I have updated the references in the sign-up form.
Thanks for the links.
In the second referenced paper, https://par.nsf.gov/servlets/purl/10347026 (R. Li, P. Pandurangan, H. Frluckaj, and L. Dabbish, “Code of Conduct Conversations in Open Source Software Projects on Github,” Proc. ACM Human-Computer Interact., vol. 5, no. CSCW1, pp. 1–31, 2021), I didn’t find the claim that “the usage of Code of Conduct (CoC) has positive effects on productivity” in either the abstract or the conclusion.
The abstract and conclusion of the first paper, https://mcis.cs.queensu.ca/publications/2017/saner.pdf (P. Tourani, B. Adams, and A. Serebrenik, “Code of conduct in open source projects,” SANER 2017 - 24th IEEE Int. Conf. Softw. Anal. Evol. Reengineering, pp. 24–33, 2017), don’t support that claim either.
So this invitation to the research study looks loaded/biased to me.
Incidentally, I urge everyone in this community to always mark proprietary software when suggested or used in related context. I understand there are so many practical and systemic reasons as to why people at various institutions end up using Zoom and Google Forms and other proprietary tools (often because someone else at the institution required it). However, there are FLO options, and there is a good medium between saying nothing and being a FLO-purist. The reasonable option is to acknowledge explicitly the situation.
So, I want to call out Google Forms in this case.
Let’s advocate for best-practices including the acknowledgement of using proprietary software. We should be doing consciousness-raising on technology choices even when we make practical compromises.
I appreciate your critique. To mitigate the bias, I have revised the text to present a comprehensive summary of the articles including both positive and negative results and opinions.
For this study, we do not make any assumptions about whether the CoC can enhance productivity. Our objective is to systematically investigate whether and to what extent “these solutions have an impact on the outcome of the OSS, encompassing collaboration efficiencies, code qualities, and sustainability,” as outlined in the project description.
Thanks again for your help and time.
Thank you for bringing this to our attention.
We typically use Zoom, Microsoft Survey, or Google Forms by default, as these have been purchased by the institution.
Would you mind sharing some recommended FLO options to replace Zoom and Google Forms?
We are open to making a switch if it is more widely accepted and convenient for the community.
“these solutions have an impact on the outcome of the OSS, encompassing collaboration efficiencies, code qualities, and sustainability,”
I would simplify this to “these papers suggest that CoC affects OSS collaboration efficiency, quality of code and sustainability”, but it is already better than the previous version. Thanks for correcting the mistake.
First, I want to reiterate the consciousness-raising point. Ideally, everyone is thinking about this issue because it is being stated regularly. So, even if nobody knows of FLO tools to switch to, it is best practice to explicitly note the tools. So, marking that Zoom, MS Survey, and Google Forms are proprietary is a huge step. There can then be more opportunities to prompt the question of whether anyone knows of other options. Where options are not available, this at least raises the issue for the community to be thinking about, hopefully encouraging work on creating new options.
That said, what I know of options in this case:
Jitsi Meet is a 100% uncompromised FLO alternative to Zoom that is almost completely on par (and arguably even has some advantages). It doesn’t have some Zoom features, like live multilingual translation, but even that can have workarounds. For your purposes, it is probably all you need. And if institutions like yours actually paid the Jitsi team the sort of amounts that currently go to Zoom, it could easily surpass Zoom’s features. It’s freely usable as is, though. Besides the main site, your institution can host its own Jitsi with settings optimized to your needs.
There’s also https://bigbluebutton.org, which is decent and has some differences. Jitsi is the most comparable to Zoom, though.
For Google Forms and MS Survey, there are many FLO options; forms are relatively trivial. From a quick search, it looks like the go-to for many people is Nextcloud Forms, part of the many functions in Nextcloud. There’s an older tool called LimeSurvey. And https://ohmyform.com/, Yakforms, and LiberaForms all look promising. Various NoCode-style tools (though unfortunately most seem open-core rather than fully FLO) do all sorts of forms and functions. Maybe others can speak to these options; I have less experience here compared to my knowledge of Jitsi.
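For anyone curious what self-hosting Jitsi actually involves: the Jitsi team publishes an official Docker setup (the jitsi/docker-jitsi-meet repository). Here is a minimal sketch of its quick-start, assuming a host that already has Docker and Docker Compose; the exact steps, paths, and port defaults can change between releases, so treat this as an outline and check the upstream self-hosting handbook before running it:

```shell
# Fetch the official Docker setup for Jitsi Meet
git clone https://github.com/jitsi/docker-jitsi-meet
cd docker-jitsi-meet

# Create the environment file from the shipped example,
# then generate strong internal service passwords
cp env.example .env
./gen-passwords.sh

# Create the configuration directories the containers expect
mkdir -p ~/.jitsi-meet-cfg/{web,transcripts,prosody/config,prosody/prosody-plugins-custom,jicofo,jvb}

# Bring up the stack in the background
docker compose up -d
```

Settings such as ports, the public domain, and authentication live in `.env`; for real institutional use you’d also want a proper domain name and TLS certificates in front of it.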
Thank you so much for the informative reply. It’s truly helpful.
I’ll definitely try them out soon.
Thanks, @emmairwin, for noting the gender list! I really appreciate that and I should’ve caught it.
And thanks @wolftune for mentioning your past study with Molly de Blanc! I don’t think your views on CoCs are totally different from everyone else’s. I agree that most CoCs aren’t restorative and that they are not always the best option. As for tool choice: I agree that acknowledgement is nice. We don’t have a policy of mandating acknowledgement of non-FLO tools, but I do note that at least for the podcast.
Personally, I have not found Jitsi or BBB to be performant at scale for things like this, where participants are coming from different devices. That having been said, @shuiblue, the Software Freedom Conservancy has a BBB instance they are happy for us to use if anyone wants to use that instead of Zoom for a given interview (like you, @wolftune?). I can talk to you about how to set that up.
Did I say “totally”? I have tons of overlapping views with some others, but I’ve talked to a lot of people and seen the general ideas, and very few talk about the topic the way I do. Yes, every view I have is available somewhere out there. Most of what I think came from other places (incidentally, one of the sources within the FLO world that I have referenced often is this article by Christie Koehler: Adopting a code of conduct is an adaptive challenge not a technical one - Authentic Engine, which doesn’t cover my views but is a part of them). Anyway, the main reason I’m not talking more loudly about the details is that my views are still evolving, and at the current rate it looks like I might have something ready to be more noisy about in a couple of years.
And just to avoid any confusion: I did not do any study with Molly, I just did the same sort of interview for her and her research as I will be doing this week with @shuiblue . My own work has not been in the form of academic research.
We don’t have a policy of mandating acknowledgement of non-FLO tools
I tend to advocate for conscious voluntary social norms rather than imposed policies (at least for sure when I’m first bringing up an idea at all), but if a community thoughtfully decides with something like consensus to have such a mandate (which I would suggest fits in a CoC if it is a real mandate), I could see value in it.
The open-source way is to at least try to report the problem and, at most, to share the findings in public. Have you tried that?
I’d prefer to focus on this thread here – the study of CoCs in open source – and not on the license or best use of the tooling. I find that conversations around what tools to use can be distracting from the original conversation.
Yes, please spin the Jitsi and BBB discussion off into a new topic. You have the rights to do that; I don’t.
I will bring this back on topic! (Though this is tangential to the specific study being announced, and if it’s better moved to a new topic here, that’s okay with me.)
One of my views about CoCs is that they should include things about respecting conversation topics! I think CoCs should be subtle and thorough enough to support the flagging of posts that derail topics, even entirely good-faith tangents. The result should be a response from moderators, or people editing their own comments, that gets the topic back on track and moves noise away. This does not block conversation; it facilitates more effective conversation.
So, a CoC can include respecting topics and moving tangents to places where they don’t disrupt a topic. The problem is that people think CoCs are so punitive/threatening (not restorative) that they overreact to being flagged for off-topic postings. I want to see CoCs more readily used, with more of us experiencing getting flagged and experiencing the CoC process (I know I have ways to improve, and I want a CoC that results in me getting flagged more!), and thus able to have personal insight and feedback to continue iterating on and improving a CoC. If the CoC only applies when things get bad, it’s not active enough and not present enough, and it reinforces the unhealthy impression that it’s a threatening escalation to bring up the CoC. (And if the CoC is only enforced by moderators rather than through an interactive process in the community itself, then it is escalation: it is calling in the police. So, the resolution process needs to be led not by authority but by helpful facilitation and tooling. Authority can step in when the regular process is failing.)